WO2022219313A1 - System and methods for automatically applying reasonable adjustments


Info

Publication number: WO2022219313A1 (PCT/GB2022/050904)
Authority: WO (WIPO, PCT)
Prior art keywords: assessment, user, cognitive, parameter, adjustment
Application number: PCT/GB2022/050904
Other languages: French (fr)
Inventors: Christopher James QUICKFALL, Louise Marie KARWOWSKI, William Rupert BROWN
Original assignee: Cognassist (UK) Ltd
Application filed by Cognassist (UK) Ltd
Publication of WO2022219313A1


Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/10 - Office automation; Time management
    • G06Q30/00 - Commerce
    • G06Q30/02 - Marketing; Price estimation or determination; Fundraising

Definitions

  • a system for automatically adjusting a parameter of an assessment to accommodate individual user cognitive capabilities comprising: a) a cognitive ability data (“CAD”) database comprising a memory for storing results from a cognitive assessment test for the user, wherein said results define one or more cognitive domains of the user; b) an administrator server in communication with the CAD database, the administrator server comprising 1) an administrator computing device, 2) an assessment module wherein an assessment is stored therein, the assessment comprising one or more questions or processes for the user, and one or more parameters; and 3) a reasonable adjustment (RA) module, wherein the administrator computing device is configured to provide a first set of software instructions to the RA module, wherein the administrator server comprises one or more processors to execute the first set of instructions to: i) provide an adjustment criteria to a parameter of the one or more parameters, wherein the adjustment criteria corresponds to a cognitive capability; ii) retrieve the one or more cognitive domains for the user via the CAD database; and iii) adjust the parameter based on the adjustment criteria and the one or more cognitive domains, wherein the cognitive domain corresponds to the cognitive capability.
  • the one or more parameters comprise a presentation parameter, a function activation parameter, a time-based parameter, a process/journey parameter, a marketing parameter, or a combination thereof.
  • the presentation parameter comprises font type; font size; font color; spacing between letters, words, lines, and/or paragraphs; background color of an assessment (e.g., as displayed through a computing device as described herein); providing a digital screen overlay of any specific color; displaying the assessment (e.g., via a computing device as described herein) with an alternative template or Cascading Style Sheet (CSS) or other form of screen structure; limiting a maximum amount of text displayed to the user at any given time (e.g., max 5 lines) before moving on to the next segment of text; a change in tone of voice of presented text (e.g., from descriptive to emotional or analytical); or a combination thereof.
  • the function activation parameter comprises an ability for the user to activate speech-to-text software for an assessment (e.g., to enable speech to be recorded instead of typing); an ability for the user to access a dictionary or thesaurus; activation of subtitles; activation of audio descriptions; phonetic spell-correction software; a sign-language avatar (e.g., one that uses British Sign Language) if video is used; an image-based sign language substitute; or a combination thereof.
  • the time-based parameters comprise enforcing a break during the assessment; time allowed for each question, section of questions, section of the assessment, and/or the entire assessment, including, for example, time allowed to read, type, and/or think; enforcing a break during the test at a predetermined time (e.g., prior to the commencement of the next question after 50% of the time has lapsed and/or after a finite amount of time such as 40 mins); enforcing a break for a finite period (e.g., 20 mins); forcing slower progression by the user through a task (e.g., by monitoring how fast a user is responding and, if beyond a predetermined maximum speed, preventing a response within a specific timeframe from an event, such as showing a question, and/or activating a pop-up recommending the user slow down); or a combination thereof.
  • the marketing parameter comprises modifying the baseline question wherein the modification relies more on (“markets”) a determined cognition than the baseline question.
  • the marketing comprises presenting the question to rely more on the user’s quantitative / data cognition compared to the baseline question.
  • the question may market numbers or equations to present part or all of the baseline question.
  • the marketing comprises presenting the question to rely more on the user’s emotional cognition compared to the baseline question.
  • the question may market a story that appeals to the user’s emotions to present part or all of the baseline question.
  • the marketing comprises presenting the question to rely more on the user’s visual cognition compared to the baseline question.
  • the question may market an image to describe part or all of the baseline question.
  • the marketing parameters may be directed to commercial advertising wherein the advertising is modified to appeal to the user’s preferred cognition.
  • the adjustment criteria defines a maximum and/or minimum threshold value for the cognitive domain of the one or more cognitive domains. In some embodiments, the adjustment criteria is based on 1) statistical properties relating to one or more cognitive domains, 2) relational attributes relating to two or more cognitive domains, or 3) both. In some embodiments, the relational attributes relating to two or more cognitive domains comprise a relationship between the two or more cognitive domains.
  • the adjustment of the parameter comprises a binary adjustment, wherein the parameter is either activated or not as presented to the user with the assessment. In some embodiments, the adjustment of the parameter further comprises 1) a graduated adjustment, 2) an options adjustment, or 3) both. In some embodiments, the options adjustment comprises two or more options for adjusting the parameter. In some embodiments, the graduated adjustment comprises any number of ranges by which a parameter can be adjusted based on the corresponding cognitive domain for the user.
  • the system further comprises an Evaluator Module configured to: a) receive results of the assessment from the user; b) evaluate the results of the assessment from the user and a plurality of other users; c) determine the impact of the parameter adjustment based on the results of the assessment from the plurality of other users and user; and d) modify the adjustment criteria if said impact corresponds to a negative impact or insufficient impact based on the results of the assessment from the plurality of users and the user as relating to the parameter adjustment.
  • the adjustment criteria is modified automatically, or manually by an administrator.
  • a method for automatically adjusting a parameter for an assessment to accommodate a user’s cognitive capabilities comprising: a) providing an adjustment criteria for the parameter of the assessment stored on an administrator server, wherein the adjustment criteria corresponds to a cognitive capability; b) retrieving a cognitive domain of the user via a cognitive assessment data (CAD) database, wherein the CAD database is in communication with the administrator server; c) adjusting the parameter based on the adjustment criteria and cognitive domain, wherein the cognitive domain corresponds to the cognitive capability; and d) administering the assessment to the user via a user computing device in communication with the administrator server, wherein the assessment is administered with the adjusted parameter.
  • the method further comprises: a) analyzing the results from the assessment from the user and a plurality of other users; b) determining an impact of the parameter adjustment; and c) modifying the adjustment criteria based on the impact.
  • said modifying the adjustment criteria is based on the impact 1) having a negative impact on the results of the assessment as compared to without adjusting the parameter, or 2) having no impact or insufficient impact on the results of the assessment as compared to without adjusting the parameter.
  • the method further comprises receiving results of a cognitive assessment test by the CAD database, wherein said results define one or more cognitive domains for the user.
  • the one or more parameters comprise a presentation parameter, a function activation parameter, a time-based parameter, a process/journey parameter, a marketing parameter, or a combination thereof.
  • the adjustment criteria defines a maximum and/or minimum threshold value for the cognitive domain of the one or more cognitive domains.
  • the adjustment criteria is based on 1) statistical properties relating to one or more cognitive domains, 2) relational attributes relating to two or more cognitive domains, or 3) both.
  • the relational attributes relating to two or more cognitive domains comprise a relationship between the two or more cognitive domains.
  • the adjustment of the parameter comprises a binary adjustment, wherein the parameter is either activated or not as presented to the user with the assessment.
  • the adjustment of the parameter further comprises 1) a graduated adjustment, 2) an options adjustment, or 3) both.
  • the options adjustment comprises two or more options for adjusting the parameter.
  • the graduated adjustment comprises any number of ranges by which a parameter can be adjusted based on the corresponding cognitive domain for the user.
  • the method further comprises evaluating each parameter of the assessment to identify one or more parameters associated with an adjustment criteria. In some embodiments, the method further comprises adjusting a plurality of parameters, wherein each adjustment is based on a corresponding adjustment criteria and cognitive domain, wherein each cognitive domain corresponds to a cognitive capability associated with the corresponding adjustment criteria.
  • a system for a computer-implemented method for predicting a plurality of outcomes for an assessment taken by a user comprising: a) providing a cognitive dataset associated with the user; b) providing a 3rd party dataset associated with the user; c) transforming the cognitive dataset and the 3rd party dataset into a combined dataset; d) generating a predictive model from the combined dataset; and e) using the predictive model to predict the plurality of outcomes for the assessment.
  • the transforming the cognitive dataset and the 3rd party dataset into a combined dataset comprises auditing, merging, and/or cleaning the combined dataset.
  • the dataset comprises a plurality of search indexes.
  • the generating a predictive model from the combined dataset comprises updating the predictive model.
  • the predictive model is updated using a plurality of new cognitive data associated with the user, a plurality of new 3rd party data associated with the user, or a combination thereof.
  • the plurality of outcomes for the assessment comprises a plurality of adjusted parameters for the assessment. In some embodiments, the plurality of adjusted parameters improves the outcome of the assessment compared to an assessment with no adjusted parameters.
  • FIG. 1 depicts an exemplary computer based system comprising an administrator server, a cognitive ability data (CAD) database, and one or more user devices, in accordance with some embodiments.
  • FIG. 2 depicts another illustration of the components of the computer based system from FIG. 1, in accordance with some embodiments.
  • FIG. 3 depicts the steps and components for generating a user cognitive ability data, in accordance with some embodiments.
  • FIG. 4 depicts an exemplary method for adjusting parameters of an assessment, in accordance with some embodiments.
  • FIG. 5 depicts the method of FIG. 4, further comprising a machine learning process, in accordance with some embodiments.
  • FIG. 6 depicts an exemplary linear regression model for determining a graduated adjustment of a parameter, in accordance with some embodiments.
  • FIG. 7 depicts an exemplary illustration of a multiple regression equation and model for determining eligibility of a parameter adjustment, in accordance with some embodiments.
  • FIG. 8 depicts an exemplary illustration of a relationship between an individual user response time and frequency, in accordance with some embodiments.
  • FIG. 9A depicts an exemplary illustration of a relationship between a frequency distribution of a relative ratio between a CAD score for two cognitive domains (a relational attribute), in accordance with some embodiments.
  • FIG. 9B depicts an activation parameter based on a threshold for the relational attribute from FIG. 9A, in accordance with some embodiments.
  • FIG. 10 depicts an exemplary relationship between extra time allotment and a cognitive domain percentile for a user based on three cognitive domains, in accordance with some embodiments.
  • FIG. 11 provides an exemplary flow chart depicting an exemplary relationship between various components of a system described in an embodiment herein.
  • FIG. 12 depicts an exemplary machine learning flow, in accordance with some embodiments.
  • FIG. 13 provides an exemplary computing system capable of implementing the systems and methods of the present disclosure.
  • range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the disclosure. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
  • assessment generally refers to a task or activity performed by a user, wherein the assessment comprises one or more parameters that can be adjusted to accommodate the cognitive abilities of the user.
  • assessment may comprise a test from any type of organization (e.g., academic institution, government agency, employer, retailer, etc.), wherein the test may comprise any type of questions and/or formats and may be for any type of subject area or industry.
  • assessment may also comprise an application or software for use by a user in connection with an employment, any personal/leisure activity (e.g., a computer game), and/or business activity.
  • assessment may also comprise an interactive platform (such as a website) for a product, such as an e-commerce product.
  • the term “reasonable adjustment” as used herein generally refers to an adjustment of a parameter for an assessment to compensate for one or more cognitive capabilities of a user. As described herein, in some embodiments, such cognitive capabilities may place the user at a disadvantage, or be considered a disability, in comparison with a general user population. In some embodiments, such cognitive capabilities of the user may be more advanced or superior to a general user population.
  • the term “parameter” as used herein generally refers to a variable feature relating to an assessment.
  • Each assessment may include any number of parameters that can be adjusted, as described herein.
  • the categories of types of parameters include presentation parameters (e.g., font or background of an assessment as presented to a user as displayed through a computing device), time-based parameters (e.g., amount of time allotted for a test), functional availability parameters (e.g., option to turn on subtitles with an assessment presentation), process/journey parameters, and/or marketing parameters (e.g., modification of part or all of the question).
  • the term “cognitive domain” generally refers to a particular cognitive ability or attribute (e.g., reading speed, verbal comprehension, etc.) identified with a user.
  • the cognitive domain covers an entire range of the cognitive ability or attribute.
  • the cognitive domain may be identified as being a disability or deficient cognition (via for example, the cognitive assessment data), and/or the cognitive domain may be identified as being superior or exceeding an average user ability.
  • Each cognitive domain may be identified with a cognitive level, which can be obtained via a statistical output based on a cognitive assessment data for a user, or can be obtained based on a relationship between the cognitive level of two or more cognitive domains of the user.
  • the term “user” generally refers to a test taker, an employee, an individual, a group of individuals, or any combination thereof.
  • the term “about” in some cases generally refers to an amount that is approximately the stated amount.
  • the term “about” generally refers to an amount that is greater or less than the amount by 10%, 5%, or 1%, including increments therein.
  • Users seeking such compensatory measures are required to take and submit results from a cognitive assessment test prior to taking an assessment (e.g., 2 weeks prior to the assessment date), so as to ensure an administrator can verify the validity of the cognitive assessment, review the results, and adjust assessment parameters as needed, and as available, in time for the user when taking the assessment.
  • Such manual adjustment of parameters by one or more administrators on a case-by-case basis for numerous individual users often results in inconsistent and inequitable adjustments that can vary across different users.
  • manual adjustment of parameters for numerous individual users can become burdensome on the administrators, including being a lengthy process.
  • the systems and methods comprise individual users taking a cognitive assessment test prior to taking an assessment (e.g., a test), wherein results from the cognitive assessment test are used to generate cognitive abilities data (CAD) for each user, and which is stored in a database so as to be retrieved for future assessments taken by the user.
  • the CAD for each user identifies one or more cognitive domains associated with the user for which a disability or deficient cognition is identified, and which may be compensated for through assessment parameter adjustments.
  • the CAD for each user identifies one or more cognitive domains associated with a user that are eligible for assessment parameter adjustments (e.g., a more advanced journey module, a more advanced learning module, etc.).
  • an assessment administrator (“administrator”) does not need to review or validate the CAD for each user, but rather sets up and provides rules for a given assessment that specify what parameters are to be adjusted based on certain cognitive domains being present for a user.
  • the corresponding CAD and associated cognitive domains are automatically compared against the rules for an assessment, wherein adjustments are made to the assessment parameters to compensate for the respective cognitive abilities and attributes of the user that 1) would otherwise place such users at a disadvantage for the assessment, as compared with a general population of users, and/or 2) would otherwise fail to adequately stimulate the user or provide an effective assessment (e.g., test, learning module, employment task).
  • adjustments to compensate for such deficiencies and/or disabilities in cognitive abilities are referred to as reasonable adjustments.
  • the system and methods disclosed herein comprise using a computer implemented system comprising an administrator server, a CAD database, and one or more user computing devices (“user device”).
  • the administrator server comprises one or more administrator computing devices, a Reasonable Adjustments (“RA”) module, an Assessment Module, and optionally an Evaluation Module.
  • the modules are stored on a memory on the server, or a memory on the one or more administrator computing devices.
  • an assessment is stored on the Assessment Module and configured to be accessed by one or more users via a user device.
  • the assessment comprises one or more parameters, as described herein, for which the RA module is configured to apply reasonable adjustments based on instructions provided by the administrator.
  • the parameters are automatically adjusted by the computer implemented system based on the user CAD retrieved from the CAD database and the instructions provided by the administrator for an assessment.
  • Cognitive Ability Data (CAD)
  • the cognitive capabilities and attributes for a user are provided via cognitive ability data (CAD), and are identified as respective cognitive domains for the user.
  • cognitive capabilities and attributes for a user that are identified as being deficient per the respective CAD are identified as deficient cognitive domains for the user.
  • cognitive capabilities and attributes for a user that are identified as being superior per the respective CAD are identified as superior cognitive domains for the user.
  • CAD is generated based on a cognitive assessment test taken by the user.
  • the cognitive assessment test comprises a format and/or questions as known in the art.
  • CAD is generated based on one or more cognitive assessment tests taken.
  • the one or more cognitive assessment tests are taken in series.
  • the cognitive assessment test is a computer based test.
  • FIG. 3 provides an exemplary flow chart for generating CAD.
  • a user takes one or more cognitive assessment tests 302.
  • the user also provides characteristics information 304, such as demographic data.
  • demographic data comprises gender, geography, ethnicity, handedness, educational attainment, date of birth, others, or a combination thereof.
  • a 3rd party or a user also provides historical information 305 through, e.g., an application programming interface (API).
  • historical information comprises non-cognitive data, e.g., the user’s school attendance record, prior courses, prior course grades, grade level, or grade level completion.
  • 3rd party data further comprises demographic data, learner outcomes data, product adoption data, health outcome data, scientific data, publicly available data, consumer behavior data, attitudinal response data, social media data, etc., or a combination thereof.
  • the results from the one or more cognitive assessments, the characteristics information, and/or the historical information are processed to generate 306 said cognitive ability data (CAD).
  • such CAD is then provided and used by systems and methods described herein for determining what assessment parameters need to be adjusted.
  • the CAD is obtained in digital format.
  • the CAD comprises a pre-derived dataset that has undergone processes ensuring the scientific validity of said cognitive assessment and of the CAD generation.
  • the systems and methods disclosed herein do not alter the CAD as obtained, but only use the CAD in determining what adjustments to make to parameters for an assessment.
  • the CAD comprises an indication of a specific cognitive ability.
  • the CAD comprises a relative measure of a cognitive ability for a user, by providing such relative measure as one of a number of statistical formats such as T-scores, z-score, standard scores, percentiles, scale scores, scaled scores, standard deviations, stanines, percentage stanines, other such statistical mathematical method, or a combination thereof.
  • each score from a CAD for a user represents a single attribute (or cognitive ability) of the individual user’s cognitive abilities, corresponding to a given cognitive domain.
  • one or more cognitive abilities will be dependent on more than one attribute identified with a score (e.g., a given cognitive domain may be determined based on the score for two or more individual cognitive attributes).
  • there are relational attributes or relational abilities (between two or more cognitive attributes) which are functions of one another and may be represented as ratios, frequencies, fractions, proportions, means, medians, or modes.
  • the CAD comprises relational attributes, which may comprise intervals (the difference between two scores) or ratios (the division of one score by another), and which may be labelled “difference scores” or “discrepancy scores/analysis.”
  • multiple statistical techniques are available for determining such difference scores, wherein differences between the scores for two cognitive attributes within a cognitive profile are analyzed to determine whether they are within a normal or expected range, thereby identifying an adjustment (based on this difference).
  • such information is useful even when an individual does not qualify for a difficulty and/or disability diagnosis.
  • the unit of measure is usually a standardized form (as described herein), or simply difference scores with base-rate information.
  • standardized scores used for relational attributes such as difference scores may be expressed, as an example, as “the difference between cognitive domain 1 and cognitive domain 2”, where:
  • n is a CAD score for a given cognitive domain;
  • X is the number of cognitive domains considered for this relational attribute;
  • Σ(X) is the sum of each score of the cognitive domains considered for this relational attribute; and
  • min(X) is the smallest score of all the scores for the cognitive domains considered for this relational attribute.
  • a threshold may be specified for a relational attribute, wherein certain parameter adjustments are triggered if the calculated relational attribute score is greater than or less than the threshold. For example, the threshold may be met by a difference score greater than 23 standard score points.
  • FIGs. 9A and 9B provide exemplary depictions of an output for a relational attribute.
  • FIG. 9A provides a depiction as to the frequency of scores calculated for a relational attribute, as compared with a user population, with the threshold 23 identified.
  • FIG. 9B depicts the triggering of a parameter that becomes available or on once the threshold of 23 is met.
  • the threshold depends on how closely the cognitive domains correlate. For example, if cognitive domain 1 and cognitive domain 2 correlate very highly, even a small difference between scores on these two cognitive domains might be clinically or educationally relevant (e.g., trigger an adjustment of one or more parameters). By contrast, if cognitive domain 3 and cognitive domain 4 do not correlate much at all, then a large difference in the corresponding cognitive scores may not be relevant at all.
  • a computer implemented system processes raw results from a user’s cognitive assessment test to generate corresponding CAD and optionally stores said generated CAD in the CAD database.
  • the CAD is generated based on user responses to the cognitive assessment test, wherein content of the responses and/or other attributes related to the responses are considered for generating the CAD.
  • cognitive data is based on response attributes such as response accuracy, speed of providing accurate responses, providing no response, and/or speed of providing inaccurate responses.
  • error rates and error types are relevant for some assessments, which may not always correlate to the inverse of correct responses, but can include the number of erroneous responses to an intentionally erroneous prompt (as opposed to errors on normal questions), or errors that are related in some way.
  • verbal learning tests will sometimes record words that were incorrectly recalled but semantically related to the target words.
  • response attributes are further categorized based on specific cognitive abilities and/or cognitive biases, thereby generating cognitive data with regards to relational abilities (between cognitive abilities).
  • the cognitive assessment test output is a set of values for pre-determined cognitive abilities and attributes. In some embodiments, these values become the Cognitive Ability Data (CAD).
  • cognitive abilities identified as being deficient or as a disability are identified as deficient cognitive domains.
  • a user identified with a deficient cognitive domain for reading speed refers to the user requiring more time (e.g., compared to general user population) to read through material provided in an assessment.
  • Another example of a deficient cognitive domain is verbal comprehension, wherein a user may have more difficulty (e.g., compared to general user population) in understanding verbal instructions (e.g., provided in an assessment), and thus an exemplary parameter adjustment would be to provide subtitles.
  • each assessment comprises one or more parameters that can be adjusted based on rules specified by an administrator.
  • an assessment comprises a test from any type of organization (e.g., academic institution, government agency, employer, retailer, etc.), wherein the test may comprise any type of questions and formats and may be for any type of subject area or industry.
  • an assessment comprises an application or software for use by a user in connection with an employment, any personal or leisure activity (e.g., a computer game), and/or business activity.
  • the assessment may refer to the software used and presentation of font as part of an employment, wherein productivity may be improved based on adjustment of certain parameters (e.g., font size, color, etc.).
  • an assessment refers to a product website or other software for presenting the product.
  • the assessment may refer to a website or interactive platform for an e-commerce product, wherein parameters relating to said website or interactive platform can be adjusted, based on a user’s CAD, so as to adapt specific features, journeys, and/or actions to support user experience.
  • the one or more parameters for assessments can be categorized, such as presentation parameters, function activation parameters, time-based parameters, process/journey parameters, or marketing parameters.
  • a given assessment comprises presentation parameters, function activation parameters, time-based parameters, process/journey parameters, and/or marketing parameters, or a combination thereof.
  • presentation parameters comprise parameters relating to the presentation of an assessment, including specific characteristics of the assessment.
  • presentation parameters comprise font type; font size; font color; spacing between letters, words, lines, and/or paragraphs; background color of an assessment (e.g., as displayed through a computing device as described herein); providing a digital screen overlay of any specific color; displaying the assessment (e.g., via a computing device as described herein) with an alternative template or Cascading Style Sheet (CSS) or other form of screen structure; limiting a maximum amount of text displayed to the user at any given time (e.g., max 5 lines) before moving on to the next segment of text; a change in tone of voice of presented text (e.g., from descriptive to emotional or analytical); or a combination thereof.
  • function activation parameters comprise parameters relating to a function or operation that is activated or becomes available to be activated by the user based on a threshold adjustment criteria being met, as described herein.
  • function activation parameters comprise an ability for the user to activate speech-to-text software for an assessment (e.g., to enable speech to be recorded instead of typing); an ability for the user to access a dictionary or thesaurus; activation of subtitles; activation of audio descriptions; phonetic spell-correction software; a sign-language avatar (e.g., one that uses British Sign Language) if video is used; an image-based sign language substitute; or a combination thereof.
  • time-based parameters comprise parameters relating to time as it relates to an assessment.
  • time-based parameters comprise enforcing a break during the assessment; time allowed for each question, section of questions, section of the assessment, and/or the entire assessment, including, for example, time allowed to read, type, and/or think; enforcing a break during the test at a predetermined time (e.g., prior to the commencement of the next question after 50% of the time has lapsed and/or after a finite amount of time such as 40 mins); enforcing a break for a finite period (e.g., 20 mins); forcing slower progression by the user through a task (e.g., by monitoring how fast a user is responding and, if beyond a predetermined maximum speed, preventing a response within a specific timeframe from an event, such as showing a question, and/or activating a pop-up recommending the user slow down); or a combination thereof.
  • an adjustment of background and text colors for an assessment software application (e.g., Microsoft® Word) can be specified.
  • the systems and methods described herein are configured to determine the most appropriate background and text colors to improve comprehension and thereby instruct the software application (e.g., Microsoft® Word) which colors to adopt.
  • the systems and methods described herein are configured to instruct the software application to specify a font type, a font size, and letter, word, and line spacing, all of which may improve a cognitive ability.
  • the software application corresponding to an assessment comprises a browser, wherein adjustment of typeface comprises changing a webpage’s cascading style sheets (CSS) formatting to better support a viewer.
  • process/journey parameters are configured to change the experience journey that a user is taken on as the user moves through the software (e.g., through a learning module, a given set of tasks, etc.).
  • a user may have the questions asked in a different order, possibly based on where their cognitive strengths or weaknesses lie (e.g., identified via CAD as described herein), so that they are less cognitively exhausted as they progress through the test.
  • such questions are asked in a different order by staggering questions that are heavy on parts of their cognition where the user is identified as being weaker and spacing them out with questions the user is likely to find less intense due to a natural strength in the processing tasks involved.
  • the assessment environment is an e-learning course a school has purchased to teach about the speed of light.
  • vectors may be used to communicate speed; however, for a language-strong individual, comparisons to real life may be applied instead.
  • the process/journey parameters are focused not only on presentation adjustments, but also on the process the software takes the user through, which affects outcomes.
  • a system described herein identifies the optimum process parameters as well as other parameters to be able to markedly improve outcomes.
  • the marketing parameter comprises modifying the baseline question wherein the modification relies more on (“markets”) a determined cognition than the baseline question.
  • the marketing comprises presenting the question to rely more on the user’s quantitative / data cognition compared to the baseline question.
  • the question may market numbers or equations to present part or all of the baseline question.
  • the marketing comprises presenting the question to rely more on the user’s emotional cognition compared to the baseline question.
  • the question may market a story that appeals to the user’s emotions to present part or all of the baseline question.
  • the marketing comprises presenting the question to rely more on the user’s visual cognition compared to the baseline question.
  • the question may market an image to describe part or all of the baseline question.
  • the assessment refers to environments other than tests and assessments, such as at a place of employment.
  • adjustment of certain parameters could be used to improve productivity.
  • the CAD could be used to alter the contrast ratio between font and background of an application software by changing the color of each to be more sympathetic to the structure of cones and rods in a user’s retina, thereby reducing strain and increasing the length of time the user is able to concentrate on the screen.
  • each assessment receives rules (e.g., instructions as described herein) that provide criteria for adjusting (adjustment criteria) one or more assessment parameters based on the CAD for a given user.
  • the adjustment criteria for an assessment parameter corresponds with a deficient cognitive domain or superior cognitive domain, as described herein.
  • an administrator specifies an increase in time permitted for an assessment or a section of an assessment for those users identified with a deficient cognitive domain relating to reading speed.
  • the adjustment criteria may specify a threshold of the deficient cognitive domain, as described herein, to trigger an assessment parameter adjustment.
  • a user with a deficient cognitive domain correlating to reading speed would trigger an assessment parameter adjustment.
  • a user may be identified with two deficient cognitive domains, and one normal cognitive domain, that together are used to correlate an assessment parameter adjustment criteria. In this case, depending on the weight of each cognitive domain for the correlation, the assessment parameter is adjusted accordingly (e.g., see Example 3).
  • the administrator will identify all the available parameters. As described herein, there are a plurality of types of assessments that may be available, and thus, the administrator may consider the specific subject area and/or industry of the assessment when determining the parameters applicable for being associated with an adjustment criteria. In some embodiments, different assessments will have cognitive abilities that are more pertinent and important for the given subject area, and thus corresponding parameters to aid those pertinent and important cognitive abilities would not be provided with an adjustment criteria, or would be provided with a minimal adjustment criteria. For example, an assessment that is an English proficiency test for immigration would not provide parameter adjustments such as automatic spellchecks or dictionary support. In another example, assessments for industries where speed is a requirement (e.g., pilot, air-traffic controller) would not provide parameter adjustments for extra time.
  • the administrator will evaluate each parameter one by one and assign rules (instructions) as needed.
  • administrators will provide rules to set parameters by establishing a minimum/maximum threshold of cognitive performance, which will likely be set by one or both of (1) statistical properties (e.g., standard deviations of the population for one or more cognitive abilities or attributes) (e.g., see FIG. 6), and/or (2) cognitive properties, such as relationships between two or more cognitive data-points (e.g., see FIG. 7).
  • a reasonable adjustment of a parameter is configured to be activated by just a statistical property, by just a cognitive property, or both.
  • a reasonable adjustment module is used to calculate an adjustment for each parameter for an assessment with respect to a given user.
  • mathematical models are used to calculate whether an assessment parameter is to be adjusted and by how much.
  • metadata is also used in calculating a parameter adjustment.
  • metadata comprises Question Data and Assessment Data.
  • Question Data comprises data specifically related to questions / prompts as part of an assessment.
  • Question Data comprises a) question word count, b) question format (e.g., open text, multiple choice, equation, etc.), c) whether a question includes video, and/or d) whether a question includes audio.
  • Assessment Data comprises data specifically related to the assessment itself or a section of the assessment.
  • Assessment Data comprises a) total length of an assessment, b) number of questions as part of the assessment, and/or c) assessment modality (e.g., written, oral, multiple choice, etc.).
  • the calculation may be that a time-based adjustment only applies where s > 1.5 or n > 20.
  • the adjustment criteria is targeted to specific areas of the assessment, where they are deemed most needed and which reduces any unintended benefits of a parameter adjustment.
  • a parameter is binary, in that it is either on (e.g., activated) or off (e.g., not activated).
  • An adjustment criteria would specify what cognitive domain (e.g., deficient cognitive domain, superior cognitive domain) would either activate or deactivate said parameter.
  • An example of a binary parameter is the option for a user to activate a speech-to-text option for the assessment, based on the corresponding cognitive domain (per the user CAD described herein) meeting the required threshold, and thereby enabling a user to provide a response orally and have it entered in text.
  • a parameter is both binary and graduated, wherein the parameter becomes activated upon a cognitive domain meeting a required threshold, and once activated, the parameter adjustment is graduated.
  • mathematical models such as linear regression, non-linear equations, or other types are used for determining a specific amount of parameter adjustment (e.g., a specific amount of extra time).
  • under a coarse graduation scheme, a first user whose deficient cognitive domain warrants only 26% extra time would be granted the same allotment (e.g., 50%) as a second user warranting 49% extra time, thereby providing an advantage to the first user not only compared with the second user, but compared with all users.
  • the systems and methods described herein enable the application of far more accurate graduations based on mathematical modelling of the population, in theory achieving almost infinite graduations as opposed to the current 4 graduations (25%, 50%, 75%, and 100%).
  • a parameter is both binary and provides multiple options, wherein the parameter is either available or not available, or wherein a parameter comprises multiple options, depending on a corresponding cognitive level of a cognitive domain of the user.
  • a parameter reflects a color displayed for a given question or section of an assessment, wherein the three options available for adjustment are one of red, green or blue, each option correlating with a cognitive level for a corresponding cognitive domain.
  • Other options that may be available include a choice of font type, font size, playback speed of text-to-speech voice, etc.
  • the system and/or methods described herein would automatically determine an optimum font type and/or size for the reader to process and assign it to the corresponding assessment, and/or automatically determine an optimum text color/background color and assign this to the assessment.
  • the systems and methods described herein accommodate more options than would be practical for a human to consider, with these options also being more targeted to the specific needs of the individual user.
  • the administrator will not need to apply adjustments on a case by case basis for the one or more users, as the system and methods described herein will pre-set the adjustment criteria, so as to automate the evaluation of each adjustment against the respective cognitive assessment data (CAD) of each user, and thereby determine if the parameter adjustment applies and optionally its graduation or option.
  • the adjustment of parameters is more rigorous through automation, as described herein, as opposed to manual adjustment wherein a human needs to apply the eligibility criteria.
  • the adjustment criteria is configured to continuously be improved over time by using statistical computation.
  • the initial adjustment criteria (as provided by an administrator) for a parameter adjustment is data driven.
  • the impact of the parameter adjustment is reviewed once enough data is gathered, thereby allowing more complex feedback loops to be introduced over time to improve the parameter adjustment (e.g., machine learning process).
  • the machine learning process can be incorporated with the system and methods described herein, thereby fully automating said systems and methods over time, including totally removing the human from the initial set-up (e.g., setting of adjustment criteria).
  • the systems and methods described herein comprise a machine learning process for modifying the specified adjustment criteria for one or more parameters provided by an administrator.
  • responses from a plurality of users for an assessment are analyzed to determine the effectiveness of the parameter adjustment for one or more parameters.
  • the adjustment criteria can be modified to improve the performance related to the corresponding cognitive domain (e.g., a specific cognitive capability, such as reading speed, which may be deficient or superior compared to a general population of users).
  • no improvement or no impact corresponds to insufficient impact by the parameter adjustment (e.g., an insufficient improvement to adequately compensate for the identified cognitive domain of the user).
  • no improvement corresponds to a negative impact, such as a further worsening of a performance on an aspect of the assessment relating to an identified cognitive domain (e.g., compared to if the parameter had not been adjusted).
  • the adjustment criteria is automatically modified (e.g., by a machine learning process described herein) and further refined as responses from more users are received relating to the same assessment and/or different assessment with comparable parameter adjustments.
  • an administrator will manually update the adjustment criteria.
  • the machine learning process will identify other parameters that may be more impactful in improving the performance related to a corresponding cognitive domain.
  • the machine learning process identifies aspects of an assessment that need to be improved.
  • the system as described herein may determine that a high proportion of users identified with a certain deficient cognitive domain continue to have difficulty with an aspect of an assessment despite corresponding parameters being adjusted. Accordingly, in some embodiments, the system, as described herein, may identify the aspects of the assessment that potentially need to be adjusted. For example, there may be one question out of ten with which more than 50% of users with a deficient cognitive domain related to reading comprehension are likely to perform poorly.
  • a computer-implemented method for predicting a plurality of outcomes for an assessment taken by a user comprising: a) providing a cognitive dataset associated with the user; b) providing a 3rd party dataset associated with the user; c) transforming the cognitive dataset and the 3rd party dataset into a combined dataset; d) generating a predictive model from the combined dataset; and e) using the predictive model to predict the plurality of outcomes for the assessment.
  • the transforming the cognitive dataset and the 3rd party dataset into a combined dataset comprises auditing, merging, and/or cleaning the combined dataset.
  • the dataset comprises a plurality of search indexes.
  • the generating a predictive model from the combined dataset comprises updating the predictive model.
  • the predictive model is updated using a plurality of new cognitive data associated with the user, a plurality of new 3rd party data associated with the user, or a combination thereof.
  • the plurality of outcomes for the assessment comprises a plurality of adjusted parameters for the assessment. In some embodiments, the plurality of adjusted parameters improves the outcome of the assessment compared to an assessment with no adjusted parameters.
  • a system described herein is configured to identify an individual’s cognition or cognitive abilities without the need to map them out via CAD or other cognitive assessments. For example, in some embodiments, wherein a sufficient number of users with profiles/CAD have used a system as described herein, and the corresponding behavior has been tracked (e.g., mouse clicks, time spent on a section, interactions with content, personal settings, etc.) to obtain behavioral data, the system is configured to infer the relationship between behavior and cognition from such behavioral data and to build out inferred cognitive data about new users of the software without profiles/CAD.
  • the inferred cognitive data will initially be broad inferences, such as a more significant cognitive bias (e.g., a heavy non-verbal bias or weak memory), but could quickly become more detailed based on the new user’s engagement with the system and with the adaptations.
  • the system would iterate its inferred understanding of the new users’ cognition and adapt based on this improving understanding.
  • this inferred cognition capability allows the system described herein to make adjustments, measure their effectiveness against expectations, and refine the inferred dataset about a given user, over time building a more and more accurate inferred dataset on said user, while ensuring said user continues to have cognitively personalized experiences tailored to their respective needs.
  • FIG. 1 provides an exemplary depiction of a computer implemented system 100 described herein.
  • the system 100 comprises a network 102, a cognitive ability data (CAD) database 104, one or more administrator servers 106, and one or more user devices (110-1, 110-2, 110-n, wherein n denotes the number of users).
  • Each of the components 104, 106, and 110 may be operatively connected to one another via network 102 or any type of communication links that allows transmission of data from one component to another.
  • the administrator terminal 106 comprises a server. In some embodiments, one or more assessments and associated parameters are stored on the administrator server 106. In some embodiments, the administrator terminal 106 comprises an administrator computing device 107, wherein the one or more assessments are stored. In some embodiments, the administrator server comprises an Assessment Module 109 stored therein, or stored in an administrator computing device 107. In some embodiments, one or more assessments are stored in the Assessment Module 109. In some embodiments, a user stores each assessment taken in their own cloud, or the user may have bought or sourced a software as a service (SaaS) based assessment platform, in which case the assessment is stored in the provider’s cloud.
  • the administrator server 106 comprises a Reasonable Adjustment (RA) module 108.
  • the RA module is configured to adjust one or more parameters of an assessment.
  • an administrator is able to use the administrator computing device 107 to provide instructions (e.g., rules as described herein) to be stored with the RA module for a given assessment.
  • the CAD database is in communication with the administrator server 106, such that the CAD for a user is accessible by the RA module.
  • the RA module is configured to adjust one or more parameters for an assessment based on the instructions stored therein (e.g., from the administrator), and based on the CAD retrieved from the CAD database for a given user.
  • the administrator server comprises a processor configured to execute the instructions provided by the administrator (e.g., instructions provided to the RA module).
  • each parameter of the assessment will be evaluated by the RA module, based on the administrator instructions, and adjusted as needed.
  • a user device 110 comprises, for example, one or more computing devices configured to perform one or more operations consistent with the disclosed embodiments.
  • a user device may be a computing device that is capable of executing software or applications provided by the administrator server 106 (e.g., executing instructions for an assessment and corresponding adjusted parameters).
  • the assessment is hosted by the administrator server on one or more interactive webpages and accessed by the one or more users via the respective user devices 110.
  • the one or more users as described herein, comprise employees of a company, job candidates, job-seekers, students, any individual, a group of individuals, etc.
  • the assessment is presented to a user (e.g., subject) via an interface (e.g., display for a computing device), wherein the assessment is further configured to receive input from the user (e.g., via input devices as described herein, such as keyboard, mouse, camera, microphone, etc.).
  • FIG. 2 provides an exemplary relationship between the administrator server, assessment, and user.
  • the input received from the user for an assessment is received and analyzed by an Evaluation Module (“Eval Module”) 111 (see FIG. 1), stored on the administrator server.
  • the input received from the assessment is reported to one or more end users by the Eval Module.
  • the end users comprise administrators.
  • a user device comprises, among other things, desktop computers, laptops or notebook computers, mobile devices (e.g., smart phones, cell phones, personal digital assistants (PDAs), and tablets), or wearable devices (e.g., smartwatches).
  • a user device can also include any other media content player, for example, a set-top box, a television set, a video game system, or any electronic device capable of providing or rendering data.
  • a user device comprises known computing components, such as one or more processors, and one or more memory devices storing software instructions executed by the processor(s) and data.
  • the system 100 comprises a plurality of user devices.
  • each user device is associated with a user.
  • users may include employees of a company, candidates for a job position, jobseekers, students, teachers, instructors, professors, administrators, individuals, groups of individuals, or a combination thereof.
  • more than one user is associated with a user device.
  • more than one user device is associated with a user.
  • the users are located geographically at the same location, for example a school or testing center, or a particular geographical location. In some instances, some or all of the users and user devices are at remote geographical locations (e.g., different cities, countries, etc.).
  • the system 100 comprises a plurality of nodes.
  • each user device in the system corresponds to a node.
  • when a “user device 110” is followed by a number or a letter, the “user device 110” corresponds to a node sharing the same number or letter.
  • user device 110-1 may correspond to node 1 which is associated with user 1
  • user device 110-2 may correspond to node 2 which is associated with user 2
  • user device 110-n may correspond to node n which is associated with user n, where n may be any integer greater than 1.
  • a node is a logically independent entity in the system. Therefore, in some embodiments, the plurality of nodes in the system can represent different entities. For example, each node may be associated with a user, a group of users, or groups of users.
  • a user device is configured to receive input from one or more users.
  • a user provides an input to a user device using an input device, for example, a keyboard, a mouse, a touch-screen panel, voice recognition and/or dictation software, other such input methods, or any combination of the above.
  • different users provide different input, depending on their CAD and adjusted parameters for an assessment.
  • two-way data transfer capability may be provided between the network, administrator server, and each user device.
  • the user devices are configured to communicate with one another.
  • the user devices communicate directly with one another via a peer-to-peer communication channel.
  • the peer-to-peer communication channel helps to reduce workload on the server by utilizing resources (e.g., bandwidth, storage space, and/or processing power) of the user devices.
  • the administrator server 106 comprises one or more server computing devices 107 configured to perform one or more operations consistent with disclosed embodiments.
  • an administrator server 106 is implemented as a single computing device (e.g., computer, tablet, smartphone, or others as described herein).
  • a user device communicates with the administrator server through the network.
  • a user device may be directly connected to the administrator server through a separate link (not shown in FIG. 1).
  • the administrator server may be configured to operate as a front-end device configured to provide access to one or more assessments consistent with certain disclosed embodiments.
  • the administrator server processes input data from a user device with respect to an assessment (e.g., via the Evaluator Module 111).
  • the administrator server analyzes input data from a user device with respect to an assessment to determine the impact of adjustments made to one or more parameters.
  • the administrator server is configured to store responses by users to one or more assessments in a memory module on the administrator server.
  • the server is configured to search, retrieve, and analyze data and information stored in the memory module.
  • the data and information comprise user’s historical performance in one or more assessments, as well as adjustments made to corresponding parameters for all such assessments.
  • an administrator server comprises a web server, an enterprise server, or any other type of computer server, and can be computer programmed to accept requests (e.g., HTTP, or other protocols that can initiate data transmission) from a computing device (e.g., a user device) and to serve the computing device with requested data.
  • the administrator server is a server in a data network (e.g., a cloud computing network).
  • the administrator server comprises known computing components, such as one or more processors, one or more memory devices storing software instructions executed by the processor(s), and data.
  • the administrator server comprises one or more processors and at least one memory for storing program instructions.
  • the processor(s) comprise a single microprocessor or multiple microprocessors, field-programmable gate arrays (FPGAs), or digital signal processors (DSPs) capable of executing particular sets of instructions.
  • Computer-readable instructions can be stored on a tangible non-transitory computer-readable medium, such as a flexible disk, a hard disk, a CD-ROM (compact disk-read only memory), an MO (magneto-optical) disk, a DVD-ROM (digital versatile disk-read only memory), a DVD-RAM (digital versatile disk-random access memory), or a semiconductor memory.
  • the methods disclosed herein are implemented in hardware components or combinations of hardware and software such as, for example, ASICs, special purpose computers, or general purpose computers. While FIG. 1 illustrates the administrator server as a single server, in some embodiments, multiple devices may implement the functionality associated with the server.
  • the network is configured to provide communication between various components of the system 100 depicted in FIG. 1.
  • the network may be implemented, in some embodiments, as one or more networks that connect devices and/or components in the network layout for allowing communication between them.
  • the network may be implemented as the Internet, a wireless network, a wired network, a local area network (LAN), a Wide Area Network (WANs), Bluetooth, Near Field Communication (NFC), or any other type of network that provides communications between one or more components of the network layout.
  • the network may be implemented using cell and/or pager networks, satellite, licensed radio, or a combination of licensed and unlicensed radio.
  • the network may be wireless, wired, or a combination thereof.
  • the RA module 108 is implemented as one or more computers storing instructions that, when executed by one or more processor(s), process administrator provided instructions for adjustment criteria of one or more parameters for an assessment, wherein cognitive ability data (CAD) is retrieved for each user from a CAD database and compared against the adjustment criteria, to determine, for each user, any adjustments to the parameters for a specific assessment.
  • the administrator server comprises the computing device 107 in which the RA module is implemented.
  • an administrator server computing device comprises, among other things, desktop computers, laptops or notebook computers, mobile devices (e.g., smart phones, cell phones, personal digital assistants (PDAs), and tablets), or wearable devices (e.g., smartwatches).
  • the RA module is implemented on separate computing devices from the administrator server. For example, an administrator may provide instructions to the computing device 107, and the administrator server then connects with a RA module, via the network, located on a different server or different computing device.
  • the RA module comprises software stored in memory accessible by the administrator server (e.g., in a memory local to the server or remote memory accessible over a communication link, such as the network).
  • the RA module comprises an algorithm for processing the adjustment criteria of one or more parameters.
  • the user devices, the administrator server, and the RA module are connected or interconnected to a cognitive ability data (CAD) database 104.
  • the CAD database comprises one or more memory devices configured to store data (e.g., cognitive ability data for a plurality of users).
  • the CAD database may also, in some embodiments, be implemented as a computer system with a storage device.
  • the CAD database is used by the RA module to retrieve corresponding CAD for a user.
  • a user uploads corresponding CAD from a previous cognitive assessment test to be stored in the CAD database.
  • a user takes a cognitive assessment test, wherein the results are processed and stored as CAD in the CAD database.
  • the CAD database is co-located with the administrator server.
  • the disclosed embodiments are not limited to the configuration and/or arrangement of the database(s).
  • the CAD database is cloud-based.
  • any of the user devices, the administrator server, the CAD database and/or the RA module may, in some embodiments, be implemented as a computer system.
  • although the network is shown in FIG. 1 as a “central” point for communications between components of the system, the disclosed embodiments are not limited thereto.
  • one or more components of the system 100 may be interconnected in a variety of ways, and may in some embodiments be directly connected to, co-located with, or remote from one another, as one of ordinary skill will appreciate.
  • while the disclosed embodiments may be implemented on the server, the disclosed embodiments are not so limited; for example, other devices, such as one or more user devices, may implement the disclosed functionality.
  • the CAD is communicated from the CAD database to the administrator server through secure or unsecure means. In some embodiments, the CAD is communicated from the CAD database to the administrator server through a CSV, db, or SQL method, and/or through blockchain technology.
  • system 100 is at least partly presented and interacted through a website, cloud deployed software, virtual reality (VR) environments, augmented reality (AR) environments, extended reality (XR) environments, or a combination thereof.
  • FIG. 4 provides a depiction of an exemplary method 400 for automatically adjusting one or more assessment parameters, as described herein, based on instructions provided from an administrator and CAD data corresponding to a user.
  • an administrator provides instructions 402 for adjusting one or more parameters for an assessment.
  • said instructions comprise adjustment criteria for each parameter based on a specified cognitive domain and may further comprise thresholds of a cognitive level for the cognitive domain.
  • the instructions are provided to a RA module configured to store said instructions for the specific assessment and process said instructions to adjust the parameters for the assessment.
  • the RA module retrieves 404 the corresponding cognitive ability data (CAD) for the user.
  • the system as described herein (e.g., Assessment Module 109) will check whether the user has shared CAD; if this check shows the CAD is available, the system will inject the CAD into the RA module, wherein the RA module will perform a computation for each parameter (for an assessment) to determine if the CAD demonstrates eligibility for that parameter (based on the adjustment criteria provided by the administrator). For each parameter where eligibility is found, the RA module will activate 410 the parameter for the parts of the assessment for which the parameter has been made available.
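  • A minimal sketch of this check-and-inject flow follows (Python; the objects cad_db, ra_module, and their methods are hypothetical stand-ins for the components described herein):

      def run_assessment_flow(user_id, assessment, cad_db, ra_module):
          # 1) Check whether the user has shared CAD.
          cad = cad_db.get(user_id)          # hypothetical lookup; None if not shared
          if cad is None:
              return assessment              # administer with baseline parameters only
          # 2) Inject the CAD into the RA module and test each parameter's eligibility.
          for param in assessment.parameters:
              if ra_module.is_eligible(param, cad):
                  # 3) Activate the parameter for the parts of the assessment
                  #    where it has been made available.
                  assessment.activate(param, ra_module.amount(param, cad))
          return assessment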
  • each assessment will have a finite number of possible parameter adjustments that can be enacted.
  • in a traditional manual approach, the administrator would have to review each available parameter and evaluate the cognitive assessment report (corresponding to the user) to determine if the user qualifies for a specific parameter, then move to the next available parameter, until all available parameters have been checked and adjustments applied accordingly (or not).
  • the administrator will have a policy (e.g., adjustment criteria as described herein) for each parameter that they will review the cognitive assessment report against, to determine if the parameter applies for the user and, in the case of graduated parameters, the amount of the parameter to be applied, or, in the case of multiple options, the determined option for the parameter.
  • the systems and methods described herein enable a much faster and more robust evaluation and application of parameter adjustments than traditional manual methods.
  • the CAD is obtained from a CAD database in communication with the RA module.
  • the user is able to upload 406 CAD from a past cognitive assessment test (CAT) to the CAD database.
  • the user is able to take a cognitive assessment test (CAT) 408, wherein the results are processed and stored into the CAD database.
  • a separate server and/or computing device is used to process the results from the CAT.
  • a 3rd party and/or the user optionally provides historical information 407.
  • 3rd party data comprises data from any source e.g., demographic data, learner outcomes data, product adoption data, health outcome data, scientific data, publicly available data, consumer behavior data, attitudinal response data, social media data, etc., or a combination thereof.
  • the user is provided (e.g., via the Administrator server 106) with the option of uploading a compatible cognitive assessment or the option of taking a cognitive assessment test prior to the assessment.
  • a user will be prompted to elect a permission-to-share-CAD function when taking an assessment.
  • the method allows for a digital cognitive assessment to be placed at the start of the process, or for previously completed digital cognitive assessments, to share (e.g., via an application programming interface (API)) the output values of the previously completed cognitive assessment, with a system described herein.
  • the cognitive assessment output is a set of values for pre-determined cognitive abilities and attributes. In some embodiments, these values become the Cognitive Ability Data (CAD).
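  • An illustrative (assumed, not schema-accurate) shape for such a CAD record, as might be shared via an API, is:

      # Illustrative shape only; field names are assumptions, not the disclosure's schema.
      cad_record = {
          "user_id": "u-0001",
          "domains": {
              "reading_speed":        {"standard_score": 82,  "percentile": 12},
              "working_memory":       {"standard_score": 95,  "percentile": 37},
              "verbal_comprehension": {"standard_score": 108, "percentile": 70},
          },
      }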
  • the assessment is stored in the administrator server (e.g., Assessment Module).
  • the Assessment Module receives and stores the assessment parameter adjustment applied to a given assessment for a user.
  • the assessment can then be administered 412 to the user via a user device, which is in communication with the administrator server.
  • the user device will provide the assessment with the respective parameters adjusted (e.g., activated).
  • the assessment is provided via an interface on the user device (e.g., a display for a computing device).
  • the user is able to provide input to the assessment via any input means as described herein.
  • FIG. 5 provides a depiction of an exemplary method 500 for automatically adjusting one or more assessment parameters, wherein steps 502 - 512 are the same as described for steps 402 - 412 in FIG. 4, but method 500 further comprises a machine learning algorithm process.
  • the administrator server as described herein, is configured to receive the results of the assessment by a user.
  • the administrator server receives and stores the input received from a user for an assessment in an Evaluator Module, as described herein, stored on the server.
  • the Evaluator Module is configured to evaluate the input from the users 514 and review the impact of adjusting the one or more parameters.
  • the Evaluator Module will automatically update the adjustment criteria 516 to correct for the undesired impact or lack of impact. In some embodiments, the Evaluator Module will provide an alert or notification to an administrator, who will then provide new instructions, thereby manually modifying the adjustment criteria. In some embodiments, little or no impact corresponds to insufficient impact by the parameter adjustment (e.g., an insufficient improvement to adequately compensate for the identified cognitive domain (e.g., deficient cognitive domain) of the user). In some embodiments, undesired impact (or negative impact) corresponds to a further worsening of performance on an aspect of the assessment relating to an identified cognitive domain.
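  • A minimal sketch of this impact check follows (Python; the min_gain threshold and the score lists are assumptions for illustration):

      def review_impact(adjusted_results, baseline_results, min_gain=0.05):
          # Compare mean performance with vs. without the parameter adjustment.
          gain = (sum(adjusted_results) / len(adjusted_results)
                  - sum(baseline_results) / len(baseline_results))
          if gain < 0:
              return "negative_impact"        # adjustment worsened performance
          if gain < min_gain:
              return "insufficient_impact"    # too little compensation; revise criteria
          return "ok"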
  • FIG. 12 provides another view of an exemplary flow for the machine learning algorithmic process.
  • a problem or a plurality of problems are defined by any type of organization (e.g., academic institution, government agency, employer, retailer, etc.) for which the organization wishes to use cognitive and/or noncognitive data to solve the one or more problems.
  • a combined dataset, for the algorithm's use, is selected from a plurality of datasets described herein, e.g., a cognitive dataset or a 3rd party dataset.
  • the plurality of datasets may comprise structured data, unstructured data, or semi-structured data. Structured data may include data in a relational database such as dates, names, phone numbers, addresses, etc.
  • Unstructured data may include text files (e.g., word processing, spreadsheets, presentations, emails, or data logs), emails, social media (e.g., data from Facebook®, Twitter®, or LinkedIn®), websites (e.g., YouTube®, Instagram®, or photo-sharing sites), mobile data (e.g., text messages or locations), communications (e.g., instant messaging, phone recordings, or collaboration software), media (e.g., MP3, digital photos, or audio and video files), or business applications (e.g., MS® Office and productivity applications). Unstructured data may be transformed into structured data.
  • the present disclosure may assess usage of a particular word and/or phrase (e.g., a personal pronoun, emotionally negative words, diagnosis terms, etc.) in the unstructured data retrieved from e.g., online conversations.
  • the algorithm transforms the plurality of usages into a plurality of structured datapoints e.g., occurrences and/or prevalence of personal pronoun usage.
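  • A minimal sketch of such a transformation follows (Python; the pronoun list and word tokenization are illustrative assumptions):

      import re

      PRONOUNS = re.compile(r"\b(i|me|my|mine|myself)\b", re.IGNORECASE)

      def pronoun_prevalence(text: str) -> float:
          # Fraction of words that are first-person pronouns: one structured
          # datapoint derived from unstructured text.
          words = re.findall(r"\b\w+\b", text)
          return len(PRONOUNS.findall(text)) / len(words) if words else 0.0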
  • Semi-structured data may include machine markup languages such as XML, open-standard JavaScript® Object Notation (JSON), or not-only-SQL (NoSQL) formats.
  • FIG. 12 further provides curation of the combined dataset, by the algorithm, comprising auditing, cleaning, and/or merging the combined datasets, and/or creating searchable indexes wherein the combined dataset is structured for the algorithm's use.
  • Data comprising 3rd party data may be audited after retrieval, for example, from a plurality of organizations different from the organization desiring to solve the one or more problems. Auditing may comprise standardizing using standard conventions such as government standards, trade standards, and/or certifying organization standards e.g., standard ethnicity lists or codes, standard nationality lists or codes, or census conventions, etc.
  • Data comprising 3rd party data may be cleaned after retrieval, for example, from a plurality of organizations different from the organization desiring to solve the one or more problems.
  • Cleaning may comprise removal of outliers, determination of missing data, and/or identification of anomalous data (e.g., a user's age listed as 205 years).
  • Cleaning may further comprise removal of datasets wherein the user improperly completed the cognitive assessment e.g., less than full effort by the user, intentional malingering by the user, and/or feigning cognitive difficulty.
  • a determination of improper completion may also be made during the assessment by, e.g., below-chance performance on a forced-choice test.
  • a determination of improper completion may also be made using a standalone assessment, e.g., an assessment presented to the user as the cognitive assessment but that instead assesses performance validity.
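  • A minimal sketch of a below-chance check on a two-alternative forced-choice test follows (Python, assuming SciPy is available; the 50% chance level and 0.05 alpha are illustrative assumptions):

      from scipy.stats import binomtest

      def below_chance(correct: int, total: int, alpha: float = 0.05) -> bool:
          # One-sided test: is accuracy significantly below the 50% chance level?
          return binomtest(correct, total, p=0.5, alternative="less").pvalue < alpha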
  • Data comprising 3rd party data may be merged after retrieval, for example, from a plurality of organizations different from the organization desiring to solve the one or more problems.
  • Merging may comprise associating cognition data from each individual user with their data from the 3rd party.
  • Merging may further comprise associating cognitive profiles with specific learning outcomes from the 3rd party.
  • Associating cognitive profiles with specific learning outcomes may comprise a unique identifier that is common between both the user’s cognition dataset and the 3rd party dataset.
  • a unique identifier may be the Unique Learner Number (ULN) used in the education sector and government.
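  • A minimal sketch of such a merge follows (Python with pandas; the column names and values are illustrative assumptions):

      import pandas as pd

      cognition = pd.DataFrame({"uln": [1001, 1002], "reading_speed_pct": [12, 64]})
      outcomes  = pd.DataFrame({"uln": [1001, 1003], "withdrew": [True, False]})

      # Inner join on the shared Unique Learner Number keeps only learners
      # present in both the cognition dataset and the 3rd party dataset.
      combined = cognition.merge(outcomes, on="uln", how="inner")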
  • the algorithm may use the combined dataset to create one or more predictive models.
  • the predictive models may be validated, by the algorithm, against a held-out validation dataset. After validation of the one or more models, the predictive models may be deployed to the one or more organizations for solving the one or more problems.
  • the predictive models may be updated before, during, or after deployment through a feedback loop with addition of new cognitive data or new 3rd party data. From the predictive models, by the algorithm, one or more outcomes may be determined for one or more users wherein the one or more outcomes may solve the one or more problems of the one or more organizations.
  • FIG. 13 shows a computer system 1301 that is programmed or otherwise configured for automatically adjusting a parameter of an assessment to accommodate individual user cognitive capabilities.
  • the computer system 1301 can regulate various aspects of the present disclosure, such as, for example, receiving cognitive ability data, applying adjustment criteria to one or more assessment parameters, or outputting an assessment with adjusted parameters for the user.
  • the computer system 1301 can be an electronic device of a user or a computer system that is remotely located with respect to the electronic device.
  • the electronic device can be a mobile electronic device.
  • the computer system 1301 includes a central processing unit (CPU, also “processor” and “computer processor” herein) 1305, which can be a single core or multi core processor, or a plurality of processors for parallel processing.
  • the computer system 1301 also includes memory or memory location 1310 (e.g., random-access memory, read-only memory, flash memory), electronic storage unit 1315 (e.g., hard disk), communication interface 1320 (e.g., network adapter) for communicating with one or more other systems, and peripheral devices 1325, such as cache, other memory, data storage or electronic display adapters.
  • the memory 1310, storage unit 1315, interface 1320 and peripheral devices 1325 are in communication with the CPU 1305 through a communication bus (solid lines), such as a motherboard.
  • the storage unit 1315 can be a data storage unit (or data repository) for storing data.
  • the computer system 1301 can be operatively coupled to a computer network (“network”) 1330 with the aid of the communication interface 1320.
  • the network 1330 can be the Internet, an internet or extranet, or an intranet and extranet that is in communication with the Internet.
  • the network 1330 in some cases is a telecommunication or data network.
  • the network 1330 can include one or more computer servers, which can enable distributed computing, such as cloud computing.
  • the network 1330, in some cases with the aid of the computer system 1301, can implement a peer-to-peer network, which may enable devices coupled to the computer system 1301 to behave as a client or a server.
  • the CPU 1305 can execute a sequence of machine-readable instructions, which can be embodied in a program or software.
  • the instructions may be stored in a memory location, such as the memory 1310.
  • the instructions can be directed to the CPU 1305, which can subsequently program or otherwise configure the CPU 1305 to implement methods of the present disclosure. Examples of operations performed by the CPU 1305 can include fetch, decode, execute, and writeback.
  • the CPU 1305 can be part of a circuit, such as an integrated circuit.
  • a circuit such as an integrated circuit.
  • One or more other components of the system 1301 can be included in the circuit.
  • the circuit is an application specific integrated circuit (ASIC).
  • the storage unit 1315 can store files, such as drivers, libraries and saved programs.
  • the storage unit 1315 can store user data, e.g., user preferences and user programs.
  • the computer system 1301 in some cases can include one or more additional data storage units that are external to the computer system 1301, such as located on a remote server that is in communication with the computer system 1301 through an intranet or the Internet.
  • the computer system 1301 can communicate with one or more remote computer systems through the network 1330.
  • the computer system 1301 can communicate with a remote computer system of a user.
  • remote computer systems include personal computers (e.g., portable PC), slate or tablet PCs (e.g., Apple® iPad, Samsung® Galaxy Tab), telephones, smartphones (e.g., Apple® iPhone, Android-enabled device, Blackberry®), or personal digital assistants.
  • the user can access the computer system 1301 via the network 1330.
  • Methods as described herein can be implemented by way of machine (e.g., computer processor) executable code stored on an electronic storage location of the computer system 1301, such as, for example, on the memory 1310 or electronic storage unit 1315.
  • the machine executable or machine-readable code can be provided in the form of software. During use, the code can be executed by the processor 1305. In some cases, the code can be retrieved from the storage unit 1315 and stored on the memory 1310 for ready access by the processor 1305. In some situations, the electronic storage unit 1315 can be precluded, and machine-executable instructions are stored on memory 1310.
  • the code can be pre-compiled and configured for use with a machine having a processor adapted to execute the code, or can be compiled during runtime.
  • the code can be supplied in a programming language that can be selected to enable the code to execute in a pre-compiled or as-compiled fashion.
  • aspects of the systems and methods provided herein can be embodied in programming.
  • Various aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of machine (or processor) executable code or associated data that is carried on or embodied in a type of machine readable medium.
  • Machine-executable code can be stored on an electronic storage unit, such as memory (e.g., read-only memory, random-access memory, flash memory) or a hard disk.
  • “Storage” type media can include any or all of the tangible memory of the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer into the computer platform of an application server.
  • another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links.
  • a machine-readable medium, such as computer-executable code, may take many forms, including but not limited to a tangible storage medium, a carrier-wave medium, or a physical transmission medium.
  • Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) or the like, such as may be used to implement the databases, etc. shown in the drawings.
  • Volatile storage media include dynamic memory, such as main memory of such a computer platform.
  • Tangible transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus within a computer system.
  • Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications.
  • Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a ROM, a PROM and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code or data.
  • Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.
  • the computer system 1301 can include or be in communication with an electronic display 1335 that comprises a user interface (UI) 1340 for providing, for example, selection of assessments or parameters for adjustment, or interaction with graphs relating cognitive levels to parameter adjustments. Examples of UIs include, without limitation, a graphical user interface (GUI) and web-based user interface.
  • Methods and systems of the present disclosure can be implemented by way of one or more algorithms.
  • An algorithm can be implemented by way of software upon execution by the central processing unit 1305. The algorithm can, for example, calculate statistical measures on cognitive ability data to determine parameter adjustments, or generate the predictive models described herein.
  • An exemplary adjustment criteria provided by an administrator may comprise providing extra time for reading tasks on an assessment.
  • the administrator may specify an extra time parameter be activated for a user that is in the 15th percentile for reading speed (e.g., a deficient cognitive domain), as determined via the corresponding CAD of the user.
  • CAD is provided as a statistical measure and in relation to a population (e.g., percentile).
  • the administrator may set a maximum amount of extra allotted time (e.g., 50%).
  • the adjustment criteria may further specify a graduated parameter adjustment from about 0% to 50% of extra allotted time (or any value from 0% to the maximum extra allotted time).
  • the adjustment criteria is specified according to a statistical measure, wherein the parameter will be graduated between 0% and 50% of extra allotted time depending on where the user falls according to the statistical measure.
  • the parameter for extra time will be activated if the cognitive level of the reading speed deficient cognitive domain for the user is within the 15th percentile, and the amount of extra time provided will be determined based on what the exact percentile is.
  • the adjustment criteria may further model the “tail” of the population (e.g., based on all the CAD in the CAD database) against a normal standard deviation. As such, the bottom 15% of the users (e.g., for a plurality of users taking an assessment) can be identified, and have their parameters adjusted accordingly.
  • Another option of determining which users are eligible to receive a parameter adjustment is through scores, such as a Std. Score. Accordingly, a user will be eligible to receive a parameter adjustment (e.g., that relates to reading speed) if their corresponding Std. Score is below a certain threshold.
  • FIG. 6 provides an exemplary linear regression model for determining the extra time provided (y-axis) based on the cognitive level for reading speed, wherein the threshold is depicted as being less than the 16th percentile (per the linear regression equation provided below).
  • An exemplary prompt by a system described herein may be: “what is the maximum additional time you would like to allow?”, wherein an administrator may input “50%” (as an example). The system may then prompt: “what threshold would you like to use to trigger additional time?”, wherein the administrator may input: “percentile < 16” or “standard score < 85”.
  • a system described herein (e.g., the Administrator Server described herein) then automatically calculates the corresponding regression equation, e.g., of the form y = a + bx; for the inputs above, a = 50 (the maximum additional time) and b = -50/16 ≈ -3.13, so that extra time tapers from 50% at the 0th percentile to 0% at the 16th percentile.
  • the above equation would change depending on the administrator inputs above (e.g., 50% max allotted time and 16th percentile threshold).
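  • A minimal sketch of this graduated rule follows (Python; the linear form and the derived slope reflect the example inputs above, not a formula quoted from the disclosure):

      def extra_time_pct(percentile: float, max_extra: float = 50.0,
                         threshold: float = 16.0) -> float:
          # Graduated rule: 0% extra time at/above the threshold percentile,
          # rising linearly to max_extra at the 0th percentile.
          if percentile >= threshold:
              return 0.0
          return max_extra - (max_extra / threshold) * percentile

      extra_time_pct(8.0)   # -> 25.0 (% additional time at the 8th percentile)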
  • linear regression is one example of a model, but there are several other relationships between a parameter and the cognitive level (e.g., percentile rank, relational attribute) of a cognitive domain (based on the CAD) that can be used, several of which are non-linear (e.g., power, log, inverse standard deviation, probability distribution, etc.).
  • a parameter may be dependent on multiple cognitive domains (e.g., multiple cognitive capabilities) or a single cognitive domain (as described herein, which may be for example statistical based or a relational attribute). Accordingly, the relationship between the cognitive domains may be taken into account when determining a parameter adjustment.
  • Individual regression equations could be used, but using a multiple regression equation, as depicted in FIG. 7, provides an exemplary relationship between the parameter and cognitive domains.
  • FIG. 7 depicts a multiple regression equation and curve for the parameter relating to subtitles, and the following three cognitive domains: working memory test, verbal comprehension test, and verbal memory test.
  • the parameter adjustment is simply binary, in that the subtitles function is either activated or not. Accordingly, the mathematical model is not linear, but follows a step-wise curve wherein users at or above a maximum threshold (herein, a standard score of 85) will not be provided the subtitle function for the assessment.
  • x1 corresponds to a standard score for the working memory test; x2 corresponds to a standard score for the verbal comprehension test; and x3 corresponds to a standard score for the verbal memory test. Accordingly, x1 through x3 could be calculated using individual regression equations; however, including them in a multiple regression equation means their relationship to each other is taken into account. In some cases, administrator input might be a standard score, but this may be converted to percentiles by the system for ease of regression modelling.
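  • A minimal sketch of the resulting binary rule follows (Python; a simple mean of the three standard scores stands in for the multiple regression combination of FIG. 7, which is not reproduced here):

      def subtitles_enabled(working_memory: float, verbal_comprehension: float,
                            verbal_memory: float, threshold: float = 85.0) -> bool:
          # Step-wise (binary) rule: activate subtitles only when the combined
          # level falls below the threshold standard score.
          combined = (working_memory + verbal_comprehension + verbal_memory) / 3.0
          return combined < threshold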
  • FIG. 10 provides an exemplary linear regression model for determining the extra time provided (y-axis) based on a plurality of cognitive domains, wherein a threshold is specified to trigger the allotted time.
  • An exemplary prompt by a system described herein may be: “which cognitive domains may have an impact on this parameter adjustment?”, wherein the Administrator may input “Cognitive Domain 1 (xl), Cognitive Domain 2 (x2), and Cognitive Domain 3 (x3)” (as an example).
  • the system may then prompt: “what threshold would you like to be taken into account for the combination of the three domains?”, wherein the administrator may input: “below average, or < 50th percentile, or below Std. Score 100”. The system may then prompt: “what is the maximum additional time you would like to allocate?”, wherein the administrator may input: “100%”. In some embodiments, the threshold for each domain is prompted.
  • the underlying model takes the form y = a + bx, wherein: y is the additional time allotted; x is the relevant cognitive threshold (or cognitive level) as a percentile; a (100) is the intercept of the y-axis (or maximum additional time allowed); and b (-2) is the relationship between x and y (automatically calculated by a system described herein).
  • the multiple cognitive domains may all be weighted equally or differently when determining the relevant cognitive threshold.
  • the system may prompt: “would you like all domains weighted equally?”.
  • an administrator may input: “yes”, wherein a system (as described herein) will then equally partial out b (-2) to generate the following regression equation: y = 100 - (2/3)x1 - (2/3)x2 - (2/3)x3, wherein each of x1, x2, and x3 corresponds to the respective cognitive domain standard score.
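  • A minimal sketch of this equally weighted rule follows (Python; percentile-style inputs and the zero clamp are assumptions):

      def extra_time_multi(x1: float, x2: float, x3: float) -> float:
          # y = 100 - (2/3)x1 - (2/3)x2 - (2/3)x3, clamped at zero so the
          # adjustment never goes negative.
          y = 100.0 - (2.0 / 3.0) * (x1 + x2 + x3)
          return max(y, 0.0)

      extra_time_multi(50, 50, 50)   # -> 0.0 (no extra time at the 50th-percentile threshold)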
  • FIG. 8 provides a relationship between Response Time (in seconds) for a given task on an assessment with the Frequency of an individual user’s response time distribution.
  • the response time variability measure is commonly used with users having attentional issues, such as Attention Deficit Hyperactivity Disorder (ADHD).
  • frequency refers to the number of questions that the user responded to in that time.
  • Variability in response times can be worked out mathematically (e.g., Std. Dev. or variance) and can also be depicted in a graph such as in FIG. 8. Higher variability in response times can indicate attentional issues or lower overall cognitive performance.
  • Response times can be used as indications of many things, including cognitive performance (e.g., response time for correctly answered items), impulse control (e.g., response times to incorrectly answered but intuitive items, such as in a Stroop task), and overall variability, which can relate to attentional issues such as ADHD or lower overall cognitive performance.
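  • A minimal sketch of such a variability measure follows (Python; population standard deviation is one of several possible choices):

      from statistics import pstdev

      def rt_variability(response_times_s):
          # Population standard deviation of per-item response times;
          # higher values can indicate attentional issues (see FIG. 8).
          return pstdev(response_times_s)

      rt_variability([1.2, 1.4, 1.3, 3.9, 1.1, 4.2])   # wide spread -> high variability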
  • FIG. 12 provides another view of an exemplary flow for the machine learning algorithmic process described herein.
  • a problem or a plurality of problems are defined by any type of organization (e.g., academic institution, government agency, employer, retailer, etc.) for which the organization wishes to use cognitive or noncognitive data to solve the one or more problems.
  • a combined dataset, for the algorithm's predictive use, is selected from a plurality of datasets described herein, e.g., a cognitive dataset or a 3rd party dataset.
  • a 3rd party dataset may comprise data from any source e.g., demographic data, learner outcomes data, product adoption data, health outcome data, scientific data, publicly available data, consumer behavior data, attitudinal response data, social media data, etc., or a combination thereof.
  • the plurality of datasets may comprise structured data, unstructured data, or semi-structured data.
  • Structured data may include data in a relational database such as dates, names, phone numbers, or addresses.
  • Unstructured data may include text files (e.g., word processing, spreadsheets, presentations, emails, or data logs), emails, social media (e.g., data from Facebook®, Twitter®, or LinkedIn®), websites (e.g., YouTube®, Instagram®, or photo-sharing sites), mobile data (e.g., text messages or locations), communications (e.g., instant messaging, phone recordings, or collaboration software), media (e.g., MP3, digital photos, or audio and video files), or business applications (e.g., MS® Office and productivity applications).
  • Semi-structured data may include machine markup languages such as XML, open-standard JavaScript® Object Notation (JSON), or not-only-SQL (NoSQL) formats.
  • FIG. 12 further provides for curation of the combined dataset, by the algorithm, comprising auditing, merging, and/or cleaning the combined datasets, and/or creating searchable indexes wherein the combined dataset is structured for the algorithm's predictive use.
  • the algorithm may use the combined dataset to create one or more predictive models using one or more machine learning approaches e.g., logistic regression analysis, discriminant function analysis, decision tree analysis, neural network analysis, etc.
  • the predictive models may be validated, by the algorithm, against one or more independent datasets. After validation of the one or more predictive models, the predictive models may be deployed to the one or more organizations for solving the one or more problems.
  • the predictive models may be updated before, during, or after deployment through a feedback loop with addition of new cognitive data or new 3rd party data. From the predictive models, by the algorithm, one or more outcomes may be determined for one or more users wherein the one or more outcomes may solve the one or more problems of the one or more organizations.
  • Table 1 and Table 2 depict improved predictive performance of the present disclosure.
  • the machine learning algorithm classified users as likely or not likely to withdraw from a program, e.g., an academic program.
  • Table 1 depicts classification without using cognitive data.
  • the algorithm correctly classified only 68.9% of users.
  • Table 2 depicts classification using cognitive data described herein.
  • the algorithm correctly classified 74.4% of users.
  • the improved predictive performance represents a relative improvement of about 8% (74.4% vs. 68.9%).
  • Such information may be used, for example, to proactively identify users as at risk for dropping out of a program. Further, additional support may be provided for the user to reduce the likelihood that the user drops out of the program. Additionally, past data may be used to predict future outcomes for the user and/or other users.
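  • A minimal sketch of the comparison behind Tables 1 and 2 follows (Python with scikit-learn and synthetic data; the feature names and data-generating process are illustrative assumptions, and the resulting accuracies will not match the reported 68.9%/74.4%):

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n = 500
      third_party = rng.normal(size=(n, 3))      # e.g., demographic/outcome features
      cognitive = rng.normal(size=(n, 2))        # e.g., CAD domain scores
      withdrew = (third_party[:, 0] + cognitive[:, 0] + rng.normal(size=n)) > 1.0

      acc_without = cross_val_score(LogisticRegression(), third_party, withdrew).mean()
      acc_with = cross_val_score(LogisticRegression(),
                                 np.hstack([third_party, cognitive]), withdrew).mean()
      # acc_with should exceed acc_without, mirroring Table 2 vs. Table 1.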
  • a 3rd party dataset may comprise data from any source, e.g., demographic data, learner outcomes data, product adoption data, health outcome data, scientific data, publicly available data, consumer behavior data, attitudinal response data, social media data, etc., or a combination thereof.
  • Improved marketing strategies may comprise more effective advertisements for products or services from the one or more organizations. For example, an organization may use cognitive data or 3rd party data to improve its advertising.
  • the organization may present its advertising to appeal more to a customer's quantitative cognition using data, e.g., numbers and statistics about the product or service.
  • the organization may present its advertising to appeal more to a customer's emotional cognition using personal stories, e.g., personal testimonies about the product or service.
  • the organization may present its advertising to appeal more to a customer’s visual cognition using imagery e.g., images or videos of the product or service.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • General Business, Economics & Management (AREA)
  • Finance (AREA)
  • Tourism & Hospitality (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Data Mining & Analysis (AREA)
  • Game Theory and Decision Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Disclosed herein are systems and methods for automatically adjusting assessment parameters for individual users based on their cognitive capabilities. In some embodiments, cognitive ability data (CAD) for each user is used to identify one or more cognitive domains associated with the user that make an assessment parameter eligible for adjustment. In some embodiments, an assessment administrator ("administrator") does not need to review or validate the CAD for each user, but rather sets up and provides rules for a given assessment that specify what parameters are to be adjusted based on certain cognitive domains being present for a user. Accordingly, for each user, the corresponding CAD and associated cognitive domains are automatically compared against the rules for an assessment, wherein adjustments are made to the assessment parameters to compensate for the respective cognitive abilities and attributes of the user that 1) would otherwise place such users at a disadvantage for the assessment (e.g., test, learning module, employment task), as compared with a general population of users, and/or 2) would otherwise fail to adequately stimulate the user or provide an effective assessment (e.g., test, learning module, employment task).

Description

SYSTEM AND METHODS FOR AUTOMATICALLY APPLYING REASONABLE ADJUSTMENTS
CROSS-REFERENCE
[0001] This application claims the benefit of U.S. Provisional Application No. 63/174,428, filed April 13, 2021, which is hereby incorporated by reference in its entirety herein.
INCORPORATION BY REFERENCE
[0002] All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference.
BACKGROUND
[0003] Adjusting parameters for tests and other assessments to compensate for an individual's cognitive capabilities can be a challenging task for administrators of the tests and assessments. Generally, administrators can rely on results from cognitive assessment tests taken by individuals to manually determine what parameters for a given test or assessment need to be adjusted for a particular individual on a case-by-case basis. Such manual determinations, however, can be a time-consuming and costly process, particularly if a large number of individuals are seeking parameter adjustments. Moreover, such assessment parameter adjustments may be unfairly applied across different individuals due to inconsistencies between more than one administrator making such parameter adjustments.
SUMMARY
[0004] Disclosed herein, in one aspect, is a system for automatically adjusting a parameter of an assessment to accommodate individual user cognitive capabilities, comprising: a) a cognitive ability data (“CAD”) database comprising a memory for storing results from a cognitive assessment test for the user, wherein said results define one or more cognitive domains of the user; b) an administrator server in communication with the CAD database, the administrator server comprising 1) an administrator computing device, 2) an assessment module wherein an assessment is stored therein, the assessment comprising one or more questions or processes for the user, and one or more parameters; and 3) a reasonable adjustment (RA) module, wherein the administrator computing device is configured to provide a first set of software instructions to the RA module, wherein the administrator server comprises one or more processors to execute the first set of instructions to: i) provide an adjustment criteria to a parameter of the one or more parameters, wherein the adjustment criteria corresponds to a cognitive capability; ii) retrieve the one or more cognitive domains for the user via the CAD database; and iii) adjust the parameter according to the adjustment criteria and a cognitive domain for the user, wherein the cognitive domain corresponds to the cognitive capability; and c) a user computing device in communication with the administrator server, the user computing device configured to provide the assessment to the user with the corresponding adjusted parameter.
[0005] In some embodiments, the one or more parameters comprise a presentation parameter, a function activation parameter, a time-based parameter, a process/journey parameter, a marketing parameter, or a combination thereof.
[0006] In some embodiments, the presentation parameter comprises font type; font size; font color; spacing between letters, words, lines, and/or paragraphs; background color of an assessment (e.g., as displayed through a computing device as described herein); providing a digital screen overlay of any specific color; displaying the assessment (e.g., via a computing device as described herein) with an alternative template or Cascading Style Sheet (CSS) or other form of screen structure; limiting a maximum amount of text displayed to the user at any given time (e.g., max 5 lines) before moving on to the next segment of text; a change in tone of voice of presented text (e.g., from descriptive to emotional or analytical); or a combination thereof.
[0007] In some embodiments, the function activation parameter comprises an ability for the user to activate speech-to-text software for an assessment (e.g., to enable speech to be recorded instead of typing); an ability for the user to access a dictionary or thesaurus; activation of subtitles; activation of audio descriptions; phonetic spell-correction software; a sign-language avatar (e.g., that uses British Sign Language) if video is used; an image-based sign language substitute; or a combination thereof.
[0008] In some embodiments, the time-based parameters comprise enforcing a break during the assessment; time allowed for each question, section of questions, section of the assessment, and/or the entire assessment, including, for example, time allowed to read, type, and/or think; enforcing a break during the test at a predetermined time (e.g., prior to the commencement of the next question after 50% of time has lapsed and/or after a finite amount of time such as 40 mins); enforcing a break for a finite period (e.g., 20 mins); forcing slower progression by the user through a task (e.g., by monitoring how fast a user is responding and, if beyond a predetermined maximum speed, preventing a response within a specific timeframe from an event, such as showing a question, and/or activating a pop-up recommending that the user slow down); or a combination thereof.
[0009] In some embodiments, the marketing parameter comprises modifying the baseline question wherein the modification relies more on (“markets”) a determined cognition than the baseline question. In some embodiments, the marketing comprises presenting the question to rely more on the user's quantitative / data cognition compared to the baseline question. For example, the question may market numbers or equations to present part or all of the baseline question. In some embodiments, the marketing comprises presenting the question to rely more on the user's emotional cognition compared to the baseline question. For example, the question may market a story that appeals to the user's emotions to present part or all of the baseline question. In some embodiments, the marketing comprises presenting the question to rely more on the user's visual cognition compared to the baseline question. For example, the question may market an image to describe part or all of the baseline question. In some embodiments, the marketing parameters may be directed to commercial advertising wherein the advertising is modified to appeal to the user's preferred cognition.
[0010] In some embodiments, the adjustment criteria defines a maximum and/or minimum threshold value for the cognitive domain of the one or more cognitive domains. In some embodiments, the adjustment criteria is based on 1) statistical properties relating to one or more cognitive domains, 2) relational attributes relating to two or more cognitive domains, or 3) both. In some embodiments, the relational attributes relating to two or more cognitive domains comprise a relationship between the two or more cognitive domains. In some embodiments, the adjustment of the parameter comprises a binary adjustment, wherein the parameter is either activated or not as presented to the user with the assessment. In some embodiments, the adjustment of the parameter further comprises 1) a graduated adjustment, 2) an options adjustment, or 3) both. In some embodiments, the options adjustment comprises two or more options for adjusting the parameter. In some embodiments, the graduated adjustment comprises any number of ranges over which a parameter can be adjusted based on the corresponding cognitive domain for the user.
[0011] In some embodiments, the system further comprises an Evaluator Module configured to: a) receive results of the assessment from the user; b) evaluate the results of the assessment from the user and a plurality of other users; c) determine the impact of the parameter adjustment based on the results of the assessment from the plurality of other users and user; and d) modify the adjustment criteria if said impact corresponds to a negative impact or insufficient impact based on the results of the assessment from the plurality of users and the user as relating to the parameter adjustment. In some embodiments, the adjustment criteria is modified automatically, or manually by an administrator. [0012] Disclosed herein, in another aspect, is a method for automatically adjusting a parameter for an assessment to accommodate a user's cognitive capabilities, comprising: a) providing an adjustment criteria for the parameter of the assessment stored on an administrator server, wherein the adjustment criteria corresponds to a cognitive capability; b) retrieving a cognitive domain of the user via a cognitive ability data (CAD) database, wherein the CAD database is in communication with the administrator server; c) adjusting the parameter based on the adjustment criteria and cognitive domain, wherein the cognitive domain corresponds to the cognitive capability; and d) administering the assessment to the user via a user computing device in communication with the administrator server, wherein the assessment is administered with the adjusted parameter.
[0013] In some embodiments, the method further comprises: a) analyzing the results from the assessment from the user and a plurality of other users; b) determining an impact of the parameter adjustment; and c) modifying the adjustment criteria based on the impact.
[0014] In some embodiments, said modifying the adjustment criteria is based on the impact 1) having a negative impact on the results of the assessment as compared to without adjusting the parameter, or 2) having no impact or insufficient impact on the results of the assessment as compared to without adjusting the parameter. In some embodiments, the method further comprises receiving results of a cognitive assessment test by the CAD database, wherein said results define one or more cognitive domains for the user.
[0015] In some embodiments, the one or more parameters comprise a presentation parameter, a function activation parameter, a time-based parameter, a process/journey parameter, a marketing parameter, or a combination thereof. In some embodiments, the adjustment criteria defines a maximum and/or minimum threshold value for the cognitive domain of the one or more cognitive domains. In some embodiments, the adjustment criteria is based on 1) statistical properties relating to one or more cognitive domains, 2) relational attributes relating to two or more cognitive domains, or 3) both. In some embodiments, the relational attributes relating to two or more cognitive domains comprise a relationship between the two or more cognitive domains. In some embodiments, the adjustment of the parameter comprises a binary adjustment, wherein the parameter is either activated or not as presented to the user with the assessment. In some embodiments, the adjustment of the parameter further comprises 1) a graduated adjustment, 2) an options adjustment, or 3) both. In some embodiments, the options adjustment comprises two or more options for adjusting the parameter. In some embodiments, the graduated adjustment comprises any number of ranges over which a parameter can be adjusted based on the corresponding cognitive domain for the user.
[0016] In some embodiments, the method further comprises evaluating each parameter of the assessment to identify one or more parameters associated with an adjustment criteria. In some embodiments, the method further comprises adjusting a plurality of parameters, wherein each adjustment is based on a corresponding adjustment criteria and cognitive domain, wherein each cognitive domain corresponds to a cognitive capability associated with the corresponding adjustment criteria.
[0017] Disclosed herein, in another aspect, is a computer-implemented method for predicting a plurality of outcomes for an assessment taken by a user, comprising: a) providing a cognitive dataset associated with the user; b) providing a 3rd party dataset associated with the user; c) transforming the cognitive dataset and the 3rd party dataset into a combined dataset; d) generating a predictive model from the combined dataset; and e) using the predictive model to predict the plurality of outcomes for the assessment. In some embodiments, the transforming the cognitive dataset and the 3rd party dataset into a combined dataset comprises auditing, merging, and/or cleaning the combined dataset. In some embodiments, the dataset comprises a plurality of search indexes. In some embodiments, the generating a predictive model from the combined dataset comprises updating the predictive model. In some embodiments, the predictive model is updated using a plurality of new cognitive data associated with the user, a plurality of new 3rd party data associated with the user, or a combination thereof. In some embodiments, the plurality of outcomes for the assessment comprises a plurality of adjusted parameters for the assessment. In some embodiments, the plurality of adjusted parameters improves the outcome of the assessment compared to an assessment with no adjusted parameters.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] For a more complete understanding of the present disclosure, including features and advantages, reference is now made to the detailed description along with the accompanying figures: [0019] FIG. 1 depicts an exemplary computer based system comprising an administrator server, a cognitive ability data (CAD) database, and one or more user devices, in accordance with some embodiments.
[0020] FIG. 2 depicts another illustration of the components of the computer based system from FIG. 1, in accordance with some embodiments.
[0021] FIG. 3 depicts the steps and components for generating user cognitive ability data, in accordance with some embodiments.
[0022] FIG. 4 depicts an exemplary method for adjusting parameters of an assessment, in accordance with some embodiments.
[0023] FIG. 5 depicts the method of FIG. 4, further comprising a machine learning process, in accordance with some embodiments.
[0024] FIG. 6 depicts an exemplary linear regression model for determining a graduated adjustment of a parameter, in accordance with some embodiments.
[0025] FIG. 7 depicts an exemplary illustration of a multiple regression equation and model for determining eligibility of a parameter adjustment, in accordance with some embodiments.
[0026] FIG. 8 depicts an exemplary illustration of a relationship between an individual user response time and frequency, in accordance with some embodiments. [0027] FIG. 9A depicts an exemplary illustration of a frequency distribution of a relative ratio between CAD scores for two cognitive domains (a relational attribute), in accordance with some embodiments.
[0028] FIG. 9B depicts an activation parameter based on a threshold for the relational attribute from FIG. 9A, in accordance with some embodiments.
[0029] FIG. 10 depicts an exemplary illustration between extra time allotment and a cognitive domain percentile for a user based on three cognitive domains, in accordance with some embodiments.
[0030] FIG. 11 provides an exemplary flow chart depicting an exemplary relationship between various components of a system described in an embodiment herein.
[0031] FIG. 12 depicts an exemplary machine learning flow, in accordance with some embodiments. [0032] FIG. 13 provides an exemplary computing system capable of implementing the systems and methods of the present disclosure.
DETAILED DESCRIPTION
DEFINITIONS
[0027] Unless defined otherwise, all terms of art, notations and other technical and scientific terms or terminology used herein are intended to have the same meaning as is commonly understood by one of ordinary skill in the art to which the claimed subject matter pertains. In some cases, terms with commonly understood meanings are defined herein for clarity and/or for ready reference, and the inclusion of such definitions herein should not necessarily be construed to represent a substantial difference over what is generally understood in the art.
[0028] Throughout this application, various embodiments may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the disclosure. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
[0029] As used in the specification and claims, the singular forms “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. For example, the term “a sample” includes a plurality of samples, including mixtures thereof.
[0030] The term “assessment” as used herein generally refers to a task or activity performed by a user, wherein the assessment comprises one or more parameters that can be adjusted to accommodate the cognitive abilities of the user. As used herein, in some embodiments, assessment may comprise a test from any type of organization (e.g., academic institution, government agency, employer, retailer, etc.), wherein the test may comprise any type of questions and/or formats and may be for any type of subject area or industry. As used herein, in some embodiments, assessment may also comprise an application or software for use by a user in connection with an employment, academia, any personal / leisure activity (e.g., a computer game), and/or business activity. As used herein, in some embodiments, assessment may also comprise an interactive platform (such as a website) for a product, such as an e-commerce product.
[0031] The term “reasonable adjustment” as used herein generally refers to an adjustment of a parameter for an assessment to compensate for one or more cognitive capabilities of a user. As described herein, in some embodiments, such cognitive capabilities may place the user at a disadvantage, or be considered a disability, in comparison with a general user population. In some embodiments, such cognitive capabilities of the user may be more advanced or superior to a general user population.
[0032] The term “parameter” as used herein generally refers to a variable feature relating to an assessment. Each assessment may include any number of parameters that can be adjusted, as described herein. In some embodiments, the categories of types of parameters include presentation parameters (e.g., font or background of an assessment as presented to a user as displayed through a computing device), time-based parameters (e.g., amount of time allotted for a test), function activation parameters (e.g., option to turn on subtitles with an assessment presentation), process/journey parameters, and/or marketing parameters (e.g., modification of part or all of a question).
[0033] As used herein, the term “cognitive domain” generally refers to a particular cognitive ability or attribute (e.g., reading speed, verbal comprehension, etc.) identified with a user. In some embodiments, the cognitive domain covers an entire range of the cognitive ability or attribute. For example, the cognitive domain may be identified as being a disability or deficient cognition (via, for example, the cognitive assessment data), and/or the cognitive domain may be identified as being superior or exceeding an average user ability. Each cognitive domain may be identified with a cognitive level, which can be obtained via a statistical output based on cognitive assessment data for a user, or can be obtained based on a relationship between the cognitive levels of two or more cognitive domains of the user.
[0034] As used herein, the term “user” generally refers to a test taker, an employee, an individual, a group of individuals, or any combination thereof. [0035] As used herein, the term “about” in some cases generally refers to an amount that is approximately the stated amount.
[0036] As used herein, the term “about” generally refers to an amount that is greater or less than the stated amount by 10%, 5%, or 1%, including increments therein.
[0037] As used herein, the term “about” in reference to a percentage generally refers to an amount that is greater or less than the stated percentage by 10%, 5%, or 1%, including increments therein. [0038] The section headings used herein are for organizational purposes only and are not to be construed as limiting the subject matter described.
AUTOMATICALLY ADJUSTING ASSESSMENT PARAMETERS
[0039] Administrators of assessments (e.g., tests) provided to users are often required to manually adjust certain parameters relating to the assessments to compensate for cognitive abilities of individual users that would otherwise place these users at a disadvantage. Users seeking such compensatory measures are required to take and submit results from a cognitive assessment test prior to taking an assessment (e.g., 2 weeks prior to the assessment date), so as to ensure an administrator can verify the validity of the cognitive assessment, review the results, and adjust assessment parameters as needed, and as available, in time for the user taking the assessment. Such manual adjustment of parameters by one or more administrators on a case-by-case basis for numerous individual users often results in inconsistent and inequitable adjustments that can vary by the different users. Moreover, manual adjustment of parameters for numerous individual users can become burdensome and lengthy for the administrators.
[0040] Disclosed herein are systems and methods for automatically adjusting assessment parameters for individual users based on their cognitive capabilities. In some embodiments, the systems and methods comprise individual users taking a cognitive assessment test prior to taking an assessment (e.g., a test), wherein results from the cognitive assessment test are used to generate cognitive abilities data (CAD) for each user, which is stored in a database so as to be retrieved for future assessments taken by the user. In some embodiments, the CAD for each user identifies one or more cognitive domains associated with the user for which a disability or deficient cognition is identified, and which may be compensated for through assessment parameter adjustments. In some embodiments, the CAD for each user identifies one or more cognitive domains associated with a user that are eligible for assessment parameter adjustments (e.g., a more advanced journey module, a more advanced learning module, etc.). In some embodiments, an assessment administrator (“administrator”) does not need to review or validate the CAD for each user, but rather sets up and provides rules for a given assessment that specify what parameters are to be adjusted based on certain cognitive domains being present for a user. Accordingly, for each user, the corresponding CAD and associated cognitive domains are automatically compared against the rules for an assessment, wherein adjustments are made to the assessment parameters to compensate for the respective cognitive abilities and attributes of the user that 1) would otherwise place such users at a disadvantage for the assessment, as compared with a general population of users, and/or 2) would otherwise fail to adequately stimulate the user or provide an effective assessment (e.g., test, learning module, employment task). As described herein, such adjustments to compensate for such deficiencies and/or disabilities in cognitive abilities are referred to as reasonable adjustments.
[0041] In some embodiments, the systems and methods disclosed herein comprise using a computer implemented system comprising an administrator server, a CAD database, and one or more user computing devices (“user device”). In some embodiments, the administrator server comprises one or more administrator computing devices, a Reasonable Adjustments (“RA”) module, an Assessment Module, and optionally an Evaluation Module. In some embodiments, the modules are stored on a memory on the server, or a memory on the one or more administrator computing devices. In some embodiments, an assessment is stored on the Assessment Module and configured to be accessed by one or more users via a user device. In some embodiments, the assessment comprises one or more parameters, as described herein, for which the RA module is configured to apply reasonable adjustments based on instructions provided by the administrator. In some embodiments, the parameters are automatically adjusted by the computer implemented system based on the user CAD retrieved from the CAD database and the instructions provided by the administrator for an assessment.
Cognitive Ability Data (CAD)
[0042] In some embodiments, as described herein, the cognitive capabilities and attributes for a user are provided via cognitive ability data (CAD), and are identified as respective cognitive domains for the user. In some embodiments, cognitive capabilities and attributes for a user that are identified as being deficient per the respective CAD (e.g., deficient compared to a general user population) are identified as deficient cognitive domains for the user. In some embodiments, cognitive capabilities and attributes for a user that are identified as being superior per the respective CAD (e.g., superior compared to a general user population) are identified as superior cognitive domains for the user. In some embodiments, CAD is generated based on a cognitive assessment test taken by the user. In some embodiments, the cognitive assessment test comprises a format and/or questions as known in the art. In some embodiments, CAD is generated based on one or more cognitive assessment tests taken. In some embodiments, the one or more cognitive assessment tests are taken in series. In some embodiments, the cognitive assessment test is a computer based test. [0043] FIG. 3 provides an exemplary flow chart for generating CAD. In some embodiments, as described herein, a user takes one or more cognitive assessment tests 302. In some embodiments, the user also provides characteristics information 304, such as demographic data. In some embodiments, demographic data comprises gender, geography, ethnicity, handedness, educational attainment, date of birth, others, or a combination thereof. In some embodiments, a 3rd party or a user also provides historical information 305 through, e.g., an application programming interface (API). In some embodiments, historical information comprises non-cognitive data, e.g., the user’s school attendance record, prior courses, prior course grades, grade level, or grade level completion. In some embodiments, 3rd party data further comprises demographic data, learner outcomes data, product adoption data, health outcome data, scientific data, publicly available data, consumer behavior data, attitudinal response data, social media data, etc., or a combination thereof. In some embodiments, the results from the one or more cognitive assessments, the characteristics information, and/or the historical information are processed to generate 306 said cognitive ability data (CAD). In some embodiments, such CAD is then provided and used by systems and methods described herein for determining what assessment parameters need to be adjusted.
[0044] In some embodiments, the CAD is obtained in digital format. As described herein, in some embodiments, the CAD comprises a pre-derived dataset that has been through processes that enable the scientific validity of said cognitive assessment and generation of CAD to be satisfied. In some embodiments, the systems and methods disclosed herein do not alter the CAD as obtained, but only use the CAD in determining what adjustment to make to parameters for an assessment. In some embodiments, the CAD comprises an indication of a specific cognitive ability. In some embodiments, the CAD comprises a relative measure of a cognitive ability for a user, by providing such relative measure as one of a number of statistical formats such as T-scores, z-scores, standard scores, percentiles, scale scores, scaled scores, standard deviations, stanines, percentage stanines, other such statistical methods, or a combination thereof.
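For illustration, the following is a minimal Python sketch, assuming a normal population model with a known mean and standard deviation for a domain, of how a raw cognitive-domain score might be expressed in several of the relative formats named above (the function name and example values are illustrative assumptions, not part of the disclosure):

    import math

    def to_relative_measures(raw_score: float, pop_mean: float, pop_sd: float) -> dict:
        """Express one cognitive-domain score relative to a user population."""
        z = (raw_score - pop_mean) / pop_sd                 # z-score
        t = 50 + 10 * z                                     # T-score (mean 50, SD 10)
        standard = 100 + 15 * z                             # standard score (mean 100, SD 15)
        percentile = 50 * (1 + math.erf(z / math.sqrt(2)))  # percentile under a normal model
        return {"z": z, "T": t, "standard": standard, "percentile": percentile}

    # Example: a raw reading-speed score of 85 against a population mean of 100 (SD 15)
    print(to_relative_measures(85.0, 100.0, 15.0))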
[0045] In some embodiments, each score from a CAD for a user represents a single attribute (or cognitive ability) of the individual user’s cognitive abilities, corresponding to a given cognitive domain. In some embodiments, one or more cognitive abilities will be dependent on more than one attribute identified with a score (e.g., a given cognitive domain may be determined based on the score for two or more individual cognitive attributes). Accordingly, in some embodiments, there are relational attributes or relational abilities (between two or more cognitive attributes) which are functions of one another and may be represented as ratios, frequencies, fractions, proportions, means, medians, or modes. In some embodiments, the CAD comprises relational attributes, which may comprise intervals (or the difference) between two scores, ratios (division between two scores), and which may be labelled as “difference scores” or “discrepancy scores/analysis.” In some embodiments, multiple statistical techniques are available for determining such difference scores, wherein differences between a score for two cognitive attributes within a cognitive profile are analyzed to determine whether they are in a normal or expected range, and thereby identify an adjustment (based on this difference). In some embodiments, such information is useful even when an individual does not qualify for a difficulty and/or disability diagnosis. In some embodiments, the unit of measure is usually a standardized form (as described herein), or simply difference scores with base-rate information. For example, standardized scores used for relational attributes such as difference scores may be as follows (as an example): “the difference between cognitive domain 1 and cognitive domain 2 falls into the 16th percentile, which is an unusual difference.”
[0046] An exemplary formula for determining a relational attribute score is as follows:
(Σ(X) − min(X)) / (n(X) − 1)
[0047] Wherein x is a CAD score for a given cognitive domain, n(X) is the number of cognitive domains considered for this relational attribute, Σ(X) is the sum of each score of the cognitive domains considered for this relational attribute, and min(X) is the smallest score of all the scores for the cognitive domains considered for this relational attribute. As an example, a threshold may be specified for a relational attribute, wherein certain parameter adjustments are triggered if the calculated relational attribute score is greater than or less than the threshold. For example, the threshold may be defined by a standard score greater than 23. FIGs. 9A and 9B provide exemplary depictions of an output for a relational attribute. FIG. 9A provides a depiction as to the frequency of scores calculated for a relational attribute, as compared with a user population, with the threshold of 23 identified. FIG. 9B depicts the triggering of a parameter that becomes available or turned on once the threshold of 23 is met.
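A minimal Python sketch of this formula and of an administrator-specified threshold check follows (function names and the example scores are assumptions; the threshold value and its direction are whatever the administrator's rules specify, such as the threshold of 23 in the FIG. 9B example):

    def relational_attribute_score(scores: list[float]) -> float:
        # (sum of considered domain scores minus the smallest score) divided by
        # one less than the number of domains considered, per the formula above
        if len(scores) < 2:
            raise ValueError("at least two cognitive domains are required")
        return (sum(scores) - min(scores)) / (len(scores) - 1)

    def adjustment_triggered(scores: list[float], threshold: float) -> bool:
        # The administrator specifies the threshold and whether the trigger is
        # "greater than" or "less than"; "greater than" is assumed here
        return relational_attribute_score(scores) > threshold

    # Illustrative standard scores for three cognitive domains
    print(relational_attribute_score([110.0, 95.0, 82.0]))  # (287 - 82) / 2 = 102.5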
[0048] In some embodiments, the threshold depends on how closely the cognitive domains correlate. For example, if cognitive domain 1 and cognitive domain 2 correlate very highly, even a small difference between scores on these two cognitive domains might be clinically or educationally relevant (e.g., trigger an adjustment of one or more parameters). By contrast, if cognitive domain 3 and cognitive domain 4 do not correlate strongly, then a large difference in the corresponding cognitive scores may not be relevant at all.
[0049] In some embodiments, a computer implemented system, as described herein, processes raw results from a user’s cognitive assessment test to generate corresponding CAD and optionally stores said generated CAD in the CAD database. [0050] In some embodiments, the CAD is generated based on user responses to the cognitive assessment test, wherein content of the responses and/or other attributes related to the responses are considered for generating the CAD. For example, in some embodiments, cognitive data is based on response attributes such as response accuracy, speed of providing accurate responses, providing no response, and/or speed of providing inaccurate responses. In some embodiments, error rates and error types are relevant for some assessments, which may not always correlate to the inverse of correct responses, but can include the number of erroneous responses to an intentionally erroneous prompt (as opposed to errors on normal questions), or errors that are related in some way. For example, verbal learning tests will sometimes record words that were incorrectly recalled but semantically related to the target words. In some embodiments, such response attributes are further categorized based on specific cognitive abilities and/or cognitive biases, thereby generating cognitive data with regard to relational abilities (between cognitive abilities). In some embodiments, the cognitive assessment test output is a set of values for pre-determined cognitive abilities and attributes. In some embodiments, these values become the Cognitive Ability Data (CAD).
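A minimal sketch, with assumed field names, of deriving response attributes of the kind described above from raw test responses:

    from dataclasses import dataclass

    @dataclass
    class Response:
        answered: bool
        correct: bool
        seconds: float

    def response_attributes(responses: list[Response]) -> dict:
        answered = [r for r in responses if r.answered]
        correct = [r for r in answered if r.correct]
        errors = [r for r in answered if not r.correct]
        n = len(responses)
        return {
            "accuracy": len(correct) / n,
            "no_response_rate": (n - len(answered)) / n,
            "mean_correct_seconds": sum(r.seconds for r in correct) / max(len(correct), 1),
            "mean_error_seconds": sum(r.seconds for r in errors) / max(len(errors), 1),
        }

    print(response_attributes([Response(True, True, 4.2),
                               Response(True, False, 9.1),
                               Response(False, False, 0.0)]))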
[0051] As described herein, in some embodiments, cognitive abilities and/or attributes identified as being deficient or as a disability are identified as deficient cognitive domains. For example, a user identified with a deficient cognitive domain for reading speed refers to the user requiring more time (e.g., compared to general user population) to read through material provided in an assessment. Another example of a deficient cognitive domain is verbal comprehension, wherein a user may have more difficulty (e.g., compared to general user population) in understanding verbal instructions (e.g., provided in an assessment), and thus an exemplary parameter adjustment would be to provide subtitles.
Assessment Parameters
[0039] In some embodiments, as described herein, each assessment comprises one or more parameters that can be adjusted based on rules specified by an administrator. As described herein, in some embodiments, an assessment comprises a test from any type of organization (e.g., academic institution, government agency, employer, retailer, etc.), wherein the test may comprise any type of questions and formats and may be for any type of subject area or industry. In some embodiments, an assessment comprises an application or software for use by a user in connection with an employment, academia, any personal or leisure activity (e.g., a computer game), and/or business activity. For example, the assessment may refer to the software used, and the presentation of fonts, as part of employment, wherein productivity may be improved based on adjustment of certain parameters (e.g., font size, color, etc.). In some embodiments, an assessment refers to a product website or other software for presenting the product. For example, the assessment may refer to a website or interactive platform for an e-commerce product, wherein parameters relating to said website or interactive platform can be adjusted, based on a user’s CAD, so as to adapt specific features, journeys, and/or actions to support user experience.
[0052] In some embodiments, the one or more parameters for assessments can be categorized, such as presentation parameters, function activation parameters, time-based parameters, process/journey parameters, or marketing parameters. In some embodiments, a given assessment comprises presentation parameters, function activation parameters, time-based parameters, process/journey parameters, and/or marketing parameters, or a combination thereof.
[0053] In some embodiments, presentation parameters comprise parameters relating to the presentation of an assessment, including specific characteristics of the assessment. In some embodiments, presentation parameters comprise font type; font size; font color; spacing between letters, words, lines, and/or paragraphs; background color of an assessment (e.g., as displayed through a computing device as described herein); providing a digital screen overlay of any specific color; displaying the assessment (e.g., via a computing device as described herein) with an alternative template or Cascading Style Sheet (CSS) or other form of screen structure; limiting a maximum amount of text displayed to the user at any given time (e.g., max 5 lines) before moving on to the next segment of text; a change in tone of voice of presented text (e.g., from descriptive to emotional or analytical); or a combination thereof.
[0054] In some embodiments, function activation parameters comprise parameters relating to a function or operation that is activated or becomes available to be activated by the user based on a threshold adjustment criteria being met, as described herein. In some embodiments, function activation parameters comprise an ability for the user to activate speech-to-text software for an assessment (e.g., to enable speech to be recorded instead of typing); an ability for the user to access a dictionary or thesaurus; activation of subtitles; activation of audio descriptions; phonetic spell-correction software; a sign-language avatar (e.g., that uses British Sign Language) if video is used; an image based sign language substitute; or a combination thereof.
[0055] In some embodiments, time-based parameters comprise parameters relating to time as it relates to an assessment. In some embodiments, time-based parameters comprise enforcing a break during the assessment; time allowed for each question, section of questions, section of the assessment, and/or entire assessment, including, for example, time allowed to read, type, and/or think; enforcing a break during the test at a predetermined time (e.g., prior to the commencement of the next question after 50% of time has elapsed and/or after a finite amount of time such as 40 mins); enforcing a break for a finite period (e.g., 20 mins); forcing slower progression by the user through a task (e.g., by monitoring how fast a user is responding and, if beyond a predetermined maximum speed, preventing a response within a specific timeframe from an event, such as showing a question, and/or activating a pop-up recommending the user slow down); or a combination thereof.
[0056] In an example of parameters being adjusted, an adjustment of background and text colors (e.g., typeface) for an assessment software application (e.g., Microsoft® Word) can be specified. In some embodiments, the systems and methods described herein are configured to determine the most appropriate background and text colors to improve comprehension and thereby instruct the software application (e.g., Microsoft® Word) which colors to adopt. In some embodiments, the systems and methods described herein are configured to instruct the software application to specify a font type, a font size, and letter, word, and line spacing, all of which may improve a cognitive ability. In some embodiments, the software application corresponding to an assessment comprises a browser, wherein adjustment of typeface comprises changing a webpage’s cascading style sheets (CSS) formatting to better support a viewer.
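A minimal sketch of such a CSS-level presentation adjustment, assuming hypothetical cognitive-domain names, percentile thresholds, and style values (an administrator's rules would supply real ones):

    def css_override(cad: dict) -> str:
        # Domain names and the 16th-percentile thresholds below are assumptions
        rules = []
        if cad.get("visual_processing_percentile", 100.0) < 16.0:
            rules.append("background-color: #FFF8E1; color: #1A1A1A;")  # softer contrast
        if cad.get("reading_speed_percentile", 100.0) < 16.0:
            rules.append("font-size: 1.25em; letter-spacing: 0.05em; line-height: 1.8;")
        return "body.assessment { %s }" % " ".join(rules) if rules else ""

    print(css_override({"reading_speed_percentile": 9.0}))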
[0057] In some embodiments, process/journey parameters are configured to change the experience journey that a user is taken on as the user moves through the software (e.g., through a learning module, a given set of tasks, etc.). In some embodiments, in an exemplary assessment environment, a user may have the questions asked in a different order, possibly due to where their cognitive strengths or weaknesses lie (e.g., identified via CAD as described herein), so that they are less cognitively exhausted as they progress through the test. In some embodiments, such questions are asked in a different order by staggering questions that are heavy on parts of their cognition where the user is identified as being weaker and spacing them out with questions the user is likely to find less intense due to a natural strength in the processing tasks involved. In some embodiments, the assessment environment is an e-learning course a school has purchased to teach about the speed of light. In such an exemplary embodiment, for a numerical learner with strong visual perception, vectors may be used to communicate speed; however, for a language-strong individual, comparisons to real life may be applied instead.
[0058] Accordingly, the process/journey parameters are not focused only on presentation adjustments, but also on the process the software takes the user through, which affects outcomes. In some embodiments, a system described herein identifies the optimum process parameters, as well as other parameters, so as to markedly improve outcomes.
[0059] In some embodiments, the marketing parameter comprises modifying the baseline question wherein the modification relies more on (“markets”) a determined cognition than the baseline question. In some embodiments, the marketing comprises presenting the question to rely more on the user’s quantitative / data cognition compared to the baseline question. For example, the question may market numbers or equations to present part or all of the baseline question. In some embodiments, the marketing comprises presenting the question to rely more on the user’s emotional cognition compared to the baseline question. For example, the question may market a story that appeals to the user’s emotions to present part or all of the baseline question. In some embodiments, the marketing comprises presenting the question to rely more on the user’s visual cognition compared to the baseline question. For example, the question may market an image to describe part or all of the baseline question.
[0060] In some embodiments, the assessment refers to environments other than tests and assessments, such as at a place of employment. In some embodiments, with reference to the assessment software application example (e.g., Microsoft® Word), adjustment of certain parameters could be used to improve productivity. For example, the CAD could be used to alter the contrast ratio between font and background of an application software by changing the color of each to be more sympathetic to the structure of cones and rods in a user’s retina, thereby reducing strain and increasing the length of time the user is able to concentrate on the screen.
Administrator Rules and Adjustment Criteria
[0061] In some embodiments, as described herein, each assessment receives rules (e.g., instructions as described herein) that provide criteria for adjusting (adjustment criteria) one or more assessment parameters based on the CAD for a given user. In some embodiments, the adjustment criteria for an assessment parameter corresponds with a deficient cognitive domain or superior cognitive domain, as described herein. For example, in some embodiments, an administrator specifies an increase in time permitted for an assessment or a section of an assessment for those users identified with a deficient cognitive domain relating to reading speed. In some embodiments, the adjustment criteria may specify a threshold of the deficient cognitive domain, as described herein, to trigger an assessment parameter adjustment. For example, a user with a deficient cognitive domain correlating to reading speed, wherein the user is in the bottom 15th percentile for said reading speed, would trigger an assessment parameter adjustment. In another example, a user may be identified with two deficient cognitive domains, and one normal cognitive domain, that together are used to correlate an assessment parameter adjustment criteria. In this case, depending on the weight of each cognitive domain for the correlation, the assessment parameter is adjusted accordingly (e.g., see Example 3).
[0062] In some embodiments, for a given assessment, the administrator will identify all the available parameters. As described herein, there are a plurality of types of assessments that may be available, and thus, the administrator may consider the specific subject area and/or industry of the assessment when determining the parameters applicable for being associated with an adjustment criteria. In some embodiments, different assessments will have cognitive abilities that are more pertinent and important for the given subject area, and thus corresponding parameters to aid those pertinent and important cognitive abilities would not be provided with an adjustment criteria, or would be provided with a minimal adjustment criteria. For example, an assessment that is an English proficiency test for immigration would not provide parameter adjustments such as automatic spellchecks or dictionary support. In another example, assessments for industries where speed is a requirement (e.g., pilot, air-traffic controller) would not provide parameter adjustments for extra time.
[0063] In some embodiments, the administrator will evaluate each parameter one by one and assign rules (instructions) as needed. In some embodiments, administrators will provide rules to set parameters by establishing a minimum/maximum threshold of cognitive performance, which will likely be set by one or both of (1) statistical properties (e.g., standard deviations of the population for one or more cognitive abilities or attributes) (e.g., see FIG. 6), and/or (2) cognitive properties such as relationships between two or more cognitive data-points (e.g., see FIG. 7). In some embodiments, a reasonable adjustment of a parameter is configured to be activated by just a statistical property, by just a cognitive property, or both.
[0064] In some embodiments, a reasonable adjustment module is used to calculate an adjustment for each parameter for an assessment with respect to a given user. As described herein, in some embodiments, and with reference to a given user, mathematical models are used to calculate whether an assessment parameter is to be adjusted and by how much. In some embodiments, metadata is also used in calculating a parameter adjustment. In some embodiments, metadata comprises Question Data and Assessment Data. In some embodiments, Question Data comprises data specifically related to questions/prompts as part of an assessment. In some embodiments, Question Data comprises a) question word count, b) question format (e.g., open text, multiple choice, equation, etc.), c) whether a question includes video, and/or d) whether a question includes audio. In some embodiments, Assessment Data comprises data specifically related to the assessment itself or a section of the assessment. In some embodiments, Assessment Data comprises a) total length of an assessment, b) number of questions as part of the assessment, and/or c) assessment modality (e.g., written, oral, multiple choice, etc.). As an example, if metadata for a question is n = number of words and s = average syllables per word, the calculation may be that a time-based adjustment only applies where s > 1.5 or n > 20. In some embodiments, the adjustment criteria is targeted to specific areas of the assessment, where it is deemed most needed, thereby reducing any unintended benefits of a parameter adjustment. For instance, if an adjustment for speed of reading is applied (e.g., 30% extra time) and a question has < 10 words but includes a diagram, then the adjustment may not apply. [0065] In some embodiments, there are a plurality of types of adjustments that may be available for a parameter. In some embodiments, a parameter is binary, in that it is either on (e.g., activated) or off (e.g., not activated). An adjustment criteria would specify what cognitive domain (e.g., deficient cognitive domain, superior cognitive domain) would either activate or deactivate said parameter. An example of a binary parameter is the option for a user to activate a speech-to-text option for the assessment, based on the corresponding cognitive domain (per the user CAD described herein) meeting the required threshold, and thereby enabling a user to provide a response orally and have it entered in text.
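A minimal sketch of the metadata gate from the example above, where a time-based adjustment applies only where s > 1.5 or n > 20 and is skipped for a short question (< 10 words) carrying a diagram (the function name and signature are assumptions):

    def extra_time_applies(n_words: int, avg_syllables: float, has_diagram: bool) -> bool:
        # Skip the adjustment for a short question carrying a diagram, so the
        # reading-speed accommodation yields no unintended benefit
        if has_diagram and n_words < 10:
            return False
        # The metadata gate from the example: s > 1.5 or n > 20
        return avg_syllables > 1.5 or n_words > 20

    print(extra_time_applies(n_words=25, avg_syllables=1.2, has_diagram=False))  # True
    print(extra_time_applies(n_words=8, avg_syllables=1.7, has_diagram=True))    # False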
[0066] In some embodiments, a parameter is both binary and graduated, wherein the parameter becomes activated upon a cognitive domain meeting a required threshold, and once activated, the parameter adjustment is graduated. For example, an amount of a parameter (e.g., amount of extra time allowed for an assessment) may be dependent on the respective cognitive capability of the user. In some embodiments, mathematical models, such as linear regression, non-linear equations, or other types are used for determining a specific amount of parameter adjustment (e.g., a specific amount of extra time).
[0067] Compared to methods described herein, manual adjustment of graduated parameters by administrators limited the number of different graduated ranges available, considering the difficulty and burden on administrators of evaluating a specific graduated adjustment for each eligible user. For example, regarding the parameter of extra allowed time for an assessment, the range available for adjustment may have been 25%, 50%, 75%, or 100% of extra time allowed. For example, if a test is allotted 1 hour, 50% extra time would provide an additional 30 minutes. Accordingly, a first user having a cognitive level of a deficient cognitive domain requiring only 26% of an amount of extra time would get the same amount of extra time as a second user requiring 49% of an amount of extra time (e.g., 50%), thereby providing an advantage to the first user not only compared with the second user, but with all users. By contrast, in some embodiments, the systems and methods described herein enable the application of far more accurate graduations based on mathematical modelling of the population, in theory achieving almost infinite graduations as opposed to the current 4 graduations (25%, 50%, 75%, and 100%). For example, with reference to the extra time allotted parameter, in some embodiments, mathematical models, such as linear extrapolation, linear regression, non-linear equations, or other types, are used to calculate a precise amount of extra time provided to a user that correlates with a specific cognitive level of a deficient cognitive domain (via the corresponding CAD described herein). [0068] In some embodiments, a parameter is both binary and provides multiple options, wherein the parameter is either available or not available, or wherein a parameter comprises multiple options, depending on a corresponding cognitive level of a cognitive domain of the user. Compared to methods described herein, manual adjustment of parameters by administrators limited the options available, e.g., numerous options would become more burdensome for the administrator to evaluate. For example, in some embodiments, a parameter reflects a color displayed for a given question or section of an assessment, wherein the three options available for adjustment are one of red, green, or blue, each option correlating with a cognitive level for a corresponding cognitive domain. Other options that may be available include a choice of font type, font size, playback speed of text-to-speech voice, etc. In some embodiments, instead of providing options, the system and/or methods described herein would automatically determine an optimum font type and/or size for the reader to process, and assign it to the corresponding assessment, and/or the system would automatically determine an optimum color for text color/background color and assign this to the assessment. By contrast, in some embodiments, the systems and methods described herein accommodate more options than would be practical for a human to consider, with these options also being more targeted to the specific needs of the individual user.
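A minimal sketch of such a binary-then-graduated time parameter, assuming an illustrative 16th-percentile activation threshold and a simple linear graduation (the disclosure contemplates linear extrapolation, linear regression, non-linear equations, or other models; the numbers below are assumptions):

    def extra_time_fraction(percentile: float,
                            activation_percentile: float = 16.0,
                            max_extra: float = 1.0) -> float:
        # Binary gate: no adjustment unless the domain falls below the threshold
        if percentile >= activation_percentile:
            return 0.0
        # Graduated adjustment: linearly map the percentile onto (0, max_extra],
        # in place of the coarse 25% / 50% / 75% / 100% steps
        return max_extra * (activation_percentile - percentile) / activation_percentile

    base_minutes = 60
    for p in (2.0, 8.0, 15.0, 40.0):
        extra = extra_time_fraction(p)
        print(f"percentile {p:>4}: +{extra:.0%} extra time -> {base_minutes * (1 + extra):.1f} min")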
[0069] In some embodiments, the administrator will not need to apply adjustments on a case-by-case basis for the one or more users, as the systems and methods described herein will pre-set the adjustment criteria, so as to automate the evaluation of each adjustment against the respective cognitive assessment data (CAD) of each user, and thereby determine if the parameter adjustment applies and optionally its graduation or option. In some embodiments, the adjustment of parameters is more rigorous through automation, as described herein, as opposed to manual adjustment wherein a human needs to apply the eligibility criteria.
[0070] In some embodiments, the adjustment criteria is configured to continuously be improved over time by using statistical computation. In some embodiments, the initial adjustment criteria (as provided by an administrator) for a parameter adjustment is data driven. In some embodiments, the impact of the parameter adjustment is reviewed once enough data is gathered, thereby allowing more complex feedback loops to be introduced over time to improve the parameter adjustment (e.g., machine learning process). In some embodiments, the machine learning process can be incorporated with the system and methods described herein, thereby fully automating said systems and methods over time, including totally removing the human from the initial set-up (e.g., setting of adjustment criteria).
[0071] Using the systems and methods described herein, the process can be applied immediately, close to the timing of the assessment, without delaying any assessment whilst RAs are reviewed and applied. The cost of the process would be reduced by at least about 50%, 60%, 70%, 80%, 90%, 95%, or more. Human error would be removed from all but the initial set-up criteria.
Machine Learning Process
[0072] In some embodiments, the systems and methods described herein comprise a machine learning process for modifying the specified adjustment criteria for one or more parameters provided by an administrator. In some embodiments, responses from a plurality of users for an assessment are analyzed to determine the effectiveness of the parameter adjustment for one or more parameters. In some embodiments, wherein the responses to an assessment do not indicate an improvement or have no impact related to a given parameter adjustment, the adjustment criteria can be modified to improve the performance related to the corresponding cognitive domain (e.g., a specific cognitive capability, such as reading speed, which may be deficient or superior compared to a general population of users). In some embodiments, no improvement or no impact corresponds to insufficient impact by the parameter adjustment (e.g., an insufficient improvement to adequately compensate for the identified cognitive domain of the user). In some embodiments, no improvement corresponds to a negative impact, such as a further worsening of a performance on an aspect of the assessment relating to an identified cognitive domain (e.g., compared to if the parameter had not been adjusted). In some embodiments, the adjustment criteria is automatically modified (e.g., by a machine learning process described herein) and further refined as responses from more users are received relating to the same assessment and/or different assessment with comparable parameter adjustments. In some embodiments, an administrator will manually update the adjustment criteria. In some embodiments, the machine learning process will identify other parameters that may be more impactful in improving the performance related to a corresponding cognitive domain.
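A minimal sketch of one such feedback step follows. The comparison-of-means design, the minimum-gain figure, and the direction in which the threshold moves are all assumptions; the disclosure leaves the specific modification policy to the machine learning process or the administrator:

    def revised_threshold(adjusted_scores: list[float],
                          unadjusted_scores: list[float],
                          threshold: float,
                          min_gain: float = 2.0,
                          step: float = 1.0) -> float:
        mean = lambda xs: sum(xs) / len(xs)
        gain = mean(adjusted_scores) - mean(unadjusted_scores)
        if gain < 0:             # negative impact: assumed response is to tighten
            return threshold - step
        if gain < min_gain:      # insufficient impact: assumed response is to widen
            return threshold + step
        return threshold         # adequate impact: criteria left unchanged

    print(revised_threshold([61.0, 63.5], [62.0, 64.0], threshold=16.0))  # 15.0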
[0073] In some embodiments, the machine learning process identifies aspects of an assessment that need to be improved. For example, in some embodiments, the system, as described herein, may determine that a high proportion of users identified with a certain deficient cognitive domain continue to have difficulty with an aspect of an assessment despite corresponding parameters being adjusted. Accordingly, in some embodiments, the system, as described herein, may identify the aspects of the assessment that need to potentially be adjusted. For example, there may be one question out of ten questions wherein more than 50% of users with a deficient cognitive domain related to reading comprehension are likely to perform poorly with this question. Accordingly, the system (e.g., evaluation module 111), as described herein, may prompt the question to be re-evaluated (e.g., the question may be worded poorly). [0074] In some embodiments, a computer-implemented method is provided for predicting a plurality of outcomes for an assessment taken by a user comprising: a) providing a cognitive dataset associated with the user; b) providing a 3rd party dataset associated with the user; c) transforming the cognitive dataset and the 3rd party dataset into a combined dataset; d) generating a predictive model from the combined dataset; and e) using the predictive model to predict the plurality of outcomes for the assessment. In some embodiments, the transforming the cognitive dataset and the 3rd party dataset into a combined dataset comprises auditing, merging, and/or cleaning the combined dataset. In some embodiments, the dataset comprises a plurality of search indexes. In some embodiments, the generating a predictive model from the combined dataset comprises updating the predictive model. In some embodiments, the predictive model is updated using a plurality of new cognitive data associated with the user, a plurality of new 3rd party data associated with the user, or a combination thereof. In some embodiments, the plurality of outcomes for the assessment comprises a plurality of adjusted parameters for the assessment. In some embodiments, the plurality of adjusted parameters improves the outcome of the assessment compared to an assessment with no adjusted parameters.
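A minimal sketch of this pipeline, with toy data standing in for the cognitive and 3rd party datasets (all values are illustrative assumptions; a single derived feature and Python 3.10's statistics.linear_regression stand in for a production multi-feature model):

    import statistics

    # Toy datasets keyed by user
    cognitive = {"u1": 92.0, "u2": 78.0, "u3": 105.0}     # e.g., a CAD score
    third_party = {"u1": 0.88, "u2": 0.61, "u3": 0.95}    # e.g., attendance rate
    outcomes = {"u1": 71.0, "u2": 55.0, "u3": 83.0}       # past assessment results

    # Transform: audit and merge into a combined dataset (keep complete users only)
    users = sorted(set(cognitive) & set(third_party) & set(outcomes))
    combined = [(cognitive[u] * third_party[u], outcomes[u]) for u in users]

    # Generate the predictive model: simple least-squares over the combined feature
    xs, ys = zip(*combined)
    model = statistics.linear_regression(xs, ys)

    # Use the model to predict an outcome for a new user
    new_feature = 88.0 * 0.75
    print(model.slope * new_feature + model.intercept)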
Inferred Cognition
[0075] In some embodiments, a system described herein is configured to identify an individual’s cognition or cognitive abilities without the need to map out via CAD or other cognitive assessments. For example, in some embodiments, wherein a sufficient number of users with profiles/CAD have used a system, as described herein, and the corresponding behavior has been tracked (e.g., mouse clicks, time spent on a section, interactions with content, personal settings, etc.) to obtain behavioral data, the system is configured to infer that relationship by taking such behavioral data and to build out inferred cognitive data about new users of the software without profiles/CAD. In some embodiments, the inferred cognitive data will initially be broad inferences, such as a more significant cognitive bias (e.g., a heavy non-verbal bias or weak memory), but could quickly become more detailed based on the engagement by the new user with the system and with the adaptations. In some embodiments, the system would iterate its inferred understanding of the new users’ cognition and adapt based on this improving understanding.
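A minimal sketch of one way such iteration might look, assuming behaviour-derived percentile estimates and a simple exponential blending weight (both assumptions; the disclosure does not specify the update rule):

    def update_inferred(estimate: float, behavioural_signal: float,
                        weight: float = 0.2) -> float:
        # Blend a new behaviour-derived estimate (e.g., a reading-speed percentile
        # implied by time-on-section) into the running inferred value
        return (1 - weight) * estimate + weight * behavioural_signal

    inferred_reading_speed = 50.0          # broad prior for a new user without CAD
    for signal in (30.0, 25.0, 28.0):      # successive behaviour-derived estimates
        inferred_reading_speed = update_inferred(inferred_reading_speed, signal)
    print(round(inferred_reading_speed, 1))   # drifts from the prior toward behaviour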
[0076] In some embodiments, this inferred cognition capability allows the system described herein to make adjustments and measure their effectiveness against expectations, and to refine the inferred dataset about a given user, over time building an increasingly accurate inferred dataset on said user, while ensuring said user continues to have cognitively personalized experiences tailored to their respective needs.
Computer Implemented System
[0077] FIG. 1 provides an exemplary depiction of a computer implemented system 100 described herein. In some embodiments, the system 100 comprises a network 102, a cognitive ability data (CAD) database 104, one or more administrator servers 106, and one or more user devices (110-1, 110-2, 110-n, wherein n denotes the number of users). Each of the components 104, 106, and 110 may be operatively connected to one another via network 102 or any type of communication links that allows transmission of data from one component to another.
[0078] In some embodiments, the administrator terminal 106 comprises a server. In some embodiments, one or more assessments and associated parameters are stored on the administrator server 106. In some embodiments, the administrator terminal 106 comprises an administrator computing device 107, wherein the one or more assessments are stored. In some embodiments, the administrator server comprises an Assessment Module 109 stored therein, or stored in an administrator computing device 107. In some embodiments, one or more assessments are stored in the Assessment Module 109. In some embodiments, each assessment taken by a user is stored by the user in their own cloud, or the user may have bought or sourced a software as a service (SaaS) based assessment platform, wherein the assessment is then stored in the provider’s cloud. In some embodiments, the administrator server 106 comprises a Reasonable Adjustment (RA) module 108. In some embodiments, the RA module is configured to adjust one or more parameters of an assessment. In some embodiments, an administrator is able to use the administrator computing device 107 to provide instructions (e.g., rules as described herein) to be stored with the RA module for a given assessment. In some embodiments, the CAD database is in communication with the administrator server 106, such that the CAD for a user is accessible by the RA module. In some embodiments, the RA module is configured to adjust one or more parameters for an assessment based on the instructions stored therein (e.g., from the administrator), and based on the CAD retrieved from the CAD database for a given user. In some embodiments, the administrator server comprises a processor configured to execute the instructions provided by the administrator (e.g., instructions provided to the RA module). In some embodiments, each parameter of the assessment will be evaluated by the RA module, based on the administrator instructions, and adjusted as needed.
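A minimal sketch of the RA module's per-parameter evaluation loop (data shapes, the domain name, and the threshold are assumptions): every parameter of the assessment is checked against its adjustment criteria using the CAD retrieved for the user, and adjusted only where the criteria are met.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class AdjustmentCriteria:
        domain: str                         # cognitive domain consulted in the CAD
        applies: Callable[[float], bool]    # e.g., a percentile threshold test
        value: Callable[[float], object]    # binary, graduated, or options value

    def apply_reasonable_adjustments(parameters: dict, cad: dict,
                                     rules: dict) -> dict:
        adjusted = dict(parameters)
        for name, criteria in rules.items():
            score = cad.get(criteria.domain)
            if score is not None and criteria.applies(score):
                adjusted[name] = criteria.value(score)
        return adjusted

    rules = {"subtitles": AdjustmentCriteria("verbal_comprehension",
                                             lambda p: p < 16.0, lambda p: True)}
    print(apply_reasonable_adjustments({"subtitles": False},
                                       {"verbal_comprehension": 9.0}, rules))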
[0079] In some embodiments, a user device 110 comprises, for example, one or more computing devices configured to perform one or more operations consistent with the disclosed embodiments. For example, a user device may be a computing device that is capable of executing software or applications provided by the administrator server 106 (e.g., executing instructions for an assessment and corresponding adjusted parameters). In some embodiments, the assessment is hosted by the administrator server on one or more interactive webpages and accessed by the one or more users via the respective user devices 110. In some embodiments, the one or more users, as described herein, comprise employees of a company, job candidates, job-seekers, students, any individual, a group of individuals, etc. In some embodiments, the assessment is presented to a user (e.g., subject) via an interface (e.g., display for a computing device), wherein the assessment is further configured to receive input from the user (e.g., via input devices as described herein, such as keyboard, mouse, camera, microphone, etc.). FIG. 2 provides an exemplary relationship between the administrator server, assessment, and user. In some embodiments, the input received from the user for an assessment is received and analyzed by an Evaluation Module (“Eval Module”) 111 (see FIG. 1), stored on the administrator server. In some embodiments, the input received from the assessment is reported to one or more end users by the Eval Module. In some embodiments, the end users comprise administrators.
[0080] In some embodiments, a user device comprises, among other things, desktop computers, laptops or notebook computers, mobile devices (e.g., smart phones, cell phones, personal digital assistants (PDAs), and tablets), or wearable devices (e.g., smartwatches). In some embodiments, a user device can also include any other media content player, for example, a set-top box, a television set, a video game system, or any electronic device capable of providing or rendering data. In some embodiments, a user device comprises known computing components, such as one or more processors, and one or more memory devices storing software instructions executed by the processor(s) and data. [0081] In some embodiments, the system 100 comprises a plurality of user devices. In some embodiments, each user device is associated with a user. As described herein, users may include employees of a company, candidates for a job position, jobseekers, students, teachers, instructors, professors, administrators, individuals, groups of individuals, or a combination thereof. In some embodiments, more than one user is associated with a user device. Alternatively, more than one user device is associated with a user. In some embodiments, the users are located geographically at a same location, for example a school or testing center, or a particular geographical location. In some instances, some or all of the users and user devices are at remote geographical locations (e.g., different cities, countries, etc.).
[0082] As depicted in FIG. 1, in some embodiments, the system 100 comprises a plurality of nodes. In some embodiments, each user device in the system corresponds to a node. If a “user device 110” is followed by a number or a letter, it means that the “user device 110” may correspond to a node sharing the same number or letter. For example, as shown in FIG. 1, user device 110-1 may correspond to node 1 which is associated with user 1, user device 110-2 may correspond to node 2 which is associated with user 2, and user device 110-n may correspond to node n which is associated with user n, where n may be any integer greater than 1.
[0083] In some embodiments, a node is a logically independent entity in the system. Therefore, in some embodiments, the plurality of nodes in the system can represent different entities. For example, each node may be associated with a user, a group of users, or groups of users.
[0084] In some embodiments, a user device is configured to receive input from one or more users. In some embodiments, a user provides an input to a user device using an input device, for example, a keyboard, a mouse, a touch-screen panel, voice recognition and/or dictation software, other such input methods, or any combination of the above. In some embodiments, different users provide different input, depending on their CAD and adjusted parameters for an assessment.
[0085] In the embodiment of FIG. 1, two-way data transfer capability may be provided between the network, administrator server, and each user device. In some embodiments, the user devices are configured to communicate with one another. In some embodiments, the user devices communicate directly with one another via a peer-to-peer communication channel. In some embodiments, the peer-to-peer communication channel helps to reduce workload on the server by utilizing resources (e.g., bandwidth, storage space, and/or processing power) of the user devices.
[0086] In some embodiments, the administrator server 106 comprises one or more server computing devices 107 configured to perform one or more operations consistent with disclosed embodiments. In some embodiments, an administrator server 106 is implemented as a single computing device (e.g., computer, tablet, smartphone, or others as described herein). In some embodiments, a user device communicates with the administrator server through the network 102.
[0087] In some embodiments, a user device may be directly connected to the administrator server through a separate link (not shown in FIG. 1). In certain embodiments, the administrator server may be configured to operate as a front-end device configured to provide access to one or more assessments consistent with certain disclosed embodiments. In some embodiments, the administrator server processes input data from a user device with respect to an assessment (e.g., via the Evaluator Module 111). In some embodiments, the administrator server analyzes input data from a user device with respect to an assessment to determine the impact of adjustments made to one or more parameters. In some embodiments, the administrator server is configured to store responses by users to one or more assessments in a memory module on the administrator server. In some embodiments, the server is configured to search, retrieve, and analyze data and information stored in the memory module. In some embodiments, the data and information comprise a user’s historical performance in one or more assessments, as well as adjustments made to corresponding parameters for all such assessments. [0088] In some embodiments, an administrator server comprises a web server, an enterprise server, or any other type of computer server, and can be computer programmed to accept requests (e.g., HTTP, or other protocols that can initiate data transmission) from a computing device (e.g., a user device) and to serve the computing device with requested data. In some embodiments, the administrator server is a server in a data network (e.g., a cloud computing network).
[0089] In some embodiments, the administrator server comprises known computing components, such as one or more processors, one or more memory devices storing software instructions executed by the processor(s), and data. In some embodiments, the administrator server comprises one or more processors and at least one memory for storing program instructions. In some embodiments, the processor(s) are a single or multiple microprocessors, field programmable gate arrays (FPGAs), or digital signal processors (DSPs) capable of executing particular sets of instructions. Computer-readable instructions can be stored on a tangible non-transitory computer-readable medium, such as a flexible disk, a hard disk, a CD-ROM (compact disk read-only memory), an MO (magneto-optical) disk, a DVD-ROM (digital versatile disk read-only memory), a DVD-RAM (digital versatile disk random-access memory), or a semiconductor memory. Alternatively, in some embodiments, the methods disclosed herein are implemented in hardware components or combinations of hardware and software such as, for example, ASICs, special purpose computers, or general purpose computers. While FIG. 1 illustrates the administrator server as a single server, in some embodiments, multiple devices may implement the functionality associated with the server.
[0090] In some embodiments, the network is configured to provide communication between various components of the system 100 depicted in FIG. 1. The network may be implemented, in some embodiments, as one or more networks that connect devices and/or components in the network layout for allowing communication between them. For example, as one of ordinary skill in the art will recognize, the network may be implemented as the Internet, a wireless network, a wired network, a local area network (LAN), a wide area network (WAN), Bluetooth, Near Field Communication (NFC), or any other type of network that provides communications between one or more components of the network layout. In some embodiments, the network may be implemented using cell and/or pager networks, satellite, licensed radio, or a combination of licensed and unlicensed radio. The network may be wireless, wired, or a combination thereof.
[0091] In some embodiments, the RA module 108 is implemented as one or more computers storing instructions that, when executed by one or more processor(s), process administrator-provided instructions for adjustment criteria of one or more parameters for an assessment, wherein cognitive ability data (CAD) is retrieved for each user from a CAD database and compared against the adjustment criteria, to determine, for each user, any adjustments to the parameters for a specific assessment. In some embodiments, the administrator server comprises the computing device 107 in which the RA module is implemented. In some embodiments, an administrator server computing device comprises, among other things, desktop computers, laptops or notebook computers, mobile devices (e.g., smart phones, cell phones, personal digital assistants (PDAs), and tablets), or wearable devices (e.g., smartwatches). In some embodiments, the RA module is implemented on separate computing devices from the administrator server. For example, an administrator may provide instructions to the computing device 107, and the administrator server then connects with an RA module, via the network, located on a different server or different computing device. In some embodiments, the RA module comprises software stored in memory accessible by the administrator server (e.g., in a memory local to the server or remote memory accessible over a communication link, such as the network). In some embodiments, the RA module comprises an algorithm for processing the adjustment criteria of one or more parameters.
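To make the comparison described above concrete, the following is a minimal Python sketch of how adjustment criteria might be represented and checked against a user's CAD. The disclosure does not prescribe an implementation; the class, field names, and threshold semantics here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AdjustmentCriterion:
    """Hypothetical administrator-supplied rule linking a cognitive domain to a parameter."""
    parameter: str               # e.g., "extra_time"
    cognitive_domain: str        # e.g., "reading_speed"
    percentile_threshold: float  # activate when the user's percentile falls below this

def determine_adjustments(cad: dict, criteria: list) -> dict:
    """Compare a user's CAD (cognitive domain -> percentile) against each criterion
    and return the parameters to be activated for this user."""
    adjustments = {}
    for criterion in criteria:
        percentile = cad.get(criterion.cognitive_domain)
        if percentile is not None and percentile < criterion.percentile_threshold:
            adjustments[criterion.parameter] = True
    return adjustments

# A user in the 12th percentile for reading speed qualifies for extra time.
user_cad = {"reading_speed": 12.0, "working_memory": 55.0}
criteria = [AdjustmentCriterion("extra_time", "reading_speed", 16.0)]
print(determine_adjustments(user_cad, criteria))  # {'extra_time': True}
```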
[0092] In some embodiments, the user devices, the administrator server, and the RA module are connected or interconnected to a cognitive ability data (CAD) database 104. In some embodiments, the CAD database comprises one or more memory devices configured to store data (e.g., cognitive ability data for a plurality of users). Additionally, the CAD database may also, in some embodiments, be implemented as a computer system with a storage device. As described herein, in some embodiments, the CAD database is used by the RA module to retrieve corresponding CAD for a user. In some embodiments, a user uploads corresponding CAD from a previous cognitive assessment test to be stored in the CAD database. In some embodiments, a user takes a cognitive assessment test, wherein the results are processed and stored as CAD in the CAD database. In some embodiments, the CAD database is co-located with the administrator server. One of ordinary skill will recognize that the disclosed embodiments are not limited to the configuration and/or arrangement of the database(s). In some embodiments, the CAD database is cloud-based.
[0093] As described herein, any of the user devices, the administrator server, the CAD database and/or the RA module may, in some embodiments, be implemented as a computer system. Additionally, while the network is shown in FIG. 1 as a “central” point for communications between components of the system, the disclosed embodiments are not limited thereto. For example, one or more components of the system 100 may be interconnected in a variety of ways, and may in some embodiments be directly connected to, co-located with, or remote from one another, as one of ordinary skill will appreciate. Additionally, while some disclosed embodiments may be implemented on the server, the disclosed embodiments are not so limited. For instance, in some embodiments, other devices (such as one or more user devices) may be configured to perform one or more of the processes and functionalities consistent with the disclosed embodiments, including embodiments described with respect to the server and the RA module.
[0094] In some embodiments, the CAD is communicated from the CAD database to the administrator server through secure or unsecured means. In some embodiments, the CAD is communicated from the CAD database to the administrator server via a CSV file, a database, or an SQL method, and/or through blockchain technology.
[0095] Although particular computing devices are illustrated and networks described, it is to be appreciated and understood that other computing devices and networks can be utilized without departing from the spirit and scope of the embodiments described herein. In addition, one or more components of the system may be interconnected in a variety of ways, and may in some embodiments be directly connected to, co-located with, or remote from one another, as one of ordinary skill will appreciate.
[0096] In some embodiments, the system 100 is at least partly presented and interacted through a website, cloud deployed software, virtual reality (VR) environments, augmented reality (AR) environments, extended reality (XR) environments, or a combination thereof.
[0097] Method for adjusting parameter
[0098] FIG. 4 provides a depiction of an exemplary method 400 for automatically adjusting one or more assessment parameters, as described herein, based on instructions provided by an administrator and CAD corresponding to a user. In some embodiments, an administrator provides instructions 402 for adjusting one or more parameters for an assessment. In some embodiments, as described herein, said instructions comprise adjustment criteria for each parameter based on a specified cognitive domain and may further comprise thresholds of a cognitive level for the cognitive domain. In some embodiments, the instructions are provided to an RA module configured to store said instructions for the specific assessment and process said instructions to adjust the parameters for the assessment.
[0099] In some embodiments, for each user seeking to take the assessment, the RA module retrieves 404 the corresponding cognitive ability data (CAD) for the user. In some embodiments, the system, as described herein (e.g., Assessment Module 109), will check whether the user has shared CAD, and if this check shows the CAD is available, the system will inject the CAD into the RA module, wherein the RA module will perform a computation for each parameter (for an assessment) to determine whether the CAD demonstrates eligibility for that parameter (based on the adjustment criteria provided by the administrator). For each parameter where eligibility is found, the RA module will activate 410 the parameter for the parts of the assessment for which the parameter has been made available. In some embodiments, each assessment will have a finite number of possible parameter adjustments that can be enacted. By contrast, under a traditional manual approach, for each user who presents a valid cognitive assessment (via the CAD database), the administrator would have to review each available parameter and evaluate the cognitive assessment report (corresponding to the user) to determine whether the user qualifies for a specific parameter, then move to the next available parameter, until all available parameters have been checked and adjustments applied (or not) accordingly. In some embodiments, the administrator will have a policy (e.g., adjustment criteria as described herein) for each parameter against which they review the cognitive assessment report, to determine whether the parameter applies for the user and, in the case of graduated parameters, the amount of the parameter to be applied, or, in the case of multiple options, the determined option for the parameter. As described herein, in some embodiments, the systems and methods described herein enable a much faster and more robust evaluation and application of parameter adjustments than traditional methods.
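The per-parameter evaluation described above spans binary, graduated, and options-based adjustments. A minimal sketch of one possible policy evaluation follows, assuming a hypothetical policy dictionary format (the disclosure does not fix a structure):

```python
def apply_policy(cad_percentile: float, policy: dict):
    """Evaluate one parameter policy against a user's cognitive level.

    The policy format is hypothetical; the disclosure describes binary,
    graduated, and options-based adjustments without fixing a structure.
    """
    if cad_percentile >= policy["threshold"]:
        return None  # user does not qualify for this parameter
    kind = policy["kind"]
    if kind == "binary":
        return True  # parameter is simply activated
    if kind == "graduated":
        # scale linearly from the maximum at percentile 0 down to 0 at the threshold
        return policy["maximum"] * (1 - cad_percentile / policy["threshold"])
    if kind == "options":
        # pick the option for the user's percentile band (bands sorted ascending)
        for band_upper, option in policy["options"]:
            if cad_percentile < band_upper:
                return option
    return None

# A graduated extra-time policy: up to 50% extra time below the 16th percentile.
policy = {"kind": "graduated", "threshold": 16.0, "maximum": 50.0}
print(apply_policy(8.0, policy))   # 25.0 -> 25% extra time
print(apply_policy(40.0, policy))  # None -> no adjustment
```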
[0100] In some embodiments, the CAD is obtained from a CAD database in communication with the RA module. In some embodiments, wherein the CAD database does not have a corresponding CAD for the user (or a CAD is not shared), the user is able to upload 406 CAD from a past cognitive assessment test (CAT) to the CAD database. In some embodiments, wherein the CAD database does not have a corresponding CAD for the user (either in the CAD database or from a previous cognitive assessment test), the user is able to take a cognitive assessment test (CAT) 408, wherein the results are processed and stored into the CAD database. In some embodiments, a separate server and/or computing device is used to process the results from the CAT. In some embodiments, a 3rd party and/or the user optionally provides historical information 407. In some embodiments, 3rd party data comprises data from any source, e.g., demographic data, learner outcomes data, product adoption data, health outcome data, scientific data, publicly available data, consumer behavior data, attitudinal response data, social media data, etc., or a combination thereof. In some embodiments, the user is provided (e.g., via the Administrator server 106) with the option of uploading a compatible cognitive assessment or the option of taking a cognitive assessment test prior to the assessment. In some embodiments, a user will be prompted to elect a permissions-to-share-CAD function when taking an assessment. Accordingly, in some embodiments, the method allows for a digital cognitive assessment to be placed at the start of the process, or for previously completed digital cognitive assessments to share (e.g., via an application programming interface (API)) their output values with a system described herein. In some embodiments, the cognitive assessment output is a set of values for pre-determined cognitive abilities and attributes. In some embodiments, these values become the Cognitive Ability Data (CAD).
[0101] In some embodiments, once each parameter for an assessment has been evaluated and adjusted (e.g., activated) as needed for a corresponding user, the assessment is stored in the administrator server (e.g., the Assessment Module). In some embodiments, the Assessment Module receives and stores the assessment parameter adjustment applied to a given assessment for a user. The assessment can then be administered 412 to the user via a user device, which is in communication with the administrator server. The user device will provide the assessment with the respective parameters adjusted (e.g., activated). In some embodiments, the assessment is provided via an interface on the user device (e.g., a display for a computing device). In some embodiments, the user is able to provide input to the assessment via any input means as described herein.
[0102] FIG. 5 provides a depiction of an exemplary method 500 for automatically adjusting one or more assessment parameters, wherein steps 502-512 are the same as described for steps 402-412 in FIG. 4, but method 500 further comprises a machine algorithm process. In some embodiments, the administrator server, as described herein, is configured to receive the results of the assessment by a user. In some embodiments, the administrator server receives and stores the input received from a user for an assessment in an Evaluator Module, as described herein, stored on the server. In some embodiments, once a sufficient number of responses from a plurality of users for an assessment have been stored, the Evaluator Module is configured to evaluate the input from the users 514 and review the impact of adjusting the one or more parameters. In some embodiments, if little or no impact, or an undesired impact, is observed, the Evaluator Module will automatically update the adjustment criteria 516 to correct for the undesired impact or lack of impact. In some embodiments, the Evaluator Module will provide an alert or notification to an administrator, who will then provide new instructions, thereby manually modifying the adjustment criteria. In some embodiments, little or no impact corresponds to insufficient impact by the parameter adjustment (e.g., an insufficient improvement to adequately compensate for the identified cognitive domain (e.g., deficient cognitive domain) of the user). In some embodiments, undesired impact (or negative impact) corresponds to a further worsening of performance on an aspect of the assessment relating to an identified cognitive domain. For example, if extra time is allotted for a user based on the corresponding CAD, and the results show the user was still unable to complete a section or the entire assessment on time, and/or the user completed even less than had the parameter not been adjusted, this may suggest additional time is needed, or that there is another parameter that should be adjusted. In some embodiments, if a positive impact is shown (e.g., the user is able to finish the assessment on time), no further adjustment of the adjustment criteria is needed.
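As one illustration of the Evaluator Module's impact check described above, the sketch below flags an adjustment whose users still largely fail to finish on time. The result structure and the 80% completion threshold are assumptions for the example, not part of the disclosure.

```python
def evaluate_impact(results: list, min_completion_rate: float = 0.8) -> str:
    """Hypothetical impact check for one parameter adjustment.

    'results' is a list of dicts with 'adjusted' (bool) and
    'completed_on_time' (bool) per user. Returns a suggested action.
    """
    adjusted = [r for r in results if r["adjusted"]]
    if not adjusted:
        return "no-data"
    rate = sum(r["completed_on_time"] for r in adjusted) / len(adjusted)
    if rate >= min_completion_rate:
        return "keep-criteria"       # positive impact: no change needed
    return "increase-adjustment"     # insufficient impact: update the criteria

results = [
    {"adjusted": True, "completed_on_time": True},
    {"adjusted": True, "completed_on_time": False},
    {"adjusted": True, "completed_on_time": False},
]
print(evaluate_impact(results))  # increase-adjustment
```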
[0103] FIG. 12 provides another view of an exemplary flow for the machine learning algorithmic process. In an example, a problem or a plurality of problems are defined by any type of organization (e.g., academic institution, government agency, employer, retailer, etc.) for which the organization wishes to use cognitive and/or noncognitive data to solve the one or more problems. A combined data set, for the algorithm’s use, is selected from a plurality of datasets described herein, e.g., a cognitive dataset or a 3rd party dataset. The plurality of datasets may comprise structured data, unstructured data, or semi-structured data. Structured data may include data in a relational database, such as dates, names, phone numbers, addresses, etc. Unstructured data may include text files (e.g., word processing, spreadsheets, presentations, emails, or data logs), emails, social media (e.g., data from Facebook®, Twitter®, or LinkedIn®), websites (e.g., YouTube®, Instagram®, or photo-sharing sites), mobile data (e.g., text messages or locations), communications (e.g., instant messaging, phone recordings, or collaboration software), media (e.g., MP3, digital photos, or audio and video files), or business applications (e.g., MS® Office and productivity applications). Unstructured data may be transformed into structured data. For example, the present disclosure may assess usage of a particular word and/or phrase (e.g., a personal pronoun, emotionally negative words, diagnosis terms, etc.) in the unstructured data retrieved from, e.g., online conversations. The algorithm transforms the plurality of usages into a plurality of structured datapoints, e.g., occurrences and/or prevalence of personal pronoun usage. Semi-structured data may include Extensible Markup Language (XML), open standard JavaScript® Object Notation (JSON), or not-only-SQL (NoSQL) formats.
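The pronoun example above can be illustrated with a short sketch that turns unstructured text into occurrence and prevalence datapoints; the pronoun list and tokenization here are simplified assumptions:

```python
import re

PERSONAL_PRONOUNS = {"i", "me", "my", "mine", "we", "us", "our"}

def pronoun_datapoints(text: str) -> dict:
    """Transform unstructured text into structured occurrence/prevalence datapoints."""
    words = re.findall(r"[a-z']+", text.lower())
    occurrences = sum(w in PERSONAL_PRONOUNS for w in words)
    return {
        "word_count": len(words),
        "pronoun_occurrences": occurrences,
        "pronoun_prevalence": occurrences / len(words) if words else 0.0,
    }

print(pronoun_datapoints("I think my results show we can improve."))
# {'word_count': 8, 'pronoun_occurrences': 3, 'pronoun_prevalence': 0.375}
```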
[0104] FIG. 12 further provides for curation of the combined data set, by the algorithm, comprising auditing, cleaning, and/or merging the combined datasets and/or creating searchable indexes wherein the combined dataset is structured for the algorithm’s use. Data comprising 3rd party data may be audited after retrieval, for example, from a plurality of organizations different from the organization desiring to solve the one or more problems. Auditing may comprise standardizing using standard conventions such as government standards, trade standards, and/or certifying organization standards, e.g., standard ethnicity lists or codes, standard nationality lists or codes, or census conventions. Data comprising 3rd party data may be cleaned after retrieval, for example, from a plurality of organizations different from the organization desiring to solve the one or more problems. Cleaning may comprise removal of outliers, determination of missing data, and/or removal of anomalous data, e.g., a user’s age listed as 205 years. Cleaning may further comprise removal of datasets wherein the user improperly completed the cognitive assessment, e.g., less than full effort by the user, intentional malingering by the user, and/or feigning cognitive difficulty. Improper completion may also be determined during the assessment by, e.g., below-chance performance on a forced choice test. Improper completion may also be determined using a standalone assessment, e.g., an assessment presented to the user as the cognitive assessment but which instead assesses performance validity. Data comprising 3rd party data may be merged after retrieval, for example, from a plurality of organizations different from the organization desiring to solve the one or more problems. Merging may comprise associating cognition data from each individual user with their data from the 3rd party. Merging may further comprise associating cognitive profiles with specific learning outcomes from the 3rd party. Associating cognitive profiles with specific learning outcomes may comprise a unique identifier that is common between both the user’s cognition dataset and the 3rd party dataset. For example, a unique identifier may be the Unique Learner Number (ULN) used in the education sector and government.
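A minimal pandas sketch of the cleaning and merging steps described above, assuming hypothetical column names and validity bounds, with the ULN as the shared identifier:

```python
import pandas as pd

cognition = pd.DataFrame({
    "uln": [1001, 1002, 1003],
    "age": [24, 205, 31],            # 205 is anomalous and will be removed
    "reading_speed_pct": [12, 48, 75],
})
outcomes = pd.DataFrame({
    "uln": [1001, 1003],
    "completed_course": [True, False],
})

# Clean: drop rows with anomalous values (validity bounds assumed for illustration)
cleaned = cognition[(cognition["age"] >= 5) & (cognition["age"] <= 120)]

# Merge: associate each user's cognition data with 3rd-party learning outcomes
# via the shared unique identifier (here the ULN)
combined = cleaned.merge(outcomes, on="uln", how="inner")
print(combined)
```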
[0105] The algorithm may use the combined dataset to create one or more predictive models. The predictive models may be validated, by the algorithm, against a training dataset. After validation of the one or more models, the predictive models may be deployed to the one or more organizations for solving the one or more problems. The predictive models may be updated before, during, or after deployment through a feedback loop with addition of new cognitive data or new 3rd party data. From the predictive models, by the algorithm, one or more outcomes may be determined for one or more users wherein the one or more outcomes may solve the one or more problems of the one or more organizations.
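A compact sketch of the model-building and feedback loop described above, using synthetic data and logistic regression (one of the approaches named later in Example 5) as an illustrative model family; nothing here reflects the actual model or data of the disclosure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic combined dataset: three cognitive scores plus one 3rd-party feature
X = rng.normal(100, 15, size=(200, 4))
# Synthetic binary outcome loosely tied to the first cognitive domain
y = (X[:, 0] + rng.normal(0, 10, 200) < 95).astype(int)

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"validation accuracy: {model.score(X_valid, y_valid):.2f}")

# Feedback loop: refit when new cognitive or 3rd-party data arrive
X_new = rng.normal(100, 15, size=(50, 4))
y_new = (X_new[:, 0] + rng.normal(0, 10, 50) < 95).astype(int)
model.fit(np.vstack([X_train, X_new]), np.concatenate([y_train, y_new]))
```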
Computer Systems
[0106] The present disclosure provides computer control systems that are programmed to implement methods of the disclosure. FIG. 13 shows a computer system 1301 that is programmed or otherwise configured for automatically adjusting a parameter of an assessment to accommodate individual user cognitive capabilities. The computer system 1301 can regulate various aspects of the present disclosure, such as, for example, receiving or storing cognitive ability data, applying adjustment criteria to one or more parameters of an assessment, or administering an adjusted assessment to a user. The computer system 1301 can be an electronic device of a user or a computer system that is remotely located with respect to the electronic device. The electronic device can be a mobile electronic device.
[0107] The computer system 1301 includes a central processing unit (CPU, also “processor” and “computer processor” herein) 1305, which can be a single core or multi core processor, or a plurality of processors for parallel processing. The computer system 1301 also includes memory or memory location 1310 (e.g., random-access memory, read-only memory, flash memory), electronic storage unit 1315 (e.g., hard disk), communication interface 1320 (e.g., network adapter) for communicating with one or more other systems, and peripheral devices 1325, such as cache, other memory, data storage or electronic display adapters. The memory 1310, storage unit 1315, interface 1320 and peripheral devices 1325 are in communication with the CPU 1305 through a communication bus (solid lines), such as a motherboard. The storage unit 1315 can be a data storage unit (or data repository) for storing data. The computer system 1301 can be operatively coupled to a computer network (“network”) 1330 with the aid of the communication interface 1320. The network 1330 can be the Internet, an internet or extranet, or an intranet and extranet that is in communication with the Internet. The network 1330 in some cases is a telecommunication or data network. The network 1330 can include one or more computer servers, which can enable distributed computing, such as cloud computing. The network 1330, in some cases with the aid of the computer system 1301, can implement a peer-to-peer network, which may enable devices coupled to the computer system 1301 to behave as a client or a server.
[0108] The CPU 1305 can execute a sequence of machine-readable instructions, which can be embodied in a program or software. The instructions may be stored in a memory location, such as the memory 1310. The instructions can be directed to the CPU 1305, which can subsequently program or otherwise configure the CPU 1305 to implement methods of the present disclosure. Examples of operations performed by the CPU 1305 can include fetch, decode, execute, and writeback.
[0109] The CPU 1305 can be part of a circuit, such as an integrated circuit. One or more other components of the system 1301 can be included in the circuit. In some cases, the circuit is an application specific integrated circuit (ASIC).
[0110] The storage unit 1315 can store files, such as drivers, libraries and saved programs. The storage unit 1315 can store user data, e.g., user preferences and user programs. The computer system 1301 in some cases can include one or more additional data storage units that are external to the computer system 1301, such as located on a remote server that is in communication with the computer system 1301 through an intranet or the Internet.
[0111] The computer system 1301 can communicate with one or more remote computer systems through the network 1330. For instance, the computer system 1301 can communicate with a remote computer system of a user. Examples of remote computer systems include personal computers (e.g., portable PCs), slate or tablet PCs (e.g., Apple® iPad, Samsung® Galaxy Tab), telephones, smartphones (e.g., Apple® iPhone, Android-enabled device, Blackberry®), or personal digital assistants. The user can access the computer system 1301 via the network 1330.
[0112] Methods as described herein can be implemented by way of machine (e.g., computer processor) executable code stored on an electronic storage location of the computer system 1301, such as, for example, on the memory 1310 or electronic storage unit 1315. The machine executable or machine-readable code can be provided in the form of software. During use, the code can be executed by the processor 1305. In some cases, the code can be retrieved from the storage unit 1315 and stored on the memory 1310 for ready access by the processor 1305. In some situations, the electronic storage unit 1315 can be precluded, and machine-executable instructions are stored on memory 1310.
[0113] The code can be pre-compiled and configured for use with a machine having a processor adapted to execute the code, or can be compiled during runtime. The code can be supplied in a programming language that can be selected to enable the code to execute in a pre-compiled or as-compiled fashion.
[0114] Aspects of the systems and methods provided herein, such as the computer system 1301, can be embodied in programming. Various aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of machine (or processor) executable code or associated data that is carried on or embodied in a type of machine readable medium. Machine-executable code can be stored on an electronic storage unit, such as memory (e.g., read-only memory, random-access memory, flash memory) or a hard disk. “Storage” type media can include any or all of the tangible memory of the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer into the computer platform of an application server. Thus, another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links or the like, also may be considered as media bearing the software. As used herein, unless restricted to non-transitory, tangible “storage” media, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution.
[0115] Hence, a machine readable medium, such as computer-executable code, may take many forms, including but not limited to, a tangible storage medium, a carrier wave medium or physical transmission medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) or the like, such as may be used to implement the databases, etc. shown in the drawings. Volatile storage media include dynamic memory, such as main memory of such a computer platform. Tangible transmission media include coaxial cables, copper wire, and fiber optics, including the wires that comprise a bus within a computer system. Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a ROM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code or data. Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.
[0116] The computer system 1301 can include or be in communication with an electronic display 1335 that comprises a user interface (UI) 1340 for providing, for example, selection of assessment parameters for adjustment or interaction with the assessments and reports described herein. Examples of UIs include, without limitation, a graphical user interface (GUI) and web-based user interface.
[0117] Methods and systems of the present disclosure can be implemented by way of one or more algorithms. An algorithm can be implemented by way of software upon execution by the central processing unit 1305. The algorithm can, for example, calculate statistical measures from cognitive ability data, determine parameter adjustments, or generate the predictive models described herein.
EXAMPLES
Example 1 Parameter Adjustment
[0118] An exemplary adjustment criteria provided by an administrator may comprise providing extra time for reading tasks on an assessment. The administrator may specify that an extra time parameter be activated for a user who is within the 15th percentile for reading speed (e.g., a deficient cognitive domain), as determined via the corresponding CAD of the user. As described herein, in some embodiments, CAD is provided as a statistical measure and in relation to a population (e.g., percentile). The administrator may set a maximum amount of extra allotted time (e.g., 50%). The adjustment criteria may further specify a graduated parameter adjustment from about 0% to 50% of extra allotted time (or any value from 0% to the maximum extra allotted time). Here, the adjustment criteria is specified according to a statistical measure, wherein the parameter will be graduated between 0% and 50% of extra allotted time depending on where the user falls according to the statistical measure. Thus, the parameter for extra time will be activated if the cognitive level of the reading speed deficient cognitive domain for the user is within the 15th percentile, and the amount of extra time provided will be determined based on the exact percentile. In this example, the adjustment criteria may further model the “tail” of the population (e.g., based on all the CAD in the CAD database) against a normal standard deviation. As such, the bottom 15% of the users (e.g., for a plurality of users taking an assessment) can be identified and have their parameters adjusted accordingly. Another option for determining which users are eligible to receive a parameter adjustment is through scores, such as a Std. Score. Accordingly, a user will be eligible to receive a parameter adjustment (e.g., one that relates to reading speed) if their corresponding Std. Score is below a certain threshold.
[0119] FIG. 6 provides an exemplary linear regression model for determining the extra time provided (y-axis) based on the cognitive level for reading speed, wherein the threshold is depicted as less than the 16th percentile (per the linear regression equation provided below). An exemplary prompt by a system described herein (e.g., Administrator Server described herein) may be: “what is the maximum additional time you would like to allow?”, wherein an administrator may input “50%” (as an example). The system may then prompt: “what threshold would you like to use to trigger additional time?”, wherein the administrator may input: “percentile <16” or “standard score <85”. For a linear allocation of extra time, a system described herein (e.g., Administrator Server described herein) would then generate the regression equation:
[0120] y = a + bx (e.g., y = 50 + (-3.125x))
[0121] Where: y = the additional time allotted; a (50) = the intercept of the y axis (or maximum additional time allowed); b (-3.125) = the relationship between x and y (automatically calculated by the system described herein); x = the individual’s percentile rank.
[0122] FIG. 6 provides a depiction of the above linear regression equation, wherein x-axis = percentile, y-axis = additional time in %. The above equation would change depending on the administrator inputs above (e.g., 50% max allotted time & 16th percentile threshold).
[0123] As depicted in FIG. 6, a CAD corresponding to a reading speed cognitive domain of less than the 16th percentile rank, which corresponds to a deficient cognitive domain for reading speed, will result in extra time being provided.
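The allocation defined by the regression equation above can be written directly as a small function (the function and variable names are illustrative):

```python
def extra_time(percentile: float, max_extra: float = 50.0,
               threshold: float = 16.0) -> float:
    """Linear allocation of extra time: y = a + bx, with a = max_extra
    and b = -max_extra / threshold (here -3.125, as in the example)."""
    if percentile >= threshold:
        return 0.0
    return max_extra + (-max_extra / threshold) * percentile

print(extra_time(0))     # 50.0   -> maximum extra time
print(extra_time(8))     # 25.0
print(extra_time(15.9))  # 0.3125 -> just inside the threshold
print(extra_time(40))    # 0.0    -> above the threshold, no adjustment
```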
[0124] As described herein, linear regression is one example of a model, but there are several other relationships between a parameter and cognitive level (e.g., percentile rank, relational attribute) of cognitive domain (based on the CAD) that can be used, several of which are non-linear (e.g., power, log, inverse standard deviation, probability distribution, etc.). A linear example is used for simplicity of explanation; real world examples may require non-linear allocations of additional time.
Example 2 Parameter Adjustment
[0125] In another example of determining a parameter adjustment, a parameter may depend on multiple cognitive domains (e.g., multiple cognitive capabilities) or a single cognitive domain (as described herein, which may be, for example, statistically based or a relational attribute). Accordingly, the relationship between the cognitive domains may be taken into account when determining a parameter adjustment. Individual regression equations could be used, but a multiple regression equation, as depicted in FIG. 7, captures an exemplary relationship between the parameter and the cognitive domains. For example, FIG. 7 depicts a multiple regression equation and curve for the parameter relating to subtitles and the following three cognitive domains: working memory test, verbal comprehension test, and verbal memory test. Here, the parameter adjustment is simply binary, in that the subtitles function is either activated or not. Accordingly, the mathematical model is not linear, but follows a step-wise curve wherein users at or above a maximum threshold (herein a standard score of 85) will not be provided the subtitle function for the assessment. Herein, the mathematical model follows the equation:
[0126] y = a + b1x1 + b2x2 + b3x3
[0127] wherein x1 corresponds to a standard score for the working memory test, x2 corresponds to a standard score for the verbal comprehension test, and x3 corresponds to a standard score for the verbal memory test. Accordingly, x1 through x3 could be calculated using individual regression equations; however, including them in a multiple regression equation means their relationship to each other is taken into account. In some cases, administrator input might be a standard score, but this may be converted to percentiles by the system for ease of regression modelling.
[0128] As per FIG. 7, for example, the threshold for activation requires a Std. Score of at most 84 in order for subtitles to be activated. For example, a first user, Tom, may not qualify for subtitles, while Harry would qualify, based on the cognitive domain data below:

Tom: x1 = 120, x2 = 125, x3 = 80 ∴ does not need RA

Harry: x1 = 90, x2 = 87, x3 = 85 ∴ needs RA
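The following sketch applies the multiple regression gate described above. FIG. 7's actual coefficients are not given in the text, so the intercept, weights, and activation threshold below are hypothetical placeholders chosen only so that the worked data reproduce the stated outcomes for Tom and Harry:

```python
def subtitles_activated(x1: float, x2: float, x3: float,
                        a: float = 200.0,
                        b: tuple = (-0.5, -0.5, -0.5),
                        activation_threshold: float = 50.0) -> bool:
    """Binary (step-wise) activation from a multiple regression score
    y = a + b1*x1 + b2*x2 + b3*x3. Coefficients are hypothetical; the
    system would derive them from the administrator's inputs."""
    y = a + b[0] * x1 + b[1] * x2 + b[2] * x3
    return y >= activation_threshold

print(subtitles_activated(120, 125, 80))  # False -> Tom does not need RA
print(subtitles_activated(90, 87, 85))    # True  -> Harry needs RA
```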
Example 3 Parameter Adjustment
[0129] FIG. 10 provides an exemplary linear regression model for determining the extra time provided (y-axis) based on a plurality of cognitive domains, wherein a threshold is specified to trigger the allotted time. An exemplary prompt by a system described herein (e.g., Administrator Server described herein) may be: “which cognitive domains may have an impact on this parameter adjustment?”, wherein the Administrator may input “Cognitive Domain 1 (x1), Cognitive Domain 2 (x2), and Cognitive Domain 3 (x3)” (as an example). The system may then prompt: “what threshold would you like to be taken into account for the combination of the three domains?”, wherein the administrator may input: “below average, or <50th percentile, or Std. Score <100”. The system may then prompt: “what is the maximum additional time you would like to allocate?”, wherein the administrator may input: “100%”. In some embodiments, the threshold for each domain is prompted.
[0130] For a linear allocation of extra time, the system would then generate the regression equation:
[0131] y = a + bx (e.g., y = 100 + (-2x))
[0132] Where: y = the additional time allotted; x = the relevant cognitive threshold (or cognitive level) as percentile; a (100) = the intercept of the y axis (or maximum additional time allowed); b (-2) = the relationship between x and y (automatically calculated by a system described herein).
[0133] FIG. 10 provides a depiction of the above linear regression equation, wherein x-axis = percentile and y-axis = additional time in %. The above equation would change depending on the Administrator inputs above (100% max allotted time & 50th percentile threshold).
[0134] As described herein, the multiple cognitive domains may all be weighted equally or differently when determining the relevant cognitive threshold. For example, the system may prompt: “would you like all domains weighted equally?”. In a first scenario, an administrator may input: “yes”, wherein a system (as described herein) will then partial out b (-2) equally across the domains to generate the following regression equation:
[0135] y = 100 + (-0.66*x1) + (-0.66*x2) + (-0.66*x3)
[0136] wherein each x corresponds to the respective cognitive domain standard score.
[0137] In a second scenario, where one domain has more of an impact on determining the parameter adjustment, when prompted by a system described herein: “would you like all domains weighted equally?”, an administrator may input: “no”, and provide appropriate weights, for example: “x1 = 50%, x2 = 25%, x3 = 25%”. A system (e.g., Administrator server as described herein) will then generate the following regression equation:
[0138] y = 100 + (-1*x1) + (-0.5*x2) + (-0.5*x3)
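The weighting step above can be sketched as a function that partials the overall coefficient b out across the domains in proportion to the administrator's weights (the equal split of b = -2 over three domains is -2/3 ≈ -0.66 per domain, as in the first scenario); function names are illustrative:

```python
def partition_b(b: float, weights: list) -> list:
    """Split the overall coefficient b across domains in proportion to the
    administrator-supplied weights (normalized so the coefficients sum to b)."""
    total = sum(weights)
    return [b * w / total for w in weights]

print(partition_b(-2.0, [1, 1, 1]))           # approx. [-0.67, -0.67, -0.67]
print(partition_b(-2.0, [0.50, 0.25, 0.25]))  # [-1.0, -0.5, -0.5]

def allotted_time(a: float, coeffs: list, scores: list) -> float:
    """Evaluate y = a + sum(b_i * x_i) for the weighted multi-domain model."""
    return a + sum(c * x for c, x in zip(coeffs, scores))

# Hypothetical scores for the three domains, using the 50/25/25 weighting
print(allotted_time(100.0, partition_b(-2.0, [0.5, 0.25, 0.25]), [30, 50, 50]))
# 20.0 -> 20% additional time
```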
Example 4 Response Time
[0139] FIG. 8 provides a relationship between Response Time (in seconds) for a given task on an assessment and the Frequency of an individual user’s response time distribution. Response time variability is commonly used with users having attentional issues, such as Attention Deficit Hyperactivity Disorder (ADHD). Here, frequency stands for the number of questions that the user responded to in that time. Variability in response times can be worked out mathematically (e.g., standard deviation or variance) and can also be depicted in a graph such as in FIG. 8. Higher variability in response times can indicate attentional issues or lower overall cognitive performance. Response times can be used as indications of many things, including cognitive performance (e.g., response time for correctly answered items), impulse control (e.g., response times to incorrectly answered but intuitive items, such as in a Stroop task), and overall variability, which can relate to attentional issues such as ADHD or lower overall cognitive performance.
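A minimal sketch of the variability computation described above, using the Python standard library; the sample data are invented for illustration:

```python
import statistics

def response_time_profile(times: list) -> dict:
    """Summarize an individual's response times (in seconds) for a task."""
    return {
        "mean": statistics.mean(times),
        "std_dev": statistics.stdev(times),    # higher -> more variability
        "variance": statistics.variance(times),
    }

steady = [2.1, 2.3, 2.0, 2.2, 2.1]
variable = [1.0, 4.5, 1.8, 6.2, 2.0]  # high variability may suggest attentional issues
print(response_time_profile(steady))
print(response_time_profile(variable))
```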
Example 5 Machine Learning
[0140] FIG. 12 provides another view of an exemplary flow for the machine learning algorithmic process described herein. In an example, a problem or a plurality of problems are defined by any type of organization (e.g., academic institution, government agency, employer, retailer, etc.) for which the organization wishes to use cognitive or noncognitive data to solve the one or more problems. A combined data set, for the algorithm’s predictive use, is selected from a plurality of datasets described herein, e.g., a cognitive dataset or a 3rd party dataset. A 3rd party dataset may comprise data from any source, e.g., demographic data, learner outcomes data, product adoption data, health outcome data, scientific data, publicly available data, consumer behavior data, attitudinal response data, social media data, etc., or a combination thereof. The plurality of datasets may comprise structured data, unstructured data, or semi-structured data. Structured data may include data in a relational database, such as dates, names, phone numbers, or addresses. Unstructured data may include text files (e.g., word processing, spreadsheets, presentations, emails, or data logs), emails, social media (e.g., data from Facebook®, Twitter®, or LinkedIn®), websites (e.g., YouTube®, Instagram®, or photo-sharing sites), mobile data (e.g., text messages or locations), communications (e.g., instant messaging, phone recordings, or collaboration software), media (e.g., MP3, digital photos, or audio and video files), or business applications (e.g., MS® Office and productivity applications). Semi-structured data may include Extensible Markup Language (XML), open standard JavaScript® Object Notation (JSON), or not-only-SQL (NoSQL) formats.
[0141] FIG. 12 further provides for curation of the combined data set, by the algorithm, comprising auditing, merging, or cleaning the combined datasets or also to create searchable indexes wherein the combined dataset is structured for the algorithm’s predictive use. The algorithm may use the combined dataset to create one or more predictive models using one or more machine learning approaches e.g., logistic regression analysis, discriminant function analysis, decision tree analysis, neural network analysis, etc. The predictive models may be validated, by the algorithm, against one or more independent datasets. After validation of the one or more predictive models, the predictive models may be deployed to the one or more organizations for solving the one or more problems. The predictive models may be updated before, during, or after deployment through a feedback loop with addition of new cognitive data or new 3rd party data. From the predictive models, by the algorithm, one or more outcomes may be determined for one or more users wherein the one or more outcomes may solve the one or more problems of the one or more organizations.
Example 6 Prediction Performance
[0142] Table 1 and Table 2 depict improved predictive performance of the present disclosure. The machine learning algorithm classified users as likely or not likely to withdraw from a program, e.g., an academic program. Table 1 depicts classification without using cognitive data. Here, the algorithm correctly classified only 68.9% of users. Table 2 depicts classification using cognitive data described herein. Here, the algorithm correctly classified 74.4% of users. This corresponds to a relative improvement of about 8%. Such information may be used, for example, to proactively identify users at risk of dropping out of a program. Further, additional support may be provided for the user to reduce the likelihood that the user drops out of the program. Additionally, past data may be used to predict future outcomes for the user and/or other users.
Table 1 - Confusion matrix without cognition as predicting variables
Table 2 - Confusion matrix with cognition as predicting variables
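The reported improvement can be checked directly from the two reported accuracies:

```python
baseline = 0.689   # classification accuracy without cognitive data (Table 1)
with_cad = 0.744   # classification accuracy with cognitive data (Table 2)

relative_improvement = (with_cad - baseline) / baseline
print(f"{relative_improvement:.1%}")  # 8.0% -> the ~8% improvement reported
```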
Example 7
[0143] Another application of the present disclosure provides solutions for any type of organization (e.g., academic institution, government agency, employer, retailer, etc.) desiring to use cognitive data or 3rd party data (e.g., noncognitive data), described herein, to improve marketing strategies. A 3rd party dataset may comprise data from any source, e.g., demographic data, learner outcomes data, product adoption data, health outcome data, scientific data, publicly available data, consumer behavior data, attitudinal response data, social media data, etc., or a combination thereof. Improved marketing strategies may comprise more effective advertisements for products or services from the one or more organizations. For example, an organization may use cognitive data or 3rd party data to improve its advertising. The organization may present its advertising to appeal more to a customer’s quantitative cognition using data, e.g., numbers and statistics about the product or service. The organization may present its advertising to appeal more to a customer’s emotional cognition using personal stories, e.g., personal testimonies about the product or service. The organization may present its advertising to appeal more to a customer’s visual cognition using imagery, e.g., images or videos of the product or service.
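One way such cognition-matched advertising could be selected is sketched below; the domain names, scores, and variants are hypothetical and only illustrate the mapping described above:

```python
def pick_ad_variant(cad: dict) -> str:
    """Select the advertisement framing matching the customer's strongest
    cognitive style (domain names and variants are hypothetical)."""
    variants = {
        "quantitative": "stats-led ad: numbers and statistics about the product",
        "emotional": "story-led ad: a personal testimony about the product",
        "visual": "image-led ad: photos or video of the product",
    }
    dominant = max(cad, key=cad.get)  # domain with the highest score
    return variants[dominant]

print(pick_ad_variant({"quantitative": 62, "emotional": 85, "visual": 70}))
# story-led ad: a personal testimony about the product
```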
[0144] The foregoing has been a description of certain non-limiting embodiments of the subject matter described within. Accordingly, it is to be understood that the embodiments described in this specification are merely illustrative of the subject matter reported within. Reference to details of the illustrated embodiments is not intended to limit the scope of the claims, which themselves recite those features regarded as essential.
[0145] It is contemplated that systems and methods of the claimed subject matter encompass variations and adaptations developed using information from the embodiments described within. Adaptation, modification, or both, of the systems and methods described within may be performed by those of ordinary skill in the relevant art.
[0146] Throughout the description, where systems are described as having, including, or comprising specific components, or where methods are described as having, including, or comprising specific steps, it is contemplated that, additionally, there are systems encompassed by the present subject matter that consist essentially of, or consist of, the recited components, and that there are methods encompassed by the present subject matter that consist essentially of, or consist of, the recited processing steps.
[0147] It should be understood that the order of steps or order for performing certain actions is immaterial so long as any embodiment of the subject matter described within remains operable. Moreover, two or more steps or actions may be conducted simultaneously.
[0148] While preferred embodiments of the present disclosure have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the present disclosure. It should be understood that various alternatives to the embodiments of the disclosure described herein may be employed in practicing the disclosure. It is intended that the following claims define the scope of the disclosure and that methods and structures within the scope of these claims and their equivalents be covered thereby.

Claims

WHAT IS CLAIMED IS:
1. A system for automatically adjusting a parameter of an assessment to accommodate individual user cognitive capabilities, comprising:
(a) a cognitive ability data (“CAD”) database comprising a memory for storing results from a cognitive assessment test for the user, wherein said results define one or more cognitive domains of the user;
(b) an administrator server in communication with the CAD database, the administrator server comprising 1) an administrator computing device, 2) an assessment module wherein an assessment is stored therein, the assessment comprising one or more questions or processes for the user, and one or more parameters; and 3) a reasonable adjustment (RA) module, wherein the administrator computing device is configured to provide a first set of software instructions to the RA module, wherein the administrator server comprises one or more processors to execute the first set of instructions to:
(i) provide an adjustment criteria to a parameter of the one or more parameters, wherein the adjustment criteria corresponds to a cognitive capability;
(ii) retrieve the one or more cognitive domains for the user via the CAD database; and
(iii) adjust the parameter according to the adjustment criteria and the cognitive domain for the user, wherein the cognitive domain corresponds to the cognitive capability; and
(c) a user computing device in communication with the administrator server, the user computing device configured to provide the assessment to the user with the corresponding adjusted parameter.
2. The system of claim 1, wherein the one or more parameters comprise a presentation parameter, a function activation parameter, a time-based parameter, a process/journey parameter, a marketing parameter, or a combination thereof.
3. The system of claim 2, wherein the presentation parameter comprises font type; font size; font color; spacing between letters, words, lines, and/or paragraphs; background color of an assessment; providing a digital screen overlay of any specific color; displaying the assessment with an alternative template or Cascading Style Sheet (CSS) or other form of screen structure; limiting a maximum amount of text displayed to the user at any given time before moving on to a next segment of text; a change in tone of voice of presented text; or a combination thereof.
4. The system of claim 2, wherein the function activation parameter comprises an ability for user to activate speech-to-text software for an assessment; ability for user to access dictionary or thesaurus to be available for use; activation of subtitles; activation of audio descriptions; phonetic spell-correction software; a sign-language avatar if video is used; an image based sign language substitute; or a combination thereof.
5. The system of claim 2, wherein the time-based parameters comprise enforcing a break during the assessment; time allowed for each question, section of questions, section of the assessment, and/or entire assessment; enforcing a break during the assessment at a predetermined time; enforcing a break for a finite period; forcing slower progression by the user through a task; or a combination thereof.
6. The system of claim 2, wherein the marketing parameters comprise transforming the question to rely more on the user’s cognitive capabilities than the question before transformation, wherein the transforming the question comprises relying on the user’s quantitative cognition, emotional cognition, visual cognition, or a combination thereof.
7. The system of claim 1, wherein the adjustment criteria defines a maximum and/or minimum threshold value for the cognitive domain of the one or more cognitive domains.
8. The system of claim 1, wherein the adjustment criteria is based on 1) statistical properties relating to one or more cognitive domains, 2) relational attributes relating to two or more cognitive domains, or 3) both.
9. The system of claim 8, wherein the relational attributes relating to two or more cognitive domains comprise a relationship between the two or more cognitive domains.
10. The system of claim 1, wherein the adjustment of the parameter comprises a binary adjustment, wherein the parameter is either activated or not activated as presented to the user with the assessment.
11. The system of claim 10, wherein the adjustment of the parameter further comprises 1) a graduated adjustment, 2) an options adjustment, or 3) both.
12. The system of claim 11, wherein the options adjustment comprises two or more options for adjusting the parameter.
13. The system of claim 11, wherein the graduated adjustment comprises any number of ranges a parameter can be adjusted based on the corresponding cognitive domain for the user.
14. The system of claim 1, further comprising an Evaluator Module configured to:
(a) receive results of the assessment from the user;
(b) evaluate the results of the assessment from the user and a plurality of other users;
(c) determine the impact of the parameter adjustment based on the results of the assessment from the user and the plurality of other users; and
(d) modify the adjustment criteria if said impact corresponds to a negative impact or insufficient impact based on the results of the assessment from the user and the plurality of other users as relating to the parameter adjustment.
15. The system of claim 14, wherein the adjustment criteria is modified automatically or modified manually by an administrator.
16. A computer-implemented method for automatically adjusting a parameter for an assessment to accommodate a user’s cognitive capabilities, comprising:
(a) providing an adjustment criteria for the parameter of the assessment stored on an administrator server, wherein the adjustment criteria corresponds to a cognitive capability;
(b) retrieving a cognitive domain of the user via a cognitive assessment data (CAD) database, wherein the CAD database is in communication with the administrator server;
(c) adjusting the parameter based on the adjustment criteria and cognitive domain, wherein the cognitive domain corresponds to the cognitive capability; and
(d) administering the assessment to the user via a user computing device in communication with the administrator server, wherein the assessment is administered with the adjusted parameter.
17. The method of claim 16, further comprising:
(a) analyzing the results from the assessment from the user and a plurality of other users;
(b) determining an impact of the parameter adjustment; and
(c) modifying the adjustment criteria based on the impact.
18. The method of claim 16, further comprising:
(a) providing a second cognitive domain of the user; and
(b) updating the parameter for the assessment based on the second cognitive domain.
19. The method of claim 17, wherein said modifying the adjustment criteria is based on the impact 1) having a negative impact on the results of the assessment as compared to without adjusting the parameter or 2) having no impact or insufficient impact on the results of the assessment as compared to without adjusting the parameter.
20. The method of claim 16, further comprising receiving results of a cognitive assessment test by the CAD database, wherein said results define one or more cognitive domains for the user.
21. The method of claim 16, wherein the one or more parameters comprise a presentation parameter, a function activation parameter, a time-based parameter, a process/journey parameter, a marketing parameter, or a combination thereof.
22. The method of claim 16, wherein the adjustment criteria defines a maximum and/or minimum threshold value for the cognitive domain of the one or more cognitive domains.
23. The method of claim 16, wherein the adjustment criteria is based on 1) statistical properties relating to one or more cognitive domains, 2) relational attributes relating to two or more cognitive domains, or 3) both.
24. The method of claim 23, wherein the relational attributes relating to two or more cognitive domains comprise a relationship between the two or more cognitive domains.
25. The method of claim 16, wherein the adjustment of the parameter comprises a binary adjustment, wherein the parameter is either activated or not activated as presented to the user with the assessment.
26. The method of claim 25, wherein the adjustment of the parameter further comprises 1) a graduated adjustment, 2) an options adjustment, or 3) both.
27. The method of claim 26, wherein the options adjustment comprises two or more options for adjusting the parameter.
28. The method of claim 26, wherein the graduated adjustment comprises any number of ranges a parameter can be adjusted based on the corresponding cognitive domain for the user.
29. The method of claim 16, further comprising evaluating each parameter of the assessment to identify one or more parameters associated with an adjustment criteria.
30. The method of claim 29, further comprising adjusting a plurality of parameters, wherein each adjustment is based on a corresponding adjustment criteria and cognitive domain, wherein each cognitive domain corresponds to a cognitive capability associated with the corresponding adjustment criteria.
31. A computer-implemented method for predicting a plurality of outcomes for an assessment taken by a user comprising:
(a) providing a cognitive dataset associated with the user;
(b) providing a 3rd party dataset associated with the user;
(c) transforming the cognitive dataset and the 3rd party dataset into a combined dataset;
(d) generating a predictive model from the combined dataset; and
(e) using the predictive model to predict the plurality of outcomes for the assessment.
32. The method of claim 31, wherein the transforming in (c) comprises auditing, merging, and/or cleaning the combined dataset.
33. The method of claim 32, wherein the combined dataset comprises a plurality of search indexes.
34. The method of claim 32, wherein the 3rd party dataset comprises demographic data, learner outcomes data, product adoption data, health outcome data, scientific data, publicly available data, consumer behavior data, attitudinal response data, social media data, or a combination thereof.
35. The method of claim 31, wherein the generating in (d) further comprises updating the predictive model.
36. The method of claim 35, wherein the predictive model is updated using a plurality of new cognitive data associated with user, a plurality of new 3rd party data associated with the user, or a combination thereof.
37. The method of claim 31, wherein the plurality of outcomes in (e) comprises a plurality of adjusted parameters for the assessment.
38. The method of claim 37, wherein the plurality of adjusted parameters improves the outcome of the assessment compared to an assessment with no adjusted parameters.
39. The method of claim 37, wherein the predictive model comprises logistic regression analysis, discriminant function analysis, decision tree analysis, neural network analysis, or a combination thereof.
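
Claims 31-39 outline a dataset-to-model pipeline: combine a cognitive dataset with a 3rd party dataset (steps (a)-(c)), fit a predictive model (step (d)), and use it to predict assessment outcomes or adjusted parameters (step (e), claims 37-38). The Python sketch below traces those steps end to end. It is not the claimed implementation: the use of pandas and scikit-learn, the "user_id" join key, and the "passed_assessment" target column are assumptions for illustration, and claim 39 merely lists logistic regression among the eligible model families.

```python
# Illustrative sketch only -- pandas/scikit-learn, the join key, and all column
# names are assumptions; the claims do not prescribe tooling or schema.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def build_combined_dataset(cognitive_df: pd.DataFrame,
                           third_party_df: pd.DataFrame) -> pd.DataFrame:
    """Step (c): merge the two datasets on a shared user id, then audit and
    clean (claim 32) by dropping duplicate and incomplete rows."""
    combined = cognitive_df.merge(third_party_df, on="user_id", how="inner")
    return combined.drop_duplicates(subset="user_id").dropna()


def train_outcome_model(combined: pd.DataFrame,
                        target: str = "passed_assessment") -> LogisticRegression:
    """Step (d): fit a predictive model. Claim 39 lists several model families;
    logistic regression is used here purely for brevity."""
    X = combined.drop(columns=["user_id", target])
    y = combined[target]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
    return model


def predict_outcomes(model: LogisticRegression,
                     combined: pd.DataFrame,
                     target: str = "passed_assessment") -> pd.Series:
    """Step (e): score each user; under claim 37 the outputs would instead be
    a set of adjusted assessment parameters rather than a raw probability."""
    X = combined.drop(columns=["user_id", target])
    return pd.Series(model.predict_proba(X)[:, 1], index=combined["user_id"])
```

The model updating of claims 35-36 would, under this reading, amount to re-running train_outcome_model on a rebuilt combined dataset as new cognitive or 3rd party data for the user arrives.
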
PCT/GB2022/050904 2021-04-13 2022-04-12 System and methods for automatically applying reasonable adjustments WO2022219313A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163174428P 2021-04-13 2021-04-13
US63/174,428 2021-04-13

Publications (1)

Publication Number Publication Date
WO2022219313A1 (en) 2022-10-20

Family

ID=81384999

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2022/050904 WO2022219313A1 (en) 2021-04-13 2022-04-12 System and methods for automatically applying reasonable adjustments

Country Status (1)

Country Link
WO (1) WO2022219313A1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080102435A1 (en) * 2006-09-11 2008-05-01 Rogers Timothy A Using testing metadata for test question timing and selection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 22718269
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 22718269
    Country of ref document: EP
    Kind code of ref document: A1