WO2019234736A1 - Systems and methods for dynamic adaptation of a graphical user interface - Google Patents


Info

Publication number
WO2019234736A1
Authority
WO
WIPO (PCT)
Prior art keywords
gui
user
objects
target action
dynamically
Application number
PCT/IL2019/050632
Other languages
French (fr)
Inventor
Aviv RIFTIN
Original Assignee
Comvert Ltd.
Application filed by Comvert Ltd. filed Critical Comvert Ltd.
Publication of WO2019234736A1 publication Critical patent/WO2019234736A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482 Interaction with lists of selectable items, e.g. menus

Definitions

  • the present invention, in some embodiments thereof, relates to graphical user interfaces (GUIs) and, more specifically, but not exclusively, to systems and methods for dynamic adaptation of a GUI.
  • GUIs enable presentation of a large amount of data together on a screen, for example, for presenting a user with one of many possible actions.
  • GUIs include multiple elements, some of which are designed for interaction with a user. For example, a user may click on an icon, click on a hyperlink, click within a checkbox to make a selection, and manually enter text within a box.
  • Other elements of the GUI are designed for aesthetic purposes, for example, pictures, videos, images, and sound.
  • a method for dynamically updating a graphical user interface (GUI) based on a dynamic behavioral analysis of a user interacting with the GUI during a current session comprises: presenting a GUI comprising a plurality of objects on a display of a client terminal, wherein the GUI is associated with at least one target action performed by a user on the GUI, monitoring at least one interactive action performed on the GUI by a user during a current session, analyzing the at least one interactive action performed on the GUI during the current session, and creating a dynamically adapted GUI by dynamically adapting at least one object of the plurality of objects of the GUI according to the analysis of the at least one interactive action performed on the GUI, wherein the dynamic adaptation is performed according to a computed increase in probability of the user performing the at least one target action on the dynamically adapted GUI in comparison to a computed probability of the user performing the at least one target action on the GUI prior to the dynamic adaptation.
  • a system for dynamically updating a graphical user interface (GUI) based on a dynamic behavioral analysis of a user interacting with the GUI during a current session comprises: a non-transitory memory having stored thereon a code for execution by at least one hardware processor, the code comprising: code for presenting a GUI comprising a plurality of objects on a display of a client terminal, wherein the GUI is associated with at least one target action performed by a user on the GUI, code for monitoring at least one interactive action performed on the GUI by a user during a current session, code for analyzing the at least one interactive action performed on the GUI during the current session, and code for creating a dynamically adapted GUI by dynamically adapting at least one object of the plurality of objects of the GUI according to the analysis of the at least one interactive action performed on the GUI, wherein the dynamic adaptation is performed according to a computed increase in probability of the user performing the at least one target action on the dynamically adapted GUI in comparison to a computed probability of the user performing the at least one target action on the GUI prior to the dynamic adaptation.
  • a computer program product for dynamically updating a graphical user interface (GUI) based on a dynamic behavioral analysis of a user interacting with the GUI during a current session comprises: a non-transitory memory having stored thereon a code for execution by at least one hardware processor, the code comprising: instructions for presenting a GUI comprising a plurality of objects on a display of a client terminal, wherein the GUI is associated with at least one target action performed by a user on the GUI, instructions for monitoring at least one interactive action performed on the GUI by a user during a current session, instructions for analyzing the at least one interactive action performed on the GUI during the current session, and instructions for creating a dynamically adapted GUI by dynamically adapting at least one object of the plurality of objects of the GUI according to the analysis of the at least one interactive action performed on the GUI, wherein the dynamic adaptation is performed according to a computed increase in probability of the user performing the at least one target action on the dynamically adapted GUI in comparison to a computed probability of the user performing the at least one target action on the GUI prior to the dynamic adaptation.
  • the monitoring, the analyzing, and the creating are iterated to increase the computed probability of the user performing the at least one target action on the current dynamically adapted GUI in comparison to a computed probability of the user performing the at least one target action on a previously adapted GUI.
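To make the iterated flow above concrete, the following is a minimal Python sketch of the monitor/analyze/adapt loop. It is an illustration only: the function names, the profile labels, and the adaptation names are assumptions invented for the example, not part of the disclosure.

```python
# Minimal sketch of the iterated monitor -> analyze -> adapt loop.
# Function names, profile labels, and adaptation names are illustrative.
import random

def monitor_interactions():
    """Stand-in for real event capture (clicks, cursor paths, touches)."""
    return [random.choice(["click_spec_link", "hover_video", "idle"])]

def classify_behavior(actions):
    """Stand-in for the behavior-profile classifier described below."""
    return "methodical" if "click_spec_link" in actions else "spontaneous"

def select_adaptation(profile):
    """Pick the adaptation predicted to raise the target-action probability."""
    return {"methodical": "add_detailed_spec_table",
            "spontaneous": "personalize_banner"}[profile]

gui = {"adaptations": []}
for interval in range(3):                  # one iteration per monitoring interval
    actions = monitor_interactions()       # monitor
    profile = classify_behavior(actions)   # analyze
    gui["adaptations"].append(select_adaptation(profile))  # adapt
    print(interval, profile, gui["adaptations"])
```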
  • the analyzing comprises classifying the at least one interactive action performed on the GUI into one of a plurality of behavior profiles, and dynamically adapting the at least one object of the plurality of objects based on the classified behavior profile.
  • the dynamic adaptation of the at least one object of the plurality of objects of the GUI is selected according to the behavior profile, and according to the at least one target action.
  • the classification is iteratively performed based on at least one interactive action performed on the GUI obtained during sequential time intervals until a probability of the classification into one of a plurality of behavior profiles is above a threshold.
  • the classification is performed based on a mapping of each of the objects of the GUI designed for user interaction to one of the behavior profiles.
  • the classification is performed according to a layout of the plurality of objects of the GUI.
  • the plurality of behavior profiles are indicative of a current state of a dynamic state of the user, wherein the dynamic state may vary during the current session.
  • the plurality of behavior profiles are indicative of different possible personas of the user.
  • the plurality of behavior profiles are selected from the group consisting of: a Methodical type denoting a user favoring a GUI presenting logically organized details, a Spontaneous type denoting a user favoring a personalized GUI, a Humanistic type denoting a user favoring a GUI associated with a human touch, and a Competitive type denoting a user favoring a GUI that provides control features to the user.
  • the dynamic adaptation of the at least one object of the plurality of objects of the GUI is according to the behavior profiles, comprising: Methodical type: adding additional detailed data objects and/or re-organizing the presentation of the presented objects to increase order, Spontaneous type: personalizing at least one of the presented objects of the GUI, Humanistic type: adapting at least one of the presented objects for human interaction and/or based on human reactions, and Competitive type: adding objects that provide for control over at least one other object in the GUI.
  • the classification is performed by at least one classifier trained on a training dataset comprising a plurality of records, each record including a respective label indicative of a psychographic analysis of a respective user and interactions of the respective user with the respective GUI.
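As an illustration of training such a classifier, here is a minimal sketch assuming scikit-learn; the feature columns, their values, and the psychographic labels are invented for the example.

```python
# Minimal sketch (assuming scikit-learn) of training the behavior-profile
# classifier on labeled interaction records; features and labels are invented.
from sklearn.linear_model import LogisticRegression

# Each row: [num_clicks, cursor_distance_px, hover_seconds, data_entries]
X = [[12, 3400, 1.5, 0],
     [3,   900, 8.0, 2],
     [25, 7800, 0.5, 0]]
y = ["methodical", "humanistic", "spontaneous"]   # psychographic labels

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[10, 3000, 2.0, 0]]))          # predicted behavior profile
```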
  • the plurality of behavior profiles are indicative of different states of the process of performing the at least one target action.
  • the plurality of behavior profiles are selected from the group consisting of: an Accidental type denoting a user that accessed the GUI without a goal, a Know-Exactly type denoting a user that has a specific purpose in accessing the GUI, a Knows-Approximately type denoting users that know approximately what they want in using the GUI, and a Just-Browsing type denoting users that are in a browsing mode in using the GUI.
  • the dynamic adaptation of the at least one object of the plurality of objects of the GUI is according to the behavior profiles, comprising: Accidental type: maintaining the objects without adapting the GUI, Know-Exactly type: presenting a list of specific models of products for selection, Knows-Approximately type: presenting images of general categories of products for selection for further details, and Just-Browsing type: presenting a catalogue of products available for sale.
  • the dynamic adaptation of the at least one object of the plurality of objects of the GUI is selected by at least one classifier that receives the classified behavior profile and the at least one target action as input, wherein the at least one classifier is trained on training data that includes a plurality of records, each record storing a certain behavior profile of a plurality of behavior profiles, a certain adaptation of a plurality of possible adaptations, and an indication of whether or not the respective user performed the at least one target action when the dynamic GUI is adapted according to the certain adaptation.
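The training records described here might be structured as in the following sketch; the field names and example values are assumptions.

```python
# Hypothetical structure of one training record for the classifier that
# selects a GUI adaptation given a behavior profile and a target action.
from dataclasses import dataclass

@dataclass
class AdaptationRecord:
    behavior_profile: str    # e.g., "methodical"
    adaptation: str          # e.g., "add_spec_table"
    target_action: str       # e.g., "purchase"
    target_performed: bool   # label: did the user perform the target action?

training_data = [
    AdaptationRecord("methodical", "add_spec_table", "purchase", True),
    AdaptationRecord("methodical", "add_banner_video", "purchase", False),
]
```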
  • the GUI comprises at least one of a web page, and an application.
  • the at least one target action is selected from the group consisting of: clicking on a certain icon, selecting a certain graphical element, clicking on a certain link, registering as a user, performing a financial transaction, making a purchase, leaving contact details, and watching a video.
  • the at least one interactive action performed on the GUI includes at least one member selected from the group consisting of: active actions performed by the user, negative actions performed by the user, and lack of action by the user.
  • the at least one interactive action performed on the GUI includes at least one member selected from the group consisting of: clicking on a certain object of the plurality of objects of the GUI, entering data, movement patterns of a cursor across the GUI, physical user interface for interacting with the GUI, user touch patterns on a touchscreen presenting the GUI, gestures, voice activation patterns, adjustment of the GUI relative to the screen, adjustment of volume, selection of muting, selection of disabling of pop-ups, and no movement at all over a time interval.
  • the at least one interactive action performed on the GUI is stored in at least one data structure selected from the group consisting of: an image denoting movement of the cursor over the screen over a time interval, a vector denoting locations on the screen where the user touched and/or moved the cursor to, and metadata associated with the plurality of objects of the GUI indicating the actions performed by the user.
  • dynamically adapting at least one object of the plurality of objects of the GUI is selected from the group consisting of: adding a layer over at least one existing object, removing at least one object, adding at least one object, changing the color of at least one object, adjusting the position of at least one object within the GUI, changing the size of at least one object, and/or changing the orientation of at least one object.
  • the dynamically adapting at least one object of the plurality of objects is performed while maintaining existing content.
  • the dynamic adaptation of the at least one object of the plurality of objects of the GUI is selected according to at least one member of the group consisting of: a hardware of a screen on which the GUI is presented, a context of the plurality of objects of the GUI, content available within the boundaries of the GUI, tolerance of each object for being adapted, and graphical compatibility with the current GUI.
  • FIG. 1 is a flowchart of a process for automatically dynamically updating a graphical user interface based on a dynamic behavioral analysis of a user, in accordance with some embodiments of the present invention
  • FIG. 2 is a block diagram of components of a system for automatically dynamically updating a graphical user interface based on a dynamic behavioral analysis of a user, in accordance with some embodiments of the present invention
  • FIG. 3A is a schematic of a GUI of a gaming web site, prior to dynamic adaptation, in accordance with some embodiments of the present invention
  • FIG. 3B is a schematic of a dynamically adapted GUI based on dynamic adaptation of the GUI of FIG. 3A according to the analysis of monitored user interactions, in accordance with some embodiments of the present invention
  • FIG. 4 is a schematic depicting an exemplary client-server architecture for dynamic adaptation of a GUI according to user interactions with the GUI, in accordance with some embodiments of the present invention.
  • FIG. 5 is a block diagram depicting an exemplary dataflow for adapting a GUI according to user interactions with the GUI, in accordance with some embodiments of the present invention.
  • the present invention, in some embodiments thereof, relates to graphical user interfaces (GUIs) and, more specifically, but not exclusively, to systems and methods for dynamic adaptation of a GUI.
  • An aspect of some embodiments of the present invention relates to systems, an apparatus, methods, and/or code instructions (stored in a data storage device, executable by one or more hardware processors) for dynamically updating a GUI based on a dynamic behavioral analysis of interactive actions performed on a GUI by a user during a current session.
  • the GUI, for example, a web page and/or a screen of an application, is presented on a display of a client terminal.
  • the GUI includes multiple objects (also referred to herein as GUI elements, or elements), for example, icons, graphical elements, text entry boxes, and multi-media data objects.
  • the GUI is associated with one or more target actions for performance by the user, for example, clicking on an icon, clicking on a hyperlink, and/or selecting an item.
  • One or more interactive actions performed on the GUI by a user during the current session are monitored.
  • the interactive actions performed on the GUI are indicative of the behavior of the user interacting with the GUI during the current session.
  • Interactive actions include, for example, motion of a cursor across the GUI, clicking and/or selection of objects of the GUI, and patterns of contact of the user’s finger on a touchscreen.
  • the interactive action(s) performed on the GUI are monitored in real-time, during the current session when the GUI is presented on the display of the client terminal.
  • Prior interactive actions performed by the same user on different GUIs, and/or prior interactive actions performed by the same user on the same GUI during a different previous session (i.e., which has been interrupted by a time interval and/or by the user visiting other GUIs) are not necessarily considered.
  • the monitored interactive action(s) performed on the GUI is analyzed in real time, during the current session.
  • a dynamically adapted GUI is created by adapting one or more objects of the GUI according to the analysis.
  • the adaptation occurs in real-time as the user is interacting with the GUI.
  • the dynamic adaptation of the GUI is performed according to a computed (e.g., predicted) increase in probability of the user performing the target action on the dynamically adapted GUI, in comparison to a computed probability of the user performing the target action on the GUI (i.e., prior to the adaptation, or an earlier version of the adapted GUI prior to the current adaptation).
  • the GUI may be iteratively dynamically adapted multiple times in real time, as the user interacts with the GUI.
  • Each adaptation is designed to increase the probability of the user performing the target action on the current version of the adapted GUI in comparison to the probability of the user performing the action on the previous version of the adapted GUI.
  • the monitored interactive action(s) performed on the GUI is classified into one of multiple behavior profiles, for example, 3, 4, 6, or other number of behavior profiles.
  • the GUI is adapted according to the classified behavior profile of the user.
  • the behavior profiles may indicate, for example, a current mood of the user, where the current mood may vary during the same session of interacting with the GUI, i.e., the user may switch moods during the session.
  • the GUI is dynamically adapted accordingly.
  • the behavior profiles are based on different personas of users. The persona of the user is expected to remain static during the current session.
  • the dynamic adaptation of the GUI is not based on the programmed features of the objects themselves; for example, when the user presses a play-video button and the video plays on the GUI, the playing of the video is due to the video-playing object being activated by the user.
  • the dynamic adaptation of the GUI described herein is independent of the features programmed into the objects themselves.
  • the adaptation of the GUI may include presenting a layer with certain text over a certain icon. The layer with certain text placed over the certain icon is based on the analysis of the interactive action(s) performed on the GUI, which may include the user pressing the play video button optionally along with other user interactions.
  • the selection of the adaptation of adding the layer is performed based on the analysis, optionally according to the classified behavior profile.
  • the selection of the adaptation of adding the layer is performed to increase the probability of the user performing a target action, for example, filling out a form.
  • the adaptation is selected and implemented by the code, which is independent of the features programmed into the object (i.e., the play video button), since the play video button is designed to play the video and not to add a layer to a certain icon.
  • At least some implementations of systems, methods, apparatus, and/or code instructions described herein relate to the technical problem of designing a GUI for increasing the probability of a user performing a target action.
  • At least some implementations of systems, methods, apparatus, and/or code instructions described herein improve the technology of GUIs.
  • the improvement relates to the process of designing GUIs for increasing the probability of a user performing a target action.
  • At least some implementations of systems, methods, apparatus, and/or code instructions described herein dynamically adapt the GUI, in real time according to the behavior exhibited by the user in interacting with the GUI in the current session (e.g., starting from when the GUI is presented on the display, excluding interruptions such as closing of the GUI), i.e., interactive action(s) performed on the GUI.
  • This is in contrast to other approaches to designing a GUI, for example, designing a single GUI for different users (i.e., irrespective of the users) and/or selecting a GUI according to a user profile created based on defined user parameters (e.g., age, geographic location, gender, income, topics of interest), and/or according to previously observed interactive action(s) performed on the GUI in previous sessions accessing the GUI and/or previously observed interactive action(s) performed in accessing other GUIs.
  • the improvement, at least in some implementations, adapts the GUI according to the interactive action(s) performed on the GUI during the current session, and/or the interactive action(s) performed on the current version of the adapted GUI, which captures dynamic behavior changes of the user (i.e., dynamic interactive action(s) performed on the GUI), arising, for example, from mood changes and/or adaptive behavior of the user. For example, the same user may start off accessing the GUI in a hesitant manner, not knowing what he/she is looking for.
  • the GUI is adapted according to the current behavior, i.e., the current interactive action(s) performed on the GUI.
  • the user may be accessing the GUI with a Spontaneous focus.
  • the GUI may be adapted to help the user quickly narrow down choices and quickly make a selection.
  • the same user may access the GUI with a humanistic focus.
  • the GUI may be adapted to provide more human dimensions to the GUI, for example, presenting a chat session with an administrator associated with the GUI, and/or presenting a video of the people associated with the GUI.
  • At least some implementations of systems, methods, apparatus, and/or code instructions described herein improve the computing device hosting and/or presenting the GUI, for example, in terms of relatively reduced data storage requirements, and/or relatively reduced processing requirements, and/or relatively reduced network utilization in transmitting data from a server storing the GUI to the client terminal presenting the GUI.
  • the improvement in performance may be obtained, for example, by the dynamic adaptation of the GUI, which adapts one or several objects of the GUI while leaving the remaining objects intact.
  • the adaptation of the one or several objects requires less storage space, fewer processing resources to compute, and lower bandwidth to transmit, for example, in comparison to selecting a certain GUI from multiple available full GUIs.
  • the improvement in performance is based on classifying the interactive action(s) performed on the GUI into one of multiple behavioral profiles, which may include a small set of profiles, for example, about 3-6 or other number of profiles. Once the interactive action(s) performed on the GUI is classified into the profile, the GUI is adjusted according to the profile.
  • the amount of data storage space, processing resources, and/or network bandwidth required to classify the interactive action(s) performed on the GUI into one of the profiles and then adjust the GUI according to the profile may be smaller in comparison to adapting the GUI according to the interactive action(s) performed on the GUI.
  • For example, mapping each possible combination of interactions to a GUI adjustment may require significantly more memory and/or processing resources and/or bandwidth in comparison to mapping interactions to a small number of behavior profiles, and then mapping the small number of behavior profiles to GUI adaptations.
  • At least some implementations of systems, methods, apparatus, and/or code instructions described herein improve the display of a client terminal presenting the GUI.
  • the improvement may be based on improving the efficient usage of the limited space available on the display presenting the GUI, for the user to perform the target action.
  • Such efficient use of space may be especially significant for mobile devices in which the available screen space is relatively small.
  • the GUI is dynamically adapted according to the interactive action(s) performed on the GUI by adding a layer over existing objects, selectively displaying additional content according to the interactive action(s) performed on the GUI, changing the color of the object, adjusting the position of the object within the GUI, and/or changing the size of the object, to increase the probability of the user performing the target action.
  • the dynamic adaptations may be performed with minimal effect on the usage of the screen space, and/or minimal impact on the existing GUI, for example, avoiding clutter of the screen.
  • the classification of the monitored interactive action(s) performed on the GUI into one of multiple behavior profiles may improve computational performance of the computing device performing the classification, for example, the process of classifying the monitored interactive action(s) performed on the GUI into one of four behavior profiles may be performed more quickly, with fewer processing resources, and/or with lower memory requirements, in comparison to, for example, classifying the monitored interactive action(s) performed on the GUI into one of a large number of possible adaptations. Since the number of possible monitored interactive action(s) may be very large (e.g., large number of possible combinations to interact with the GUI), classifying one out of a large number of possible combinations into one of a small set of possibilities may be computationally more efficient than classifying one out of a large number of possible combinations into another one of a large number of combinations.
  • the accuracy of predicting the adaptation of GUI object(s) that will increase the probability of the user performing the target action may be increased based on the classification of the interactive action(s) performed on the GUI into one of a small number of behavior profiles in comparison to classifying the user interactive action(s) performed on the GUI into one of many possible adaptations.
  • different users that display similar behavior in terms of similar interaction action(s) performed on the GUI may respond similarly to similar adaptations of the GUI.
  • in contrast, different users that display different interactive action(s) performed on the GUI may not respond to adaptations of the GUI that are based directly on those different interactions.
  • two people with the same personality type may appear to interact differently with the GUI, but may respond similarly to the same GUI adaptation when classified into the same behavior profile. For example, one person clicking on a link for more information on a certain product, and another person watching a video showing how to build the product may be classified into the same behavior profile.
  • the two people may be more likely to buy the product when the GUI is adapted to highlight the technical specifications of the product.
  • the same two people may respond differently to different GUI adaptations that are created based on the different interaction types, since the GUI adaptations may not accurately reflect the intent of the respective user. For example, the first person clicking on the link may be presented with more links for different data unrelated to detail of the product, and the second person watching the video may be presented with additional videos unrelated to the product, in which case the two people may not be more likely to purchase the product.
  • the classification more accurately reflects the real time user interactive action(s) performed on the GUI, which may change from session to session or during the session itself, in comparison to for example, determining GUI adaptations based on past user interactive action(s) performed on the GUI and/or a static user profile. For example, the same user may behave differently during different sessions, resulting in different GUI adaptations with the same goal. Such user may not respond to a static GUI adaptation based on the past user history and/or user profile.
  • Improvement in performance of the computing device may be obtained, for example, by setting the accuracy of classifying the user interactions into the behavior profiles at a relatively low probability, which is sufficient for adapting of the GUI to statistically increase the probability of the user performing the target action.
  • the threshold for classifying the user interactions into the behavior profile may be, for example, about 60%, or about 70%, or about 80%, or other values.
  • the relatively low threshold may be sufficiently accurate, for example, when the prediction is correct, the adapted GUI may remain static when additional user interactions are classified into the same behavior profile, effectively increasing the probability that the classification is correct.
  • the adapted GUI is re-adapted when additional user interactions are classified into a different behavior profile, effectively correcting the initial error in classification.
  • Each relatively inaccurate classification may be performed with relatively fewer computational resources (e.g., processor utilization) and/or relatively fewer data storage requirements, since the iterations provide a correction mechanism for incorrect classifications.
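A minimal sketch of this low-threshold, self-correcting classification scheme follows; the 70% threshold, the score format, and the function name are illustrative assumptions consistent with the values discussed above.

```python
# Sketch of low-threshold classification with iterative correction:
# the GUI is re-adapted only when a different profile crosses the threshold.
CLASSIFICATION_THRESHOLD = 0.7   # deliberately permissive, per the text above

def update_profile(current_profile, scores):
    best = max(scores, key=scores.get)
    if scores[best] >= CLASSIFICATION_THRESHOLD and best != current_profile:
        return best, True    # profile changed: re-adapt the GUI
    return current_profile, False

profile, _ = update_profile(None, {"methodical": 0.75, "humanistic": 0.25})
profile, readapt = update_profile(profile, {"methodical": 0.20, "humanistic": 0.80})
print(profile, readapt)  # -> humanistic True (initial misclassification corrected)
```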
  • the present invention may be a system, a method, and/or a computer program product.
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
  • the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, and any suitable combination of the foregoing.
  • a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the Figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • behavioral analysis refers to the analysis of the interactive action(s) performed on the GUI by the user.
  • FIG. 1 is a flowchart of a process for automatically dynamically updating a graphical user interface based on a dynamic behavioral analysis of a user, in accordance with some embodiments of the present invention.
  • FIG. 2 is a block diagram of components of a system 200 for automatically dynamically updating a graphical user interface based on a dynamic behavioral analysis of a user, in accordance with some embodiments of the present invention.
  • System 200 may implement the acts of the method described with reference to FIG. 1, by processor(s) 202 of a computing device 204 executing code instructions stored in a memory 206 (also referred to as a program store).
  • Computing device 204 may be implemented as, for example, a client terminal, a server, a virtual server, a computing cloud, a virtual machine, a desktop computer, a thin client, and/or a mobile device (e.g., a Smartphone, a Tablet computer, a laptop computer, a wearable computer, glasses computer, and a watch computer).
  • Computing device 204 may be implemented as a standalone device (e.g., kiosk, client terminal, smartphone) that includes locally stored code instructions 206A that implement one or more of the acts described with reference to FIG. 1.
  • the locally stored instructions may be obtained from another server, for example, by downloading the code over the network, and/or loading the code from a portable storage device.
  • computing device 204 may dynamically adapt a locally stored GUI 208 A (e.g., stored in a data storage device 208) without necessarily communicating with an external server.
  • GUI 208 A is part of an application (e.g., game) downloaded from a server and locally executed by computing device 204.
  • the web page is presented on a display (e.g., physical user interface 214) of the computing device 204 and dynamically updated according to user interactions with the web page performed by the user manipulating one or more physical user interfaces 214 (e.g., user moving a cursor controlled by a mouse connected to computing device 204 and/or contacting a touchscreen of computing device 204).
  • Computing device 204 executing stored code instructions 206A may be implemented as one or more servers (e.g., network server, web server, a computing cloud, a virtual server) that host GUI 208A, which is remotely accessed by one or more client terminals 210 over a network 212.
  • client terminal 210 uses a locally stored web browser application 210A to access a web page of a web site hosted by computing device 204, where the web page includes GUI 208A stored by computing device 204.
  • the web page (i.e., GUI) is presented on a display of the client terminal and dynamically updated according to user interactions with the web page (e.g., user moving a cursor controlled by a mouse connected to the client terminal and/or contacting a touchscreen of the client terminal).
  • the GUI 208A may be locally updated by web browser 210A (and/or other code locally stored on client terminal 210) based on instructions received from computing device 204, for example, via a plug-in installed in web browser 210A, and/or other software interfaces (e.g., application programming interface (API), and/or software development kit (SDK)).
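One plausible shape for such instructions received from computing device 204 by the browser-side code is sketched below; the JSON schema, field names, and operation names are assumptions for illustration, not a disclosed protocol.

```python
# Hypothetical adaptation instruction pushed to the browser-side plug-in.
import json

instruction = {
    "session_id": "abc123",            # assumed identifier
    "operations": [
        {"op": "add_layer", "target": "play_button", "text": "See full specs"},
        {"op": "set_color", "target": "buy_button", "value": "#e63946"},
    ],
}
print(json.dumps(instruction, indent=2))
```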
  • Computing device 204 executing stored code instructions 206A may be implemented as one or more servers (e.g., network server, web server, a computing cloud, a virtual server) that provide services (e.g., one or more of the acts described with reference to FIG. 1) to one or more servers 216 over network 212.
  • Server(s) 216 may be web servers hosting web sites 216A that are accessed by client terminal(s) 210 over network 212.
  • Computing device 204 may provide, for example, software as a service (SaaS) to the server(s) 216, provide software services to server(s) 216 via a software interface (e.g., API, SDK), and/or provide functions using a remote access session to servers 216.
  • computing device 204 provides servers 216 hosting web pages with dynamic adaptation of their respective web pages in response to interactive action(s) performed on the GUI by users of client terminals 210 interacting with the web pages over network 212.
  • Computing device 204 may act as a server that provides, for example, an application for local download to the client terminal(s) 210 for local adaptation of GUIs executed on the respective client terminal(s) 210, an add-on to a web browser running on client terminal(s) 210 for local adaptation of web sites hosted by other web servers.
  • Hardware processor(s) 202 of computing device 204 may be implemented, for example, as a central processing unit(s) (CPU), a graphics processing unit(s) (GPU), field programmable gate array(s) (FPGA), digital signal processor(s) (DSP), and application specific integrated circuit(s) (ASIC).
  • Processor(s) 202 may include a single processor, or multiple processors (homogenous or heterogeneous) arranged for parallel processing, as clusters and/or as one or more multi core processing devices.
  • Memory 206 stores code instructions executable by hardware processor(s) 202, for example, a random access memory (RAM), read-only memory (ROM), and/or a storage device, for example, non-volatile memory, magnetic media, semiconductor memory devices, hard drive, removable storage, and optical media (e.g., DVD, CD-ROM).
  • Memory 206 stores code 206A that implements one or more features and/or acts of the method described with reference to FIG. 1 when executed by hardware processor(s) 202.
  • Computing device 204 may include data storage device 208 for storing data, for example, storing one or more GUIs 208A that are adapted as described herein, and/or one or more classifiers(s) 208B that are used in the process of adapting the GUI, as described herein.
  • Data storage device 208 may be implemented as, for example, a memory, a local hard-drive, virtual storage, a removable storage unit, an optical disk, a storage device, and/or as a remote server and/or computing cloud (e.g., accessed using a network connection).
  • Network 212 may be implemented as, for example, the internet, a local area network, a virtual network, a wireless network, a cellular network, a local bus, a point to point link (e.g., wired), and/or combinations of the aforementioned.
  • Computing device 204 may include a network interface 218 for connecting to network 212, for example, one or more of, a network interface card, a wireless interface to connect to a wireless network, a physical interface for connecting to a cable for network connectivity, a virtual interface implemented in software, network communication software providing higher layers of network connectivity, and/or other implementations.
  • Computing device 204 and/or client terminal(s) 210 include and/or are in communication with one or more physical user interfaces 214 that include a mechanism for a user to interact with the GUI and/or view the GUI.
  • Exemplary physical user interfaces 214 include, for example, one or more of, a touchscreen, a display, gesture activation devices, a keyboard, a mouse, and voice activated software using speakers and microphone.
  • Client terminal(s) 210 may be implemented as, for example, as a desktop computer, a server, a virtual machine, and a mobile device.
  • Exemplary mobile devices include, for example, a Smartphone, a Tablet computer, a laptop computer, a wearable computer, smart glasses, smart watches, and other smart wearables.
  • a GUI including multiple data objects is presented on the screen of the client terminal, for example, text boxes presenting text, multi-media boxes presenting images, pictures and/or videos, layers added over existing boxes, and/or data entry boxes designed for manual entry of data by a user (e.g., text entry, menu for selection of items, and/or check boxes).
  • the GUI and/or each box thereof may be formatted, for example, using certain fonts, certain font sizes, certain colors, certain shapes, layers, certain sizes, and/or certain arrangement on the screen.
  • the GUI may be, for example, a web page, and/or a screen of an application (e.g., game, banking application, online store purchase assistant application, medical record application, social media application).
  • the GUI may be locally rendered by code executing on the client terminal based on data received from a server (e.g., a web browser rendering a script and/or code provided by a web server) and/or the computing device, may be rendered by the server and/or computing device, may be stored on the server and/or computing device, and/or may be locally stored on the client terminal.
  • the presentation of the GUI may denote the start of the current session.
  • the GUI is associated with one or more target actions for the user to perform via the GUI, for example, clicking on a certain icon and/or graphical element (e.g., ad, link to another web site), registering as a user (e.g., to use a service provided by the web site), performing a financial transaction, making a purchase (e.g., purchasing a good and/or service offered by an online merchant operating the web page), watching a video, and leaving contact details (e.g., to be contacted in the future by a representative and/or agreeing to receive emails and/or ads).
  • the target action may be, for example, manually defined by an administrator, and/or automatically defined by code analyzing the GUI (e.g., a GUI is automatically analyzed and determined to be of an online store, and the target action is automatically determined to be a purchase of a product of the online store).
  • the target action may be stored, for example, as metadata associated with the GUI, and/or stored in association with the code that analyzes the user interactions with the GUI.
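For illustration, target-action metadata associated with a GUI might be stored as a record like the following; the schema and field names are assumptions.

```python
# Hypothetical metadata record associating target actions with a GUI.
gui_metadata = {
    "gui_id": "store_front_v2",
    "target_actions": ["purchase", "leave_contact_details"],
    "defined_by": "administrator",   # or "automatic_analysis"
}
```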
  • the GUI may be designed using standard methods, for example, a single GUI for all visitors to the web site and/or all users of the application.
  • the GUI may be selected from a set of GUIs and/or initially designed according to a stored user profile of the user accessing the web site and/or application.
  • the stored user profile may denote substantially static parameters of the user, for example, the user's geographic location, gender, income, and interests. It is noted that the initial GUI selected and/or generated according to the user profile is then dynamically adapted according to the interactive action(s) performed on the GUI by the user interacting with the GUI during the current session. Alternatively or additionally, the GUI may be selected based on a prior classification of the user interactions into the behavior profile, during a prior session.
  • the presented GUI may be the adapted GUI that was presented when the user (or other users) performed the target action during the previous session.
  • Such selection of a previously adapted GUI and/or an initial GUI based on a prior classification of the user (or other users) may increase the efficiency of computation of the current session in adaptation of the GUI, for example, fewer and/or simpler adaptations may be required to increase the probability of the user performing the target action.
  • the GUI may be selected according to the screen on which it is presented, for example, a different version of the GUI for a desktop than a mobile device.
  • the interactive action(s) performed on the GUI by the user is monitored.
  • the interactive action(s) performed on the GUI are indicative of the behavior of the user interacting with the GUI.
  • Monitoring may be performed locally at the client terminal (e.g., by code executing locally at the client terminal, which may communicate with the computing device), locally at the server (e.g., web server), and/or by the computing device.
  • the interactions of the user with the GUI may be locally monitored by a plug-in of a web browser that displays the GUI (e.g., web site) and/or by code locally installed and executed on the client terminal that monitors the user interactions with the GUI.
  • the monitoring may be performed dynamically in real time.
  • the monitoring may include what the user does, including active actions and/or positive actions, for example, selection of icons, and/or clicking on links.
  • the monitoring may include negative actions of the user, for example, avoidance of certain icons, and/or closing of objects, for example, clicking on an ad to close the ad, and/or clicking on a playing video to close the video.
  • the monitoring may include lack of action of the user, for example, hesitation whether to perform a selection or not, or lack of activity.
  • the monitoring may be performed per event (e.g., per user click on an icon, and/or per user manual data entry, and/or per movement of the cursor on the screen).
  • the monitoring may be performed per action, where each detected action is one of several possible actions (e.g., each icon is associated with a different action, and the action is determined according to which icon was clicked on).
  • the monitoring is performed over a time interval, which may be absolute and/or relative, for example, every about 5 seconds, every about 10 seconds, every about 15 seconds, until the first selection (e.g., mouse click), until the web page fully loads (e.g., until each multimedia object is loaded), until a video completes playing.
  • Exemplary interactive action(s) performed on the GUI by the user interacting with the GUI include one or more of: clicking on one or more of the objects of the GUI (e.g., icon, menu, link), entering data (e.g., entering a username into a field), movement patterns of a cursor across the GUI (e.g., hovering over an object, direct movement to an object, random movement across the screen, navigation between different locations on the GUI), physical user interface for interacting with the GUI (e.g., does the user use a mouse, voice activation, a touch screen, a keyboard, or a combination of the aforementioned), user touch patterns on a touchscreen presenting the GUI (e.g., direct contact on a location, touching in patterns), gestures, voice activation patterns, adjustment of the GUI relative to the screen (e.g., zoom in on certain areas of the GUI, scrolling along the GUI when the GUI is too large to fit on the screen at once, setting the size of the GUI to match the size of the screen so that the entire GUI is visible), adjustment of volume, selection of muting, selection of disabling of pop-ups, and no movement at all over a time interval.
  • the monitored interactive action(s) performed on the GUI may include a contextual recognition of the objects of the GUI that the user interacted with, for example, text and/or images appearing on the object (e.g., button stating“Sign-in”, icon showing a thumbs up), and/or purpose of the object (e.g., registration button, purchase button), and/or type of media associated with the object (e.g., video, image, link to another web site, text).
  • the monitored interactive action(s) performed on the GUI by the user interacting with the GUI may be stored, for example, as one or more of the following data structures: an image denoting movement of the cursor over the screen over a time interval (e.g., lines denoting paths taken by the cursor during the time interval), a vector denoting locations on the screen where the user touched and/or moved the cursor to (e.g., array of pixel coordinates), and/or metadata associated with the objects of the GUI indicating the actions performed by the user (e.g., click on hyperlink, hovering over an icon, data entry into a field).
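A minimal sketch of these three storage forms follows; the array dimensions, coordinates, and field names are assumptions for illustration.

```python
# Hypothetical in-memory forms of the monitored interactions.
import numpy as np

# Image: pixels the cursor passed over during the time interval.
cursor_trail = np.zeros((1080, 1920), dtype=np.uint8)
cursor_trail[615, 400:450] = 1          # a short horizontal cursor path

# Vector: (x, y) screen coordinates the user touched or moved the cursor to.
touch_locations = [(412, 615), (418, 640), (430, 702)]

# Metadata: per-object record of the actions performed by the user.
object_metadata = {
    "signup_link": {"events": ["click"]},
    "buy_button":  {"events": ["hover"], "hover_ms": 2300},
}
```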
  • the monitored interactive action(s) performed on the GUI is analyzed.
  • the analysis may be performed according to the data structure denoting the user interactions with the GUI.
  • the analysis is performed by classifying the monitored interactive action(s) performed on the GUI into one of multiple behavior profiles.
  • One or more objects of the GUI are adapted according to the classified behavior profiles, as described herein.
  • the analysis is performed by classifying the monitored interactive action(s) performed on the GUI into one or more object adaptations of the GUI.
  • the analysis attempts to identify a real time objective and/or question of the user accessing the GUI, to answer the question: “Why is this user browsing the site?”
  • the classification is performed by one or more classifiers.
  • exemplary classifiers include: Multiple Instance Learning (MIL) based methods, one or more neural networks which may include an individual neural network and/or an architecture of multiple neural networks (e.g., convolutional neural network (CNN), fully connected neural network), deep learning based methods, support vector machine (SVM), logistic regression, k-nearest neighbor, decision trees, and a mapping function.
  • the classification may be performed based on a single set of monitored user interactive action(s) performed on the GUI, for example, a data structure denoting a certain target interaction performed by the user is mapped into a corresponding behavior profile. Different target interactions are mapped to different corresponding behavior profiles.
  • a neural network that receives as input an image including line(s) denoting movement of the cursor across the GUI, and optionally receives the GUI as input, classifies the movement image and optionally the GUI into one of the behavior profiles.
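A minimal sketch of such a network, assuming PyTorch is used; the two-channel input (cursor-trail image stacked with a rendering of the GUI) and the layer sizes are illustrative assumptions rather than the disclosed design.

```python
# Minimal sketch (assuming PyTorch) of a CNN that maps a cursor-movement
# image, optionally stacked with a GUI rendering, to profile probabilities.
import torch
import torch.nn as nn

class ProfileCNN(nn.Module):
    def __init__(self, num_profiles=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 8, kernel_size=5, stride=2),  # ch 0: trail, ch 1: GUI
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(8 * 4 * 4, num_profiles)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f).softmax(dim=-1)

x = torch.zeros(1, 2, 128, 128)   # batch of one (trail, GUI) image pair
print(ProfileCNN()(x))            # probabilities over the behavior profiles
```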
  • the classification is iteratively performed using sequentially acquired data indicative of user interactive action(s) performed on the GUI over sequential time intervals until a threshold is met, for example, a probability threshold.
  • the interaction data is mapped into a first behavior profile with a probability of 20%.
  • the interaction data is mapped into a second profile with a probability of 40%.
• the interaction data is mapped into the first profile with a probability of 85%, which is above a classification threshold of 80%. Therefore, the interactive action(s) performed on the GUI is classified into the first profile.
  • the most recent interaction data may be classified, and/or the cumulative interaction data from the start (or using a sliding window of several time intervals) may be classified.
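• A minimal TypeScript sketch of this iterative thresholded classification follows, assuming a classify function that wraps the trained classifier and returns one probability per behavior profile; names and the 0.8 default are illustrative.

```typescript
// A minimal sketch of iterative classification over sequential time intervals,
// assuming `classify` wraps the trained classifier (e.g., a neural network)
// and returns one probability per behavior profile.
function classifyUntilThreshold<Interval>(
  intervals: Interval[],
  classify: (interval: Interval) => Record<string, number>,
  threshold = 0.8, // e.g., the 80% classification threshold in the example above
): string | null {
  for (const interval of intervals) {
    // Classify the most recent interval; a sliding window over several
    // intervals (or the cumulative data from the start) could be passed instead.
    const scores = classify(interval);
    for (const [profile, probability] of Object.entries(scores)) {
      if (probability >= threshold) {
        return profile; // e.g., the first profile at 85%, above the 80% threshold
      }
    }
  }
  return null; // threshold not reached; keep monitoring
}
```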
  • the classification may be performed by mapping each of the objects of the GUI designed for user interaction to one of the behavior profiles.
  • One of the behavior profiles is selected according to which one of the objects of the GUI the user interacts with.
  • each interaction is associated with a certain probability of the corresponding behavior profile. For example, each interaction increases the probability of the corresponding profile by 2%.
  • the behavior profile is selected when the user has interacted sufficiently with one or more objects to reach or exceed a probability threshold (e.g., 70%, 80%, or other value). For example, multiple interactions with different objects increase the probabilities of the corresponding profile accordingly, until the probability of one of the profiles reaches or exceeds the threshold.
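• The object-to-profile mapping approach could be sketched as follows; the mapping table, the 2% increment, and the 70% threshold are illustrative values taken from the examples above.

```typescript
// A sketch of the object-to-profile mapping approach: each interaction
// increments the probability of the profile mapped to the interacted object,
// until one profile reaches the threshold. All names and values are illustrative.
const objectToProfile: Record<string, string> = {
  detailsIcon: "methodical",
  socialSignIn: "humanistic",
  // ...one entry per GUI object designed for user interaction
};

function selectProfile(
  interactedObjectIds: string[],
  increment = 0.02, // each interaction adds 2% to the mapped profile
  threshold = 0.7,  // probability threshold for selecting a profile
): string | null {
  const probabilities: Record<string, number> = {};
  for (const id of interactedObjectIds) {
    const profile = objectToProfile[id];
    if (!profile) continue; // object not mapped to any behavior profile
    probabilities[profile] = (probabilities[profile] ?? 0) + increment;
    if (probabilities[profile] >= threshold) return profile;
  }
  return null; // no profile has reached the threshold yet
}
```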
  • the classification may be performed according to the user interactions based on a layout of the objects of the GUI.
• the analysis may include determining the distance that the user moved the cursor along the screen to make a selection from a current position, and/or the number of searches and/or clicks the user performed to reach a target web page.
  • the layout of the GUI is fed into the classifier for analysis in association with the interaction data, for example, the image of the GUI is fed into a convolutional neural network.
  • the behavior profiles denote a current state of a dynamic state of the user, for example, the mood of the user.
  • the dynamic state of the user may vary from session to session, and/or may vary during the session itself.
  • the behavior profiles are indicative of different possible personas of a user.
• the interactions of each user with the GUI reflect the persona of the user.
• Each user is associated with one personality that may remain static throughout the session, and/or may dynamically change throughout the session.
  • the persona is determined from the analysis of the interaction of the user with the GUI (i.e., interactive action(s) performed on the GUI), which is indicative of behavior of the user, rather than being based on static user data which may be manually entered by the user, for example, past user use of the web page and/or a user profile.
  • Such past use of the web page and/or user profile may not capture the persona of the user in a manner suitable for adaptation of the GUI to increase the probability of the user performing the target action.
  • exemplary behavior profiles are listed below.
  • the exemplary interactions that may be indicative of the respective persona type are not necessarily limiting.
• user interactions may not "fit" into persona types based on human logic, but such "human illogical" associations may be found by the classifier, for example, by a neural network:
  • Methodical type denoting a user favoring a GUI presenting logically organized details.
  • the interactions classified into the methodical type may include, for example, clicking on an icon indicative of details.
• Spontaneous type denoting a user favoring a personalized GUI. The interactions classified into the spontaneous type may include, for example, avoidance of objects that provide additional details.
  • Humanistic type denoting a user favoring a GUI associated with a human touch.
  • the interactions classified into the humanistic type may include, for example, clicking a sign-in icon for a social media site, and/or posting a comment.
• Competitive type denoting a user favoring a GUI that provides control features to the user. The interactions classified into the competitive type may include, for example, the user adjusting the GUI and/or adjusting one or more objects of the GUI (e.g., moving objects, resizing the GUI, closing objects, turning videos off/on, and adjusting color and/or sound).
  • the above described exemplary behavior profiles may be based on a psychographic analysis of classification categories of possible users (e.g., customers) accessing the GUI.
  • the behavior profiles are indicative of behavior of users that are classified into the customer categories, which may be determined based on a psychographic analysis.
  • the interactions classified into each type may not be obvious and/or necessarily make logical sense.
• the classifier may be trained based on a training dataset that includes multiple records, each record including a respective label indicative of a psychographic analysis of a respective user (e.g., manually determined by an expert in psychographics, and/or based on each user filling out a questionnaire (e.g., a validated tool) that classifies each user according to their answers) and the interactions of the respective user with the respective GUI (e.g., based on the data structure storing the interactions).
• When training a neural network, individual interactions may not necessarily map to behavior categories (e.g., psychographic categories) based on human-recognized logic; however, the set of interactions may be learned by the classifier for classifying the interactions into the behavior profiles.
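• For illustration, one record of such a training dataset might look like the following TypeScript type; the label values and field names are assumptions.

```typescript
// Illustrative shape of one record of the training dataset; the label values
// and field names are assumptions for the sake of the example.
interface TrainingRecord {
  // Label from the psychographic analysis (expert-determined or derived from
  // a validated questionnaire), as described above.
  label: "methodical" | "spontaneous" | "humanistic" | "competitive";
  interactions: unknown; // the stored interaction data structure(s)
  gui?: string;          // optionally, a representation of the respective GUI
}
```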
• Methodical types feel a need to be prepared and organized to act. For them, task completion is its own reward. These individuals appreciate facts, hard data, and information presented in a logical manner as documentation of truth. They enjoy organization and completion of detailed tasks. They do not appreciate the "personal touch," and they abhor disorganization. They fear negative surprises and irresponsibility above all. Those who are Methodical have a strong internal frame of reference. They prefer to think and speak about details and specifics. They compare everything to a standard ideal and look for mismatches (what's wrong or what's missing).
• Spontaneous types feel a need to live in the moment. Their sensing preference makes them most grounded in the immediate world of the senses. This, coupled with their perceiving preference, helps them to remain poised and present in any situation. They are available, flexible, and engaged in a personal quest for action and impact, which defines who they are. For the Spontaneous, integrity means the unity of impulse with action. These individuals appreciate the personalized touch and are in search of new and exciting experiences. They dislike dealing with traditional details and are usually quick to reach a decision. They fear "missing out" on whatever life has to offer.
  • Humanistic types have a tendency to put others’ needs before their own and are often uncomfortable accepting gifts or allowing others to do anything for them. They are very creative and entertaining. They enjoy helping others and highly value the quality of relationships. They are usually slow to reach a decision. They fear separation. Those who are Humanistic are good listeners and are generally willing to lend a sympathetic ear. They focus on acceptance, freedom, and helping. They generally prefer the big picture. They greatly value human development, including their own.
  • each user may be classified into one of the psychographic categories based on real time interactions of the user with the GUI.
  • the GUI is adapted according to the psychographic category.
  • Another set of exemplary behavior profiles is described below.
• the exemplary behavior profiles represent categories of potential customers at different stages of the target action performing process (e.g., conversion process, buying process). It is noted that none of the behavior profiles below are necessarily more likely to perform the target action in comparison to other behavior profiles. For example, the person who knows exactly what he/she wants may be easily distracted by other offers, whereas the person who is simply browsing may become an immediate buyer.
• the behavior profiles may describe where people may be within their own minds and/or within the target action taking (e.g., buying) cycle. It is the adaptation of the GUI according to the classified behavior profile that is performed to increase the probability of the user performing the target action, in comparison to the probability of the user performing the target action on the non-adapted GUI and/or the earlier version of the adapted GUI.
• the Accidental types include those who just landed upon the GUI (e.g., website) by mistake, without any relevant goal or question.
  • the interactions classified into the accidental type may include, for example, pressing the back button, taking no action, and/or pressing a link to another web site.
  • the Know-Exactly types know exactly what they want, down to the model number (or its equivalent). Included in this category are those who might not be able to pinpoint a unique identifier but can describe exactly what they need.
• the interactions classified into the know-exactly type may include, for example, entering specific data (e.g., model number) into a search engine, and/or clicking on a specific icon to access specific data (e.g., clicking on an image of a specific product to learn more about it).
• Knows-Approximately type denotes users that know approximately what they want in using the GUI.
• the Knows-Approximately types are in the market to buy, or in the service system to perform a certain action, but they have not made their final decision on exactly what they want to do.
• the interactions classified into the knows-approximately type may include, for example, entering general and/or vague data (e.g., key words not specific to one product) into a search engine, and/or clicking on a general icon to access general data (e.g., clicking on an image of a category of multiple products to present additional, more specific products).
• the Just-Browsing type represents window shoppers who aren't necessarily planning to take any specific action. In many ways, these individuals can be difficult to distinguish from the previous two categories of potential customers, since these are people who, when they run across just the right thing, will take action.
• the interactions classified into the just-browsing type may include, for example, a random pattern of interacting with the GUI.
  • one or more adaptations of one or more objects of the GUI are selected according to the analysis of the monitored user interaction with the GUI, optionally in view of the target action associated with the GUI.
• the adaptation may be selected according to the classified behavior profile, optionally in view of the target action associated with the GUI.
  • the adaptations may be computed by the classifier that classifies the user interactions into the adaptations, as described herein.
  • the classifier may further receive the target action and the user interactions for classification into the adaptation.
  • the adaptations may be computed based on the classified behavior profile. Each behavior profile may be mapped to a set of possible adaptations. The certain adaptation to be performed may be selected according to the target action associated with the GUI.
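• A minimal sketch of mapping each behavior profile to a set of possible adaptations, with the target action selecting the certain adaptation to perform; all profile, action, and adaptation names below are hypothetical.

```typescript
// Map each behavior profile to a set of candidate adaptations, and pick the
// one matching the target action; all names here are illustrative.
interface Adaptation {
  kind: string;      // e.g., "addChatWindow", "addDetailObjects"
  payload?: unknown; // adaptation-specific parameters
}

const profileAdaptations: Record<string, Record<string, Adaptation>> = {
  humanistic: {
    register: { kind: "addChatWindow" },
    purchase: { kind: "showUserFeedback" },
  },
  methodical: {
    purchase: { kind: "addDetailObjects" },
  },
};

function selectAdaptation(profile: string, targetAction: string): Adaptation | null {
  // Pick the adaptation mapped to the classified profile for the target action.
  return profileAdaptations[profile]?.[targetAction] ?? null;
}
```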
  • the adaptation may be selected according to the hardware of the screen on which the GUI is presented, for example, a different adaptation of the GUI for a desktop than a mobile device due to differences in screen size.
• the adaptation may be selected according to the context of the objects of the GUI, for example, to maintain the "feel" and/or "look" of the GUI.
  • the size and/or color and/or fonts of the additional layer may be selected according to the size and/or color and/or fonts of existing objects of the GUI.
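• A browser-side sketch of such style matching, assuming DOM APIs are available; the element id and layer contents are hypothetical.

```typescript
// Browser-side sketch of style matching (assumes DOM APIs): the added layer
// copies font and color from an existing object so the adaptation keeps the
// "feel" and "look" of the GUI. The element id and text are hypothetical.
function addMatchedLayer(referenceObjectId: string, text: string): void {
  const reference = document.getElementById(referenceObjectId);
  if (!reference) return;
  const style = getComputedStyle(reference);
  const layer = document.createElement("div");
  layer.textContent = text;
  layer.style.fontFamily = style.fontFamily; // match existing fonts
  layer.style.fontSize = style.fontSize;     // match existing sizes
  layer.style.color = style.color;           // match existing colors
  // Position the layer in proximity to the existing object.
  reference.insertAdjacentElement("afterend", layer);
}
```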
• the adaptations are performed based on a qualification of the subjective experience of the user in comparison to the objective question (as described with reference to act 106). For example, "Does the user feel his/her question(s) is/are answered?", "Does the user feel comfortable enough to make progress?"
  • the adaptations are performed to the GUI to better meet the user’s objectives, based on the assumption that the probability of the user performing the target action is increased when the user’s objectives are met in comparison to the probability of the user performing the target action when the user’s objectives are not met.
  • the following represent exemplary questions, reflecting information priorities and/or pace of deliberations by each of the exemplary psychographic categories described with reference to act 106.
  • the adaptations may be performed to answer the questions.
  • the adaptations may be computed by another classifier that receives the classified behavior profile and the target action as input, and outputs a certain adaptation.
• the classifier may be trained based on training data that includes multiple records, each record storing a certain behavior profile of the possible behavior profiles, a certain adaptation of the possible adaptations, and an indication of whether or not the respective user performed the target action(s) when the dynamic GUI is adapted according to the certain adaptation.
• the classifier may be trained based on each iteration described with reference to act 114, in which adaptations are made to the GUI in an effort to increase the probability that the user performs the target action.
  • the classifier may be locally dynamically trained, and/or centrally trained (e.g., stored on a server) by transmitting the data collected during the iterations from the client terminal to the server.
  • Exemplary adaptations include: adding a layer over existing objects, removing one or more objects, adding one or more objects, changing the color of the object, adjusting the position of the object within the GUI, changing the size of the object, and/or changing the orientation of the object.
  • Exemplary adaptations according to behavior type to increase the probability of the user performing the target action include:
• Methodical type: adding additional detailed data objects and/or re-organizing the presentation of the presented objects to increase order.
• Spontaneous type: personalizing one or more of the presented objects of the GUI.
• Humanistic type: adapting one or more of the presented objects for human interaction and/or based on human reactions. For example, adding a chat window to chat with customer support of the web site, and/or presenting feedback provided by other users.
• Competitive type: adding objects that provide for control over one or more other objects in the GUI.
  • the adaptations are selected according to the content available within the boundaries of the GUI, and/or within the web site itself, including links from the web page (i.e., GUI) to other web pages which may be part of the same site.
  • the content may be automatically collected and/or analyzed, for example, by code that crawls the content of the GUI (e.g., web page) and optionally follows links to other web pages which may be part of the same site. For example, popularity messages are provided as useful answers for humanistic types.
  • the adaptations are selected according to the tolerance of each object for being adapted, for example, for adding additional layer(s) over the respective object.
  • a button in a gaming GUI may accept an additional Ribbon element in proximity, without causing interference, while the same button within an e-commerce GUI requires a different GUI approach, such as a rectangular banner.
  • the tolerance of each object for being adapted, optionally in view of the GUI may be stored, for example, as a set of rules, in a database, a function based on a machine learning algorithm, and/or manually entered by a user.
  • the adaptations are selected according to graphical compatibility with the current GUI, for example, based on contextual graphical blocks that form the GUI.
• the adaptation is selected to be accepted by the user as a natural part of the current GUI. For example, a blue ribbon of a social media site is automatically identified by the code as being held within a pink container.
  • the following adaptations may be computed accordingly: selecting a pre-created ribbon shape (e.g., from a set of stored pre-created ribbon shapes), inserting additional text into the selected ribbon, coloring the selected ribbon according to colors that exist in the currently presented GUI, and positioning the ribbon relative to the pink container.
  • the adaptation of each object according to graphical compatibility may be stored, for example, as a set of rules, in a database, a function based on a machine learning algorithm, and/or manually entered by a user.
• the set of possible GUI adaptations may be automatically computed and/or defined in advance, and stored for real-time adaptation of the GUI by selecting the adaptation from the set.
• Each possible GUI adaptation may be manually defined, for example, by the administrator of the GUI, and/or automatically created GUI adaptations may be manually approved, for example, by the administrator of the GUI.
  • a dynamically adapted GUI is created by dynamically adapting one or more objects of the GUI according to the analysis of the monitored interactive action(s) performed on the GUI, optionally in view of the target action associated with the GUI.
  • the dynamic adaptation of object(s) of the GUI is performed to increase the probability of the user performing the target action(s) on the dynamically adapted GUI in comparison to the user performing the target action(s) on the GUI prior to the dynamic adaptation.
  • the adaptation of the object(s) of the GUI is performed while maintaining existing content.
  • the size, shape, color, and/or location of objects are adjusted while the content itself (e.g., text, images, pictures, links, videos, other multimedia objects) are maintained.
  • content may be adjusted, for example, objects storing irrelevant content (e.g., according to the behavior profile) are removed and/or additional objects storing relevant content (e.g., according to the behavior profile) are added. For example, for a detail oriented user, additional detail objects are added. For an organized user, excess detail is removed.
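• A browser-side sketch of such content-level adaptation; the data-* attribute convention used to mark relevance per profile is an assumption for illustration.

```typescript
// Browser-side sketch of content-level adaptation: objects marked as
// irrelevant for the classified profile are removed, and objects marked as
// relevant are revealed. The data-* attribute convention is an assumption.
function adaptContentForProfile(profile: string): void {
  document
    .querySelectorAll<HTMLElement>(`[data-irrelevant-for~="${profile}"]`)
    .forEach((el) => el.remove()); // e.g., remove excess detail for an organized user
  document
    .querySelectorAll<HTMLElement>(`[data-relevant-for~="${profile}"]`)
    .forEach((el) => { el.hidden = false; }); // e.g., reveal added detail objects
}
```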
  • the target action(s) performed by the user on one or more of the GUI objects is detected. For example, the user makes a purchase, clicks on a target icon, enters personal information into a form, and/or views an advertisement.
  • the monitoring of the user interactions is performed based on the dynamically adapted GUI.
  • the user interactions are analyzed (as described with reference to act 106), which may result in a re-classification into another behavior profile, or maintenance of the existing behavior profile.
  • Another adaptation of the same object and/or another object of the GUI may be performed.
  • the current dynamically adapted GUI may undergo another adaptation by dynamically adapting the currently dynamically adapted GUI according to the additional adaptation.
  • the iterative adaptations to the GUI are performed to increase the probability (i.e., with each subsequent iteration) that the user performs the target action.
  • the iterations may be dynamically performed during the same user session (i.e., current session), according to changes in the interactive actions performed on the GUI (e.g., changing user behavior), and/or according to increasing knowledge gained about the current user interactions.
  • Each iteration is designed to increase the probability that the user performs the target action using the current version of the dynamically adapted GUI over the probability of performing the target action using the previous version of the dynamically adapted GUI.
• the rate of the dynamic adjustment of the GUI may be set, for example, manually by an administrator and/or automatically by code, and/or determined in real time, for example, according to changes in the behavior profile.
  • the rate of adjustment may be set, for example, to once every 5 minutes to avoid user confusion.
• the rate may be automatically set according to the user interactive action(s) performed on the GUI. Users that change their mind by performing different interactions (which may be mapped to different behavior profiles) are presented with dynamically adapted GUIs that keep up with the changing interactive action(s) performed on the GUI by the user.
  • the iterations may be performed based on an adaptation of the GUI that involves a target user interaction, which is different than the target action.
• the adaptation may be to present an object in which the user may enter key words to be fed into a search engine, and the target action is the user making a purchase.
  • the search engine is presented within the GUI to aid the user in searching for a product to buy.
• the reaction of the user to the adaptation of the GUI, i.e., whether or not the user performed the target user interaction, may be analyzed to determine the next adaptation of the GUI. For example, when the user has not used the search engine, or when the user attempted to use the search engine but the search engine is not working, another adaptation of the GUI may be performed, for example, presenting suggested products to the user rather than a search engine.
• the iterations may be performed until a stop criterion is met, for example, a time limit and/or a number of adaptations.
• the stop criterion may indicate that the adaptations are not effective in leading the user to perform the target action.
• a default adaptation may be performed when the stop criterion is met, for example, a chat service window for the user to directly contact the administrator of the GUI (e.g., to ask about a specific product in the case of an online store).
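• Putting the iterations, the adjustment rate, the stop criterion, and the default fallback together, a minimal sketch might look like the following; all callbacks and default values are illustrative.

```typescript
// A minimal sketch tying together the iterations, the adjustment rate, the
// stop criterion, and the default fallback; all callbacks and values are
// illustrative.
async function adaptationLoop(opts: {
  classifyAndAdapt: () => void;  // one monitor/classify/adapt iteration
  targetActionDone: () => boolean;
  applyDefault: () => void;      // e.g., present a chat service window
  rateMs?: number;               // e.g., at most one adaptation per 5 minutes
  maxIterations?: number;        // stop criterion: number of adaptations
}): Promise<void> {
  const { rateMs = 5 * 60_000, maxIterations = 10 } = opts;
  for (let i = 0; i < maxIterations; i++) {
    if (opts.targetActionDone()) return; // target action performed; stop
    opts.classifyAndAdapt();
    // Pace the adaptations to avoid confusing the user.
    await new Promise((resolve) => setTimeout(resolve, rateMs));
  }
  opts.applyDefault(); // adaptations were not effective; apply the default
}
```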
• the classifier(s) used in act 106 is trained. It is noted that training of the classifier may be performed on a different computing device and/or different server, independently of execution of acts 102-114. For example, the classifier may be trained prior to execution of act 102. The classifier may be updated based on the results of execution of acts 102-114, as described with reference to act 118.
  • the classifier may be trained based on a training dataset that includes a label for each user classifying the user into one of the possible behavioral categories.
• the label may be created, for example, manually determined by an expert in the behavior categories, manually determined by the users themselves, based on each user filling out a questionnaire (e.g., a validated tool) that classifies each user according to their answers, and/or based on code that automatically analyzes other aspects of the user (e.g., user profile, demographics, past shopping history, comments made on social media sites).
  • the training dataset includes the user interaction with the GUI, optionally stored in a suitable data structure as described herein.
  • the classifier is trained according to the training dataset, to classify a new user into one of the possible behavior categories based on an input of user interactions with the GUI.
  • the classifier(s) are updated according to the data collected during the current session. For example, the user interactions, the classification results (e.g., behavior profile(s)), the target action(s), the GUI, and/or the selected adaptation(s) may be collected by the client terminals, computing device, and/or web server, and transmitted to a host of the classifier for training the classifier (e.g., the computing device, a remote server). Updating the classifier based on additional data collected from different users using different client terminals and/or different user interfaces to interact with different GUIs to perform different target actions increases the accuracy of the classifier.
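• An illustrative payload and upload call for such classifier updates; the URL and field names are placeholders, not part of the disclosed embodiments.

```typescript
// Illustrative payload and upload call for updating the classifier with data
// collected during the current session; the URL and field names are
// placeholders.
interface SessionReport {
  interactions: unknown;        // the monitored interactive actions
  behaviorProfile: string;      // the classification result
  targetActions: string[];      // the target action(s) associated with the GUI
  adaptationsApplied: string[]; // the selected adaptation(s)
  targetActionPerformed: boolean;
}

async function reportSession(report: SessionReport): Promise<void> {
  // The classifier host may be the computing device or a remote server.
  await fetch("https://example.com/classifier/update", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}
```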
  • GUI 302 denotes a registration page for the gaming web site.
  • the user interactions are analyzed as described herein.
• the analysis based on real time user interactions determines that the user objective is interest in popular social gaming, and the question the user has is "Are many people using this game?"
  • FIG. 3B is a schematic of a dynamically adapted GUI 304 based on dynamic adaptation of GUI 302 of FIG. 3A according to the analysis of monitored user interactions, in accordance with some embodiments of the present invention.
  • a new banner 306 is added to GUI 302 to create GUI 304 based on the analysis of the monitored user interactions, to increase the probability that the user performs the target action, which is registering to play the game.
• the analysis quantifies the user experience as "Question not answered".
• the GUI is adapted by inserting banner 306, which answers the user question that the game is indeed a popular game, to increase the probability that the user performs the target action of registering to use the game (i.e., conversion).
  • FIG. 4 is a schematic depicting an exemplary architecture of a client-server architecture for dynamic adaption of a GUI according to user interactions with the GUI, in accordance with some embodiments of the present invention.
  • Client terminal 410 communicates with computing device 404 acting as a server over a network 412, as described herein.
  • the GUI is presented on a display of client terminal 410.
  • the user interactions may be monitored by code executing on client terminal 410.
• the classification of the user interactions into the behavior profile is performed by server 404 according to the user interactions provided by client terminal 410 over network 412.
• The server generates instructions for dynamic adaptation of the GUI presented on the display of client terminal 410.
• Architecture 400A denotes a standard network architecture, in which communication is one-way over network 412. For example, based on AJAX, where communication is based on client terminal 410 requests, with server 402 unable to initiate communication. Server 402 is effectively stateless with limited or no memory, effectively dependent on communication from client terminal 410.
  • Architecture 400B denotes a real-time network architecture, for example based on the WebSocket protocol and/or the Distributed Data Protocol (DDP).
• Architecture 400B provides bi-directional communication, where both client terminal 410 and server 402 may communicate with each other at the same time over network 412.
  • Architecture 400B reduces the time required to classify the user interactions into the behavior profile and/or update the GUI, by distributing the classification of the user interaction and/or selection of the adaptation of the object(s) of the GUI between client terminal 410 and server 402.
  • the WebSocket protocol enables interaction between a web client (such as a browser) and a web server with lower overheads, facilitating real-time data transfer from and to the server.
  • a standardized way is provided for the server to send data to the client without being first requested by the client and allowing messages to be passed back and forth between the client and server while keeping the connection open. In this way, a two-way ongoing conversation can take place between the client and the server.
• the communications are done over TCP port number 80 (or 443 in the case of TLS-encrypted connections), which is of benefit for those environments which block non-web Internet connections using a firewall.
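• A browser-side sketch of the bi-directional channel of architecture 400B, assuming a WebSocket endpoint on the server; the URL and message shapes are hypothetical.

```typescript
// Browser-side sketch of the bi-directional channel of architecture 400B,
// assuming a WebSocket endpoint on the server; the URL and message shapes
// are hypothetical.
const socket = new WebSocket("wss://example.com/gui-adaptation");

socket.addEventListener("open", () => {
  // The client pushes monitored interactions as they occur...
  socket.send(JSON.stringify({ type: "interaction", objectId: "signInButton" }));
});

// ...while the server may push adaptation instructions at any time, without
// waiting for a client request (unlike the AJAX-based architecture 400A).
socket.addEventListener("message", (event) => {
  const message = JSON.parse(event.data);
  if (message.type === "adaptation") {
    // Apply the instructed adaptation to the GUI here.
  }
});
```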
• DDP is a client-server protocol for querying and updating a server-side database and for synchronizing such updates among clients. DDP uses the publish-subscribe messaging pattern. It was created for use by the Meteor JavaScript framework.
  • FIG. 5 is a block diagram depicting an exemplary dataflow for adapting a GUI according to user interactions with the GUI, in accordance with some embodiments of the present invention.
  • the dataflow described with reference to FIG. 5 is based on one or more features and/or component described with reference to one or more of FIGs. 1-4.
  • the user session is initiated, for example, the web page is loaded by the web browser, and/or the application is loaded and executed.
  • the GUI is presented on a display, for example, of the client terminal. The user interactions with the GUI during user session 502, as described herein.
  • the GUI may be registered to use the services provided by the computing device for dynamic adaptation, for example, provided via a BackOffice REST API 501.
• basic site information may be stored within a database storing Basic Site info definitions (e.g., URL, site context, target action (e.g., conversion goals)).
  • Select performance pixel code 504 may be integrated into the GUI, optionally the web site, for example, as a plug-in into the web browser, API, SDK, library files, and/or other software interfaces.
  • pixel code 504 lists the session as a subscriber to the GUI adaptation service when the GUI is registered.
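• An illustrative way the pixel code might be integrated into a web page; the script URL is a placeholder.

```typescript
// Illustrative integration of the pixel code into a web page: the page loads
// a script that subscribes the session to the adaptation service. The script
// URL is a placeholder.
const pixel = document.createElement("script");
pixel.src = "https://example.com/select-performance-pixel.js";
pixel.async = true;
pixel.onload = () => {
  // Once loaded, the pixel code would list this session as a subscriber to
  // the GUI adaptation service, as described above.
};
document.head.appendChild(pixel);
```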
  • a script builder server 506 (which may be implemented as code instructions executed by computing device 204 described with reference to FIG. 2) calculates relevant modules based on the site input definitions 508, which are loaded to the client terminal (e.g., client terminal 210 described with reference to FIG. 2) for example using socket.io, and a cache database (DB) representation is initialized (e.g., MiniMongo), for example, based on DDP.
  • the monitored interactions are provided to an analytics server 510 (which may be implemented as code instructions executed by computing device 204 described with reference to FIG. 2), which may analyze the monitored interactions and stores the monitored interactions in a site benchmark info database 512.
  • the monitored interactions stored by site benchmark info 512 are fed into a neural network 514.
• Neural network 514 analyzes and/or classifies the monitored user interactions with the GUI during the user session, as described herein. Neural network 514 may output a probability of the analysis and/or classification, as described herein.
  • the behavior classification optionally an identification of a persona type of the user (as described herein) is stored in a persona identifier database 516.
  • the DDP updates the client cache DB with the classification when required.
  • a response generation/optimization server 518 (which may be implemented as code instructions executed by computing device 204 described with reference to FIG. 2) computes and/or stores possible GUI adaptations as described herein. Adaptations are sometimes referred to herein as responses.
  • the adaptation(s) are selected according to the behavior profile classification (e.g., identified persona type), as described herein.
  • the possible GUI adaptations may be applied to a web site (e.g., based on HTML and/or CSS).
  • the selected adaptations may be loaded as modules to the client terminal.
  • the monitored user interactions are classified.
  • An adaptation is selected and applied to adapt the GUI, optionally when a probability of the classification result is above a threshold, for example, over 80%.
  • results of whether the user performed the target action on the adapted GUI may be stored in a response and effectiveness database 520, which may be used to update and/or train neural network 514, as described herein.
• MongoDB is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. MongoDB is developed by MongoDB Inc., and is published under a combination of the GNU Affero General Public License and the Apache License. Minimongo is a client-side MongoDB implementation which supports basic queries, including some geospatial ones. Code from the Meteor.js minimongo package is used, reworked to support more geospatial queries and made npm+browserify friendly. The code is either IndexedDb backed (IndexedDb), WebSQL backed (WebSQLDb), local storage backed (LocalStorageDb), or in memory only (MemoryDb).
• the term "about" refers to ± 10%.
• the terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to". This term encompasses the terms "consisting of" and "consisting essentially of".
• the term "consisting essentially of" means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
• the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.
• description in a range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.

Abstract

There is provided a method for dynamically updating a graphical user interface (GUI) based on a dynamic behavioral analysis of a user interacting with the GUI during a current session, comprises: presenting a GUI comprising objects on a display of a client terminal, wherein the GUI is associated with target action(s) performed by a user on the GUI, monitoring interactive action(s) performed on the GUI by a user during a current session, analyzing interactive action(s) performed on the GUI during the current session, and creating a dynamically adapted GUI by dynamically adapting object(s) of the GUI according to the analysis, wherein the dynamic adaptation is performed according to a computed increase in probability of the user performing the target action(s) on the dynamically adapted GUI in comparison to a computed probability of the user performing the target action(s) on the GUI prior to the dynamic adaptation.

Description

SYSTEMS AND METHODS FOR DYNAMIC ADAPTATION OF
A GRAPHICAL USER INTERFACE
RELATED APPLICATION
This application claims the benefit of priority from U.S. Provisional Patent Application No. 62/681,109 filed on June 6, 2018, the contents of which are incorporated herein by reference in their entirety.
BACKGROUND
The present invention, in some embodiments thereof, relates to graphical user interfaces (GUIs) and, more specifically, but not exclusively, to systems and methods for dynamic adaptation of a GUI.
GUIs enable presentation of a large amount of data together on a screen, for example, for presenting a user with one of many possible actions. GUIs include multiple elements, some of which are designed for interaction with a user. For example, a user may click on an icon, click on a hyperlink, click within a checkbox to make a selection, and manually enter text within a box. Other elements of the GUI are designed for aesthetic purposes, for example, pictures, videos, images, and sound.
SUMMARY
According to a first aspect, a method for dynamically updating a graphical user interface (GUI) based on a dynamic behavioral analysis of a user interacting with the GUI during a current session, comprises: presenting a GUI comprising a plurality of objects on a display of a client terminal, wherein the GUI is associated with at least one target action performed by a user on the GUI, monitoring at least one interactive action performed on the GUI by a user during a current session, analyzing the at least one interactive action performed on the GUI during the current session, and creating a dynamically adapted GUI by dynamically adapting at least one object of the plurality of objects of the GUI according to the analysis of the at least one interactive action performed on the GUI, wherein the dynamic adaptation is performed according to a computed increase in probability of the user performing the at least one target action on the dynamically adapted GUI in comparison to a computed probability of the user performing the at least one target action on the GUI prior to the dynamic adaptation.
According to a second aspect, a system for dynamically updating a graphical user interface (GUI) based on a dynamic behavioral analysis of a user interacting with the GUI during a current session, comprises: a non-transitory memory having stored thereon a code for execution by at least one hardware processor, the code comprising: code for presenting a GUI comprising a plurality of objects on a display of a client terminal, wherein the GUI is associated with at least one target action performed by a user on the GUI, code for monitoring at least one interactive action performed on the GUI by a user during a current session, code for analyzing the at least one interactive action performed on the GUI during the current session, and code for creating a dynamically adapted GUI by dynamically adapting at least one object of the plurality of objects of the GUI according to the analysis of the at least one interactive action performed on the GUI, wherein the dynamic adaptation is performed according to a computed increase in probability of the user performing the at least one target action on the dynamically adapted GUI in comparison to a computed probability of the user performing the at least one target action on the GUI prior to the dynamic adaptation.
According to a third aspect, a computer program product for dynamically updating a graphical user interface (GUI) based on a dynamic behavioral analysis of a user interacting with the GUI during a current session, comprises: a non-transitory memory having stored thereon a code for execution by at least one hardware processor, the code comprising: instructions for presenting a GUI comprising a plurality of objects on a display of a client terminal, wherein the GUI is associated with at least one target action performed by a user on the GUI, instructions for monitoring at least one interactive action performed on the GUI by a user during a current session, instructions for analyzing the at least one interactive action performed on the GUI during the current session, and instructions for creating a dynamically adapted GUI by dynamically adapting at least one object of the plurality of objects of the GUI according to the analysis of the at least one interactive action performed on the GUI, wherein the dynamic adaptation is performed according to a computed increase in probability of the user performing the at least one target action on the dynamically adapted GUI in comparison to a computed probability of the user performing the at least one target action on the GUI prior to the dynamic adaptation.
In a further implementation form of the first, second, and third aspects, the monitoring, the analyzing, and the creating are iterated to increase the computed probability of the user performing the at least one target action on the current dynamically adapted GUI in comparison to a computed probability of the user performing the at least one target action on a previously adapted GUI.
In a further implementation form of the first, second, and third aspects, the analyzing comprises classifying the at least one interactive action performed on the GUI into one of a plurality of behavior profiles, and dynamically adapting the at least one object of the plurality of objects based on the classified behavior profile.
In a further implementation form of the first, second, and third aspects, the dynamic adaptation of the at least one object of the plurality of objects of the GUI is selected according to the behavior profile, and according to the at least one target action.
In a further implementation form of the first, second, and third aspects, the classification is iteratively performed based on at least one interactive action performed on the GUI obtained during sequential time intervals until a probability of the classification into one of a plurality of behavior profiles is above a threshold.
In a further implementation form of the first, second, and third aspects, the classification is performed based on a mapping of each of the objects of the GUI designed for user interaction to one of the behavior profiles.
In a further implementation form of the first, second, and third aspects, the classification is performed according to a layout of the plurality of objects of the GUI.
In a further implementation form of the first, second, and third aspects, the plurality of behavior profiles are indicative of a current state of a dynamic state of the user, wherein the dynamic state may vary during the current session.
In a further implementation form of the first, second, and third aspects, the plurality of behavior profiles are indicative of different possible personas of the user.
In a further implementation form of the first, second, and third aspects, the plurality of behavior profiles are selected from the group consisting of: a Methodical type denoting a user favoring a GUI presenting logically organized details, a Spontaneous type denoting a user favoring a personalized GUI, a Humanistic type denoting a user favoring a GUI associated with a human touch, and a Competitive type denoting a user favoring a GUI that provides control features to the user.
In a further implementation form of the first, second, and third aspects, the dynamic adaptation of the at least one object of the plurality of objects of the GUI is according to the behavior profiles, comprising: Methodical type: adding additional detailed data objects and/or re-organizing the presentation of the presented objects to increase order, Spontaneous type: personalizing at least one of the presented objects of the GUI, Humanistic type: adapting at least one of the presented objects for human interaction and/or based on human reactions, and Competitive type: adding objects that provide for control over at least one other object in the GUI.
In a further implementation form of the first, second, and third aspects, the classification is performed by at least one classifier trained on a training dataset comprising a plurality of records, each record including a respective label indicative of a psychographic analysis of a respective user and interactions of the respective user with the respective GUI.
In a further implementation form of the first, second, and third aspects, the plurality of behavior profiles are indicative of different states of the process of performing the at least one target action.
In a further implementation form of the first, second, and third aspects, the plurality of behavior profiles are selected from the group consisting of: an Accidental type denoting a user that accessed the GUI without a goal, a Know-Exactly type denoting a user that has a specific purpose in accessing the GUI, a Knows-Approximately type denoting users that know approximately what they want in using the GUI, and a Just-Browsing type denoting users that are in a browsing mode in using the GUI.
In a further implementation form of the first, second, and third aspects, the dynamic adaptation of the at least one object of the plurality of objects of the GUI is according to the behavior profiles, comprising: Accidental type: maintaining the objects without adapting the GUI, Know-Exactly type: presenting a list of specific models of products for selection, Knows-Approximately type: presenting images of general categories of products for selection for further details, and Just-Browsing type: presenting a catalogue of products available for sale.
In a further implementation form of the first, second, and third aspects, the dynamic adaptation of the at least one object of the plurality of objects of the GUI is selected by at least one classifier that receives the classified behavior profile and the at least one target action as input, wherein the at least one classifier is trained on training data that includes a plurality of records, each record storing a certain behavior profile of a plurality of behavior profiles, a certain adaptation of a plurality of possible adaptations, and an indication of whether or not the respective user performed the at least one target action when the dynamic GUI is adapted according to the certain adaptation.
In a further implementation form of the first, second, and third aspects, the GUI comprises at least one of a web page, and an application.
In a further implementation form of the first, second, and third aspects, the at least one target action is selected from the group consisting of: clicking on a certain icon, selecting a certain graphical element, clicking on a certain link, registering as a user, performing a financial transaction, making a purchase, leaving contact details, and watching a video.
In a further implementation form of the first, second, and third aspects, the at least one interactive action performed on the GUI includes at least one member selected from the group consisting of: active actions performed by the user, negative actions performed by the user, and lack of action by the user.
In a further implementation form of the first, second, and third aspects, the at least one interactive action performed on the GUI includes at least one member selected from the group consisting of: clicking on a certain object of the plurality of objects of the GUI, entering data, movement patterns of a cursor across the GUI, physical user interface for interacting with the GUI, user touch patterns on a touchscreen presenting the GUI, gestures, voice activation patterns, adjustment of the GUI relative to the screen, adjustment of volume, selection of muting, selection of disabling of pop-ups, and no movement at all over a time interval.
In a further implementation form of the first, second, and third aspects, the at least one interactive action performed on the GUI is stored in at least one data structure selected from the group consisting of: an image denoting movement of the cursor over the screen over a time interval, a vector denoting locations on the screen where the user touched and/or moved the cursor to, and metadata associated with the plurality of objects of the GUI indicating the actions performed by the user.
In a further implementation form of the first, second, and third aspects, dynamically adapting at least one object of the plurality of objects of the GUI is selected from the group consisting of: adding a layer over at least one existing object, removing at least one object, adding at least one object, changing the color of at least one object, adjusting the position of at least one object within the GUI, changing the size of at least one object, and/or changing the orientation of at least one object.
In a further implementation form of the first, second, and third aspects, the dynamically adapting at least one object of the plurality of objects is performed while maintaining existing content.
In a further implementation form of the first, second, and third aspects, the dynamic adaptation of the at least one object of the plurality of objects of the GUI is selected according to at least one member of the group consisting of: a hardware of a screen on which the GUI is presented, a context of the plurality of objects of the GUI, content available within the boundaries of the GUI, tolerance of each object for being adapted, and graphical compatibility with the current GUI.
Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
In the drawings:
FIG. 1 is a flowchart of a process for automatically dynamically updating a graphical user interface based on a dynamic behavioral analysis of a user, in accordance with some embodiments of the present invention;
FIG. 2 is a block diagram of components of a system for automatically dynamically updating a graphical user interface based on a dynamic behavioral analysis of a user, in accordance with some embodiments of the present invention;
FIG. 3A is a schematic of a GUI of a gaming web site, prior to dynamic adaptation, in accordance with some embodiments of the present invention;
FIG. 3B is a schematic of a dynamically adapted GUI based on dynamic adaptation of the GUI of FIG. 3A according to the analysis of monitored user interactions, in accordance with some embodiments of the present invention;
FIG. 4 is a schematic depicting an exemplary architecture of a client-server architecture for dynamic adaptation of a GUI according to user interactions with the GUI, in accordance with some embodiments of the present invention; and
FIG. 5 is a block diagram depicting an exemplary dataflow for adapting a GUI according to user interactions with the GUI, in accordance with some embodiments of the present invention.
DETAILED DESCRIPTION
The present invention, in some embodiments thereof, relates to graphical user interfaces (GUIs) and, more specifically, but not exclusively, to systems and methods for dynamic adaptation of a GUI.
An aspect of some embodiments of the present invention relates to systems, an apparatus, methods, and/or code instructions (stored in a data storage device, executable by one or more hardware processors) for dynamically updating a GUI based on a dynamic behavioral analysis of interactive actions performed on a GUI by a user during a current session. The GUI, for example, a web page and/or a screen of an application, is presented on a display of a client terminal. The GUI includes multiple objects (also referred to herein as GUI elements, or elements), for example, icons, graphical elements, text entry boxes, and multi-media data objects. The GUI is associated with one or more target actions for performance by the user, for example, clicking on an icon, clicking on a hyperlink, and/or selecting an item. One or more interactive actions performed on the GUI by a user during the current session are monitored. The interactive actions performed on the GUI are indicative of the behavior of the user interacting with the GUI during the current session. Interactive actions include, for example, motion of a cursor across the GUI, clicking and/or selection of objects of the GUI, and patterns of contact of the user's finger on a touchscreen. The interactive action(s) performed on the GUI are monitored in real-time, during the current session when the GUI is presented on the display of the client terminal. Prior interactive actions performed by the same user on different GUIs, and/or prior interactive actions performed by the same user on the same GUI during a different previous session (i.e., which has been interrupted by a time interval and/or by the user visiting other GUIs) are not necessarily considered. The monitored interactive action(s) performed on the GUI is analyzed in real time, during the current session. A dynamically adapted GUI is created by adapting one or more objects of the GUI according to the analysis. The adaptation occurs in real-time as the user is interacting with the GUI. The dynamic adaptation of the GUI is performed according to a computed (e.g., predicted) increase in probability of the user performing the target action on the dynamically adapted GUI, in comparison to a computed probability of the user performing the target action on the GUI (i.e., prior to the adaptation, or an earlier version of the adapted GUI prior to the current adaptation).
The GUI may be iteratively dynamically adapted multiple times in real time, as the user interacts with the GUI. Each adaptation is designed to increase the probability of the user performing the target action on the current version of the adapted GUI in comparison to the probability of the user performing the action on the previous version of the adapted GUI.
Optionally, the monitored interactive action(s) performed on the GUI is classified into one of multiple behavior profiles, for example, 3, 4, 6, or other number of behavior profiles. The GUI is adapted according to the classified behavior profile of the user. The behavior profiles may indicate, for example, a current mood of the user, where the current mood of the user may vary during the same session of interacting with the GUI. In such cases, the user may switch moods during the GUI interaction session. The GUI is dynamically adapted accordingly. In another example, the behavior profiles are based on different personas of users. The persona of the user is expected to remain static during the current session. It is noted that the dynamic adaptation of the GUI is not based on the programmed features of the objects themselves; for example, when the user presses a play video button and the video plays on the GUI, the playing of the video is due to the video playing object being activated by the user. The dynamic adaptation of the GUI described herein is independent of the features programmed into the objects themselves. For example, when the user presses a play video button and the video plays on the GUI, the adaptation of the GUI may include presenting a layer with certain text over a certain icon. The layer with certain text placed over the certain icon is based on the analysis of the interactive action(s) performed on the GUI, which may include the user pressing the play video button, optionally along with other user interactions. The selection of the adaptation of adding the layer is performed based on the analysis, optionally according to the classified behavior profile. The selection of the adaptation of adding the layer is performed to increase the probability of the user performing a target action, for example, filling out a form. The adaptation is selected and implemented by the code, which is independent of the features programmed into the object (i.e., the play video button), since the play video button is designed to play the video and not to add a layer to a certain icon.
At least some implementations of systems, methods, apparatus, and/or code instructions described herein relate to the technical problem of designing a GUI for increasing the probability of a user performing a target action.
At least some implementations of systems, methods, apparatus, and/or code instructions described herein improve the technology of GUIs.
The improvement, at least in some implementations, relates to the process of designing GUIs for increasing the probability of a user performing a target action. At least some implementations of systems, methods, apparatus, and/or code instructions described herein dynamically adapt the GUI, in real time according to the behavior exhibited by the user in interacting with the GUI in the current session (e.g., starting from when the GUI is presented on the display, excluding interruptions such as closing of the GUI), i.e., interactive action(s) performed on the GUI. In contrast, for example, to designing a single GUI for different users (i.e., irrespective of the users) and/or selecting a GUI according to a user profile created based on defined user parameters (e.g., age, geographic location, gender, income, topics of interest), and/or according to previously observed interactive action(s) performed on the GUI in previous sessions accessing the GUI and/or previously observed interactive action(s) performed on the GUI in accessing other GUIs.
The improvement, at least in some implementations, adapts the GUI according to the interactive action(s) performed on the GUI during the current session, and/or the interactive action(s) performed on the current version of the adapted GUI, which captures dynamic behavior changes of the user (i.e., dynamic interactive action(s) performed on the GUI), arising, for example, from mood changes and/or adaptive behavior of the user. For example, the same user may start off accessing the GUI in a hesitant manner, not knowing what he/she is looking for.
The GUI is adapted according to the current behavior, i.e., the current interactive action(s) performed on the GUI. In response to the adapted GUI, the user's manner may change, becoming more specific as the user realizes what he/she is looking for. The GUI is re-adapted according to the new behavior, i.e., the new interactive action(s) performed on the GUI. In another example, the user may be accessing the GUI with a spontaneous focus. The GUI may be adapted to help the user quickly narrow down choices and quickly make a selection. During another session of accessing the GUI (e.g., a week later), the same user may access the GUI with a humanistic focus. The GUI may be adapted to provide more human dimensions to the GUI, for example, presenting a chat session with an administrator associated with the GUI, and/or presenting a video of the people associated with the GUI.
At least some implementations of systems, methods, apparatus, and/or code instructions described herein improve the computing device hosting and/or presenting the GUI, for example, in terms of relatively reduced data storage requirements, and/or relatively reduced processing requirements, and/or relatively reduced network utilization in transmitting data from a server storing the GUI to the client terminal presenting the GUI. The improvement in performance may be obtained, for example, by the dynamic adaptation of the GUI, which adapts one or several objects of the GUI while leaving the remaining objects intact. The adaptation of one or several objects requires less storage space, fewer processing resources to compute, and lower bandwidth to transmit, for example, in comparison to selecting a certain GUI from multiple available full GUIs. In another example, the improvement in performance is based on classifying the interactive action(s) performed on the GUI into one of multiple behavioral profiles, which may include a small set of profiles, for example, about 3-6 or other number of profiles. Once the interactive action(s) performed on the GUI is classified into the profile, the GUI is adjusted according to the profile.
The amount of data storage space, processing resources, and/or network bandwidth required to classify the interactive action(s) performed on the GUI into one of the profiles and then adjust the GUI according to the profile may be smaller in comparison to adapting the GUI according to the interactive action(s) performed on the GUI. For example, since the number of possible ways in which the user may interact with the GUI may be very large (e.g., arising from the different possible combinations of ways of interacting), mapping each combination to a GUI adjustment may require significantly more memory and/or processing resources and/or bandwidth in comparison to mapping interactions to a small number of behavior profiles, and then mapping the small number of behavior profiles to GUI adaptations.
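The two-stage mapping argued above, where many interaction combinations collapse into a small set of behavior profiles and only the small profile set is mapped to adaptations, can be sketched as follows; the tables and names are illustrative assumptions, not disclosed mappings:

```python
# Sketch of the two-stage mapping (hypothetical tables): many interaction
# combinations collapse into a small set of behavior profiles, and only the
# small profile set is mapped to GUI adaptations.

INTERACTION_TO_PROFILE = {
    "click_details_link": "methodical",
    "watch_howto_video": "methodical",
    "open_chat": "humanistic",
    "resize_gui": "competitive",
    "skip_details": "spontaneous",
}

PROFILE_TO_ADAPTATION = {
    "methodical": {"op": "add_object", "content": "technical_specs"},
    "humanistic": {"op": "add_object", "content": "chat_window"},
    "competitive": {"op": "add_object", "content": "control_panel"},
    "spontaneous": {"op": "personalize", "content": "greeting"},
}

def select_adaptation(interactions):
    # Stage 1: collapse the interactions into one of the small set of
    # profiles (a trivial majority vote stands in for the classifier).
    votes = {}
    for interaction in interactions:
        profile = INTERACTION_TO_PROFILE.get(interaction)
        if profile:
            votes[profile] = votes.get(profile, 0) + 1
    profile = max(votes, key=votes.get)
    # Stage 2: map the single profile to an adaptation.
    return PROFILE_TO_ADAPTATION[profile]

print(select_adaptation(["click_details_link", "watch_howto_video"]))
```

Storing two small tables of this kind, rather than a table keyed by every possible interaction combination, is what yields the reduced memory, processing, and bandwidth described above.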
At least some implementations of systems, methods, apparatus, and/or code instructions described herein improve the display of a client terminal presenting the GUI. The improvement may be based on improving the efficient usage of the limited space available on the display presenting the GUI, for the user to perform the target action. Such efficient use of space may be especially significant for mobile devices in which the available screen space is relatively small. For example, the GUI is dynamically adapted according to the interactive action(s) performed on the GUI by adding a layer over existing objects, selectively displaying additional content according to the interactive action(s) performed on the GUI, changing the color of the object, adjusting the position of the object within the GUI, and/or changing the size of the object, to increase the probability of the user performing the target action. The dynamic adaptations may be performed with minimal effect on the usage of the screen space, and/or minimal impact on the existing GUI, for example, avoiding clutter of the screen.
The classification of the monitored interactive action(s) performed on the GUI into one of multiple behavior profiles may improve computational performance of the computing device performing the classification. For example, the process of classifying the monitored interactive action(s) performed on the GUI into one of four behavior profiles may be performed more quickly, with fewer processing resources, and/or with lower memory requirements than, for example, classifying the monitored interactive action(s) performed on the GUI into one of a large number of possible adaptations. Since the number of possible monitored interactive action(s) may be very large (e.g., a large number of possible combinations to interact with the GUI), classifying one out of a large number of possible combinations into one of a small set of possibilities may be computationally more efficient than classifying one out of a large number of possible combinations into another one of a large number of combinations.
Moreover, the accuracy of predicting the adaptation of GUI object(s) that will increase the probability of the user performing the target action may be increased based on the classification of the interactive action(s) performed on the GUI into one of a small number of behavior profiles in comparison to classifying the user interactive action(s) performed on the GUI into one of many possible adaptations.
For example, different users that display similar behavior in terms of similar interactive action(s) performed on the GUI (which are grouped into the same behavior profile), even when performing what appears to be different interactions, may respond similarly to similar adaptations of the GUI. In contrast, when interactions that would otherwise be grouped into the same behavior profile are instead mapped directly to different adaptations of the GUI, the users may not respond to those adaptations. For example, two people with the same personality type may appear to interact differently with the GUI, but may respond similarly to the same GUI adaptation when classified into the same behavior profile. For example, one person clicking on a link for more information on a certain product, and another person watching a video showing how to build the product, may be classified into the same behavior profile. The two people may be more likely to buy the product when the GUI is adapted to highlight the technical specifications of the product. In contrast, the same two people may respond differently to different GUI adaptations that are created based on the different interaction types, since the GUI adaptations may not accurately reflect the intent of the respective user. For example, the first person clicking on the link may be presented with more links for different data unrelated to details of the product, and the second person watching the video may be presented with additional videos unrelated to the product, in which case the two people may not be more likely to purchase the product.
However, it should be noted that it may be possible to design and/or train the classifier to directly map user interactions to the GUI adaptation, optionally at the cost of increased complexity and/or lower accuracy in comparison to classifying into the behavior profiles.
Even in the case of classifying the user interactions directly into the GUI adaptation (i.e., without the intermediate process of classifying into the behavior profile), the classification more accurately reflects the real time user interactive action(s) performed on the GUI, which may change from session to session or during the session itself, in comparison to, for example, determining GUI adaptations based on past user interactive action(s) performed on the GUI and/or a static user profile. For example, the same user may behave differently during different sessions, resulting in different GUI adaptations with the same goal. Such a user may not respond to a static GUI adaptation based on the past user history and/or user profile.
Improvement in performance of the computing device may be obtained, for example, by setting the accuracy of classifying the user interactions into the behavior profiles at a relatively low probability, which is sufficient for adapting the GUI to statistically increase the probability of the user performing the target action. For example, the threshold for classifying the user interactions into the behavior profile may be, for example, about 60%, or about 70%, or about 80%, or other values. When the classification of the user interactions into the behavior profiles is performed iteratively as the user continues to interact with the GUI, the relatively low threshold may be sufficiently accurate; for example, when the prediction is correct, the adapted GUI may remain static when additional user interactions are classified into the same behavior profile, effectively increasing the probability that the classification is correct. When the initial classification is incorrect, the adapted GUI is re-adapted when additional user interactions are classified into a different behavior profile, effectively correcting the initial error in classification. Each relatively inaccurate classification may be performed with relatively fewer computational resources (e.g., processor utilization) and/or relatively fewer data storage requirements, since the iterations provide a correction mechanism for incorrect classifications.
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
As used herein, the term behavioral analysis refers to the analysis of the interactive action(s) performed on the GUI by the user.
Reference is now made to FIG. 1, which is a flowchart of a process for automatically dynamically updating a graphical user interface based on a dynamic behavioral analysis of a user, in accordance with some embodiments of the present invention. Reference is also made to FIG. 2, which is a block diagram of components of a system 200 for automatically dynamically updating a graphical user interface based on a dynamic behavioral analysis of a user, in accordance with some embodiments of the present invention. System 200 may implement the acts of the method described with reference to FIG. 1, by processor(s) 202 of a computing device 204 executing code instructions stored in a memory 206 (also referred to as a program store). Computing device 204 may be implemented as, for example, a client terminal, a server, a virtual server, a computing cloud, a virtual machine, a desktop computer, a thin client, and/or a mobile device (e.g., a Smartphone, a Tablet computer, a laptop computer, a wearable computer, glasses computer, and a watch computer).
Multiple architectures of system 200 based on computing device 204 may be implemented. For example:
* Computing device 204 may be implemented as a standalone device (e.g., kiosk, client terminal, smartphone) that includes locally stored code instructions 206A that implement one or more of the acts described with reference to FIG. 1. The locally stored instructions may be obtained from another server, for example, by downloading the code over the network, and/or loading the code from a portable storage device. Once code instructions 206A are stored on computing device 204, computing device 204 may dynamically adapt a locally stored GUI 208A (e.g., stored in a data storage device 208) without necessarily communicating with an external server.
For example, GUI 208A is part of an application (e.g., game) downloaded from a server and locally executed by computing device 204. The GUI is presented on a display (e.g., physical user interface 214) of computing device 204 and dynamically updated according to user interactions with the GUI performed by the user manipulating one or more physical user interfaces 214 (e.g., user moving a cursor controlled by a mouse connected to computing device 204 and/or contacting a touchscreen of computing device 204).
* Computing device 204 executing stored code instructions 206A, may be implemented as one or more servers (e.g., network server, web server, a computing cloud, a virtual server) that host GUI 208A, which is remotely accessed by one or more client terminals 210 over a network 212. For example, client terminal 210 uses a locally stored web browser application 210A to access a web page of a web site hosted by computing device 204, where the web page includes GUI 208A stored by computing device 204. The web page (i.e., GUI) is presented on a display of the client terminal and dynamically updated according to user interactions with the web page (e.g., user moving a cursor controlled by a mouse connected to the client terminal and/or contacting a touchscreen of the client terminal). The GUI 208A may be locally updated by web browser 210A (and/or other code locally stored on client terminal 210) based on instructions received from computing device 204, for example, via a plug-in installed in web browser 210A, and/or other software interfaces (e.g., application programming interface (API), and/or software development kit (SDK)).
* Computing device 204 executing stored code instructions 206A, may be implemented as one or more servers (e.g., network server, web server, a computing cloud, a virtual server) that provide services (e.g., one or more of the acts described with reference to FIG. 1) to one or more servers 216 over network 212. Server(s) 216 may be web servers hosting web sites 216A that are accessed by client terminal(s) 210 over network 212. Computing device 204 may provide, for example, software as a service (SaaS) to the server(s) 216, provide software services to server(s) 216 via a software interface (e.g., API, SDK), and/or provide functions using a remote access session to servers 216. In such an implementation, computing device 204 provides servers 216 hosting web pages with dynamic adaptation of their respective web pages in response to interactive action(s) performed on the GUI by users of client terminals 210 interacting with the web pages over network 212.
* Computing device 204 may act as a server that provides, for example, an application for local download to the client terminal(s) 210 for local adaptation of GUIs executed on the respective client terminal(s) 210, and/or an add-on to a web browser running on client terminal(s) 210 for local adaptation of web sites hosted by other web servers.
Hardware processor(s) 202 of computing device 204 may be implemented, for example, as a central processing unit(s) (CPU), a graphics processing unit(s) (GPU), field programmable gate array(s) (FPGA), digital signal processor(s) (DSP), and application specific integrated circuit(s) (ASIC). Processor(s) 202 may include a single processor, or multiple processors (homogenous or heterogeneous) arranged for parallel processing, as clusters and/or as one or more multi core processing devices.
Memory 206 stores code instructions executable by hardware processor(s) 202, for example, a random access memory (RAM), read-only memory (ROM), and/or a storage device, for example, non-volatile memory, magnetic media, semiconductor memory devices, hard drive, removable storage, and optical media (e.g., DVD, CD-ROM). Memory 206 stores code 206A that implements one or more features and/or acts of the method described with reference to FIG. 1 when executed by hardware processor(s) 202.
Computing device 204 may include data storage device 208 for storing data, for example, storing one or more GUIs 208A that are adapted as described herein, and/or one or more classifier(s) 208B that are used in the process of adapting the GUI, as described herein. Data storage device 208 may be implemented as, for example, a memory, a local hard-drive, virtual storage, a removable storage unit, an optical disk, a storage device, and/or as a remote server and/or computing cloud (e.g., accessed using a network connection).
Network 212 may be implemented as, for example, the internet, a local area network, a virtual network, a wireless network, a cellular network, a local bus, a point to point link (e.g., wired), and/or combinations of the aforementioned. Computing device 204 may include a network interface 218 for connecting to network 212, for example, one or more of, a network interface card, a wireless interface to connect to a wireless network, a physical interface for connecting to a cable for network connectivity, a virtual interface implemented in software, network communication software providing higher layers of network connectivity, and/or other implementations.
Computing device 204 and/or client terminal(s) 210 include and/or are in communication with one or more physical user interfaces 214 that include a mechanism for a user to interact with the GUI and/or view the GUI. Exemplary physical user interfaces 214 include, for example, one or more of, a touchscreen, a display, gesture activation devices, a keyboard, a mouse, and voice activated software using speakers and microphone.
Client terminal(s) 210 may be implemented as, for example, a desktop computer, a server, a virtual machine, and/or a mobile device. Exemplary mobile devices include, for example, a Smartphone, a Tablet computer, a laptop computer, and a wearable computer (e.g., smart glasses, smart watches, and other smart wearables).
Referring now back to FIG. 1, at 102, a GUI including multiple data objects is presented on the screen of the client terminal, for example, text boxes presenting text, multi-media boxes presenting images, pictures and/or videos, layers added over existing boxes, and/or data entry boxes designed for manual entry of data by a user (e.g., text entry, menu for selection of items, and/or check boxes). The GUI and/or each box thereof may be formatted, for example, using certain fonts, certain font sizes, certain colors, certain shapes, layers, certain sizes, and/or certain arrangement on the screen.
The GUI may be, for example, a web page, and/or a screen of an application (e.g., game, banking application, online store purchase assistant application, medical record application, social media application). The GUI may be locally rendered by code executing on the client terminal based on data received from a server (e.g., a web browser rendering a script and/or code provided by a web server) and/or the computing device, may be rendered by the server and/or computing device, may be stored on the server and/or computing device, and/or may be locally stored on the client terminal.
The presentation of the GUI (e.g., loading of the GUI and displaying the GUI on the screen) may denote the start of the current session.
The GUI is associated with one or more target actions for the user to perform via the GUI, for example, clicking on a certain icon and/or graphical element (e.g., ad, link to another web site), registering as a user (e.g., to use a service provided by the web site), performing a financial transaction, making a purchase (e.g., purchasing a good and/or service offered by an online merchant operating the web page), watching a video, and leaving contact details (e.g., to be contacted in the future by a representative and/or agreeing to receive emails and/or ads).
The target action may be, for example, manually defined by an administrator, and/or automatically defined by code analyzing the GUI (e.g., a GUI is automatically analyzed and determined to be an online store, and the target action is automatically determined to be a purchase of a product of the online store). The target action may be stored, for example, as metadata associated with the GUI, and/or stored in association with the code that analyzes the user interactions with the GUI.
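Where the target action is stored as metadata associated with the GUI, it might take a form along the lines of the following minimal sketch; all field names are illustrative assumptions, not disclosed structures:

```python
# A minimal sketch, assuming the target action is stored as dict-based
# metadata associated with the GUI; all field names are illustrative.
gui_metadata = {
    "gui_id": "online_store_landing_page",
    "target_actions": ["purchase_product", "leave_contact_details"],
    "defined_by": "administrator",  # or "automatic_analysis"
}
```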
The GUI may be designed using standard methods, for example, a single GUI for all visitors to the web site and/or all users of the application. Alternatively, the GUI may be selected from a set of GUIs and/or initially designed according to a stored user profile of the user accessing the web site and/or application. The stored user profile may denote substantially static parameters of the user, for example, the user's geographic location, gender, income, and interests. It is noted that the initial GUI selected and/or generated according to the user profile is then dynamically adapted according to the interactive action(s) performed on the GUI by the user interacting with the GUI during the current session. Alternatively or additionally, the GUI may be selected based on a prior classification of the user interactions into the behavior profile, during a prior session. For example, the presented GUI may be the adapted GUI that was presented when the user (or other users) performed the target action during the previous session. Such selection of a previously adapted GUI and/or an initial GUI based on a prior classification of the user (or other users) may increase the efficiency of computation of the current session in adapting the GUI, for example, fewer and/or simpler adaptations may be required to increase the probability of the user performing the target action.
The GUI may be selected according to the screen on which it is presented, for example, a different version of the GUI for a desktop than a mobile device.
At 104, interactive action(s) performed on the GUI by the user is monitored. The interactive action(s) performed on the GUI are indicative of the behavior of the user interacting with the GUI. Monitoring may be performed locally at the client terminal (e.g., by code executing locally at the client terminal, which may communicate with the computing device), locally at the server (e.g., web server), and/or by the computing device. For example, the interactions of the user with the GUI may be locally monitored by a plug-in of a web browser that displays the GUI (e.g., web site) and/or by code locally installed and executed on the client terminal that monitors the user interactions with the GUI.
The monitoring may be performed dynamically in real time. The monitoring may include what the user does, including active actions and/or positive actions, for example, selection of icons, and/or clicking on links. The monitoring may include negative actions of the user, for example, avoidance of certain icons, and/or closing of objects, for example, clicking on an ad to close the ad, and/or clicking on a playing video to close the video. The monitoring may include lack of action of the user, for example, hesitation whether to perform a selection or not, or lack of activity.
The monitoring may be performed per event (e.g., per user click on an icon, and/or per user manual data entry, and/or per movement of the cursor on the screen). The monitoring may be performed per action, where each detected action is one of several possible actions (e.g., each icon is associated with a different action, and the action is determined according to which icon was clicked on). Alternatively or additionally, the monitoring is performed over a time interval, which may be absolute and/or relative, for example, every about 5 seconds, every about 10 seconds, every about 15 seconds, until the first selection (e.g., mouse click), until the web page fully loads (e.g., until each multimedia object is loaded), until a video completes playing.
Exemplary interactive action(s) performed on the GUI by the user interacting with the GUI include one or more of: clicking on one or more of the objects of the GUI (e.g., icon, menu, link), entering data (e.g., entering a username into a field), movement patterns of a cursor across the GUI (e.g., hovering over an object, direct movement to an object, random movement across the screen, navigation between different locations on the GUI), the physical user interface used for interacting with the GUI (e.g., does the user use a mouse, voice activation, a touch screen, a keyboard, or a combination of the aforementioned), user touch patterns on a touchscreen presenting the GUI (e.g., direct contact on a location, touching in patterns), gestures, voice activation patterns, adjustment of the GUI relative to the screen (e.g., zoom in on certain areas of the GUI, scrolling along the GUI when the GUI is too large to fit on the screen at once, setting the size of the GUI to match the size of the screen so that the entire GUI is visible), adjustment of volume, selection of muting, selection of disabling of pop-ups, and no movement at all over a time interval (e.g., user is reading the GUI, watching a multimedia presentation on the GUI).
The monitored interactive action(s) performed on the GUI may include a contextual recognition of the objects of the GUI that the user interacted with, for example, text and/or images appearing on the object (e.g., button stating “Sign-in”, icon showing a thumbs up), and/or purpose of the object (e.g., registration button, purchase button), and/or type of media associated with the object (e.g., video, image, link to another web site, text).
The monitored interactive action(s) performed on the GUI by the user interacting with the GUI may be stored, for example, as one or more of the following data structures: an image denoting movement of the cursor over the screen over a time interval (e.g., lines denoting paths taken by the cursor during the time interval), a vector denoting locations on the screen where the user touched and/or moved the cursor to (e.g., an array of pixel coordinates), and/or metadata associated with the objects of the GUI indicating the actions performed by the user (e.g., a click on a hyperlink, hovering over an icon, data entry into a field).
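As an illustration of the listed data structures, the monitored interactions might be accumulated along the following lines; this is a sketch, the class and field names are assumptions, and the image-based encoding of cursor paths is omitted:

```python
# A sketch of the listed data structures (class and field names are
# assumptions; the image-based encoding of cursor paths is omitted).
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class InteractionRecord:
    # Vector of screen locations touched / moved to (pixel coordinates).
    cursor_path: List[Tuple[int, int]] = field(default_factory=list)
    # Per-object metadata: object id -> actions performed on it.
    object_events: Dict[str, List[str]] = field(default_factory=dict)

    def log_cursor(self, x: int, y: int) -> None:
        self.cursor_path.append((x, y))

    def log_event(self, object_id: str, action: str) -> None:
        self.object_events.setdefault(object_id, []).append(action)

record = InteractionRecord()
record.log_cursor(120, 340)
record.log_event("buy_button", "hover")
```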
At 106, the monitored interactive action(s) performed on the GUI is analyzed. The analysis may be performed according to the data structure denoting the user interactions with the GUI.
Optionally, the analysis is performed by classifying the monitored interactive action(s) performed on the GUI into one of multiple behavior profiles. One or more objects of the GUI are adapted according to the classified behavior profiles, as described herein. Alternatively or additionally, the analysis is performed by classifying the monitored interactive action(s) performed on the GUI into one or more object adaptations of the GUI.
Conceptually, the analysis attempts to identify a real time objective and/or question of the user accessing the GUI, to answer the question, “Why is this user browsing the site?”
The classification is performed by one or more classifiers. Exemplary classifiers include: Multiple Instance Learning (MIL) based methods, one or more neural networks which may include an individual neural network and/or an architecture of multiple neural networks (e.g., convolutional neural network (CNN), fully connected neural network), deep learning based methods, support vector machine (SVM), logistic regression, k-nearest neighbor, decision trees, and a mapping function.
The classification may be performed based on a single set of monitored user interactive action(s) performed on the GUI, for example, a data structure denoting a certain target interaction performed by the user is mapped into a corresponding behavior profile. Different target interactions are mapped to different corresponding behavior profiles. In another example, a neural network that receives as input an image including line(s) denoting movement of the cursor across the GUI, and optionally receives the GUI as input, classifies the movement image and optionally the GUI into one of the behavior profiles. Alternatively or additionally, the classification is iteratively performed using sequentially acquired data indicative of user interactive action(s) performed on the GUI over sequential time intervals until a threshold is met, for example, a probability threshold. For example, during a first time interval, the interaction data is mapped into a first behavior profile with a probability of 20%. During the second time interval, the interaction data is mapped into a second profile with a probability of 40%. During the third time interval, the interaction data is mapped into the first profile with a probability of 85%, which is above the classification threshold of 80%. Therefore, the interactive action(s) performed on the GUI is classified into the first profile. During each classification, the most recent interaction data may be classified, and/or the cumulative interaction data from the start (or using a sliding window of several time intervals) may be classified.
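The threshold-gated iterative classification just described can be sketched as follows; classify_interval stands in for any of the exemplary classifiers, and the fake probability sequence reproduces the 20%/40%/85% example above:

```python
# Sketch of the threshold-gated iterative classification; classify_interval
# is a stand-in for any of the listed classifiers (neural network, SVM, etc.).

THRESHOLD = 0.80

def classify_until_threshold(interval_stream, classify_interval):
    for interval_data in interval_stream:
        profile, probability = classify_interval(interval_data)
        if probability >= THRESHOLD:
            return profile  # confident enough to adapt the GUI
    return None  # no confident classification yet; keep monitoring

# Reproducing the example in the text: 20%, then 40%, then 85% (>= 80%).
outputs = iter([("profile_1", 0.20), ("profile_2", 0.40), ("profile_1", 0.85)])
print(classify_until_threshold(range(3), lambda _: next(outputs)))  # profile_1
```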
Alternatively or additionally, the classification may be performed by mapping each of the objects of the GUI designed for user interaction to one of the behavior profiles. One of the behavior profiles is selected according to which one of the objects of the GUI the user interacts with. Alternatively or additionally, each interaction is associated with a certain probability of the corresponding behavior profile. For example, each interaction increases the probability of the corresponding profile by 2%. The behavior profile is selected when the user has interacted sufficiently with one or more objects to reach or exceed a probability threshold (e.g., 70%, 80%, or other value). For example, multiple interactions with different objects increase the probabilities of the corresponding profiles accordingly, until the probability of one of the profiles reaches or exceeds the threshold.
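The per-object probability accumulation described above may be sketched as follows; the object-to-profile table follows the examples in the text (2% increment, 70% threshold), and all names are illustrative:

```python
# Sketch of the accumulation scheme above; scores are kept in integer percent
# to mirror the 2%-per-interaction example. All names are illustrative.
OBJECT_TO_PROFILE = {"specs_link": "methodical", "chat_button": "humanistic"}
INCREMENT_PCT, THRESHOLD_PCT = 2, 70

def accumulate(events, scores=None):
    scores = scores if scores is not None else {}
    for object_id in events:
        profile = OBJECT_TO_PROFILE.get(object_id)
        if profile is None:
            continue  # interaction not mapped to any profile
        scores[profile] = scores.get(profile, 0) + INCREMENT_PCT
        if scores[profile] >= THRESHOLD_PCT:
            return profile, scores  # threshold reached; profile selected
    return None, scores  # keep monitoring

profile, _ = accumulate(["specs_link"] * 35)  # 35 interactions * 2% = 70%
print(profile)  # methodical
```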
Alternatively or additionally, the classification may be performed according to the user interactions based on a layout of the objects of the GUI. For example, the analysis may include determining the distance that the user moved the cursor along the screen to make a selection from a current position, and/or the number of searches and/or clicks the user performed to reach a target web page. In another example, the layout of the GUI is fed into the classifier for analysis in association with the interaction data, for example, the image of the GUI is fed into a convolutional neural network.
Optionally, the behavior profiles denote a current value of a dynamic state of the user, for example, the mood of the user. The dynamic state of the user may vary from session to session, and/or may vary during the session itself.
Alternatively or additionally, the behavior profiles are indicative of different possible personas of a user. Conceptually, the interactions of each user with the GUI reflect the persona of the user. Each user is associated with one personality, which may remain static throughout the session and/or may dynamically change throughout the session. The persona is determined from the analysis of the interaction of the user with the GUI (i.e., interactive action(s) performed on the GUI), which is indicative of behavior of the user, rather than being based on static user data which may be manually entered by the user, for example, past user use of the web page and/or a user profile. Such past use of the web page and/or user profile may not capture the persona of the user in a manner suitable for adaptation of the GUI to increase the probability of the user performing the target action. One set of exemplary behavior profiles is listed below. The exemplary interactions that may be indicative of the respective persona type are not necessarily limiting. As discussed herein, user interactions may not “fit” into persona types based on human logic, but such “human illogical” associations may be found by the classifier, for example, by a neural network:
* Methodical type denoting a user favoring a GUI presenting logically organized details. The interactions classified into the methodical type may include, for example, clicking on an icon indicative of details.
* Spontaneous type denoting a user favoring a personalized GUI. The interactions classified into the spontaneous type may include, for example, avoidance of objects that provide additional details.
* Humanistic type denoting a user favoring a GUI associated with a human touch. The interactions classified into the humanistic type may include, for example, clicking a sign-in icon for a social media site, and/or posting a comment.
* Competitive type denoting a user favoring a GUI that provides control features to the user. The interactions classified into the competitive type may include, for example, the user adjusting the GUI and/or adjusting one or more objects of the GUI (e.g., moving objects, resizing the GUI, closing objects, turning videos off/on, and adjusting color and/or sound).
The above described exemplary behavior profiles may be based on a psychographic analysis of classification categories of possible users (e.g., customers) accessing the GUI. The behavior profiles are indicative of behavior of users that are classified into the customer categories, which may be determined based on a psychographic analysis. The interactions classified into each type may not be obvious and/or necessarily make logical sense. The classifier may be trained based on a training dataset that includes multiple records, each record including a respective label indicative of a psychographic analysis of a respective user (e.g., manually determined by an expert in psychographics, and/or based on each user filling out a questionnaire (e.g., a validated tool) that classifies each user according to their answers) and the interactions of the respective user with the respective GUI (e.g., based on the data structure storing the interactions). In such cases, for example, when training a neural network, individual interactions may not necessarily map to behavior categories (e.g., psychographic categories) based on human recognized logic; however, the set of interactions may be learned by the classifier for classifying the interactions into the behavior profiles. The following may represent criteria for classifying users into behavior categories for training the classifier, and then used by the trained classifier for classifying user interactions into the psychographic categories.

* Methodical: Methodical types feel a need to be prepared and organized to act. For them, task completion is its own reward. These individuals appreciate facts, hard data, and information presented in a logical manner as documentation of truth. They enjoy organization and completion of detailed tasks. They do not appreciate the “personal touch,” and they abhor disorganization. They fear negative surprises and irresponsibility above all. Those who are Methodical have a strong internal frame of reference. They prefer to think and speak about details and specifics. They compare everything to a standard ideal and look for mismatches (what’s wrong or what’s missing).
* Spontaneous: Spontaneous types feel a need to live in the moment. Their sensing preference makes them most grounded in the immediate world of the senses. This, coupled with their perceiving preference, helps them to remain poised and present in any situation. They are available, flexible, and engaged in a personal quest for action and impact, which defines who they are. For the Spontaneous, integrity means the unity of impulse with action. These individuals appreciate the personalized touch and are in search of new and exciting experiences. They dislike dealing with traditional details and are usually quick to reach a decision. They fear “missing out” on whatever life has to offer.
* Humanistic: Humanistic types have a tendency to put others’ needs before their own and are often uncomfortable accepting gifts or allowing others to do anything for them. They are very creative and entertaining. They enjoy helping others and highly value the quality of relationships. They are usually slow to reach a decision. They fear separation. Those who are Humanistic are good listeners and are generally willing to lend a sympathetic ear. They focus on acceptance, freedom, and helping. They generally prefer the big picture. They greatly value human development, including their own.
* Competitive: Competitive types seek competence in themselves and others. They want to understand and control life. Driven by curiosity, a Competitive is often preoccupied with learning and has a deep appreciation for challenges. They enjoy being in control, are goal-oriented, and are looking for methods for completing tasks. Once their vision is clear, they usually reach decisions quickly. They fear loss of control. Those who are Competitive are highly motivated, success- and goal-oriented, hardworking, image-conscious, good planners, and good at promoting their ideas. They are able to subordinate their present needs to develop future success. They can be intense, very persuasive about getting their own way, and are particularly irritated by inefficiency.
Based on the above, each user may be classified into one of the psychographic categories based on real time interactions of the user with the GUI. The GUI is adapted according to the psychographic category. Another set of exemplary behavior profiles is described below. The exemplary behavior profiles represent categories of potential customers at different stages of the target action performing process (e.g., the converting process, the buying process). It is noted that none of the behavior profiles below are necessarily more likely to perform the target action in comparison to other behavior profiles. For example, the person who knows exactly what he/she wants may be easily distracted by other offers, whereas the person who is simply browsing may become an immediate buyer. The behavior profiles may describe where people may be within their own minds and/or within the target action taking (e.g., buying) cycle. It is the adaptation of the GUI according to the classified behavior profile that is performed to increase the probability of the user performing the target action, in comparison to the probability of the user performing the target action on the non-adapted GUI and/or the earlier version of the adapted GUI.
* Accidental type denoting a user that accessed the GUI without a goal. The Accidental types include those who just stumbled upon the GUI (e.g., website) by mistake without any relevant goal or question. The interactions classified into the accidental type may include, for example, pressing the back button, taking no action, and/or pressing a link to another web site.
* Know-Exactly type denoting a user that has a specific purpose in accessing the GUI. The Know-Exactly types know exactly what they want, down to the model number (or its equivalent). Included in this category are those who might not be able to pinpoint a unique identifier but can describe exactly what they need. The interactions classified into the know-exactly type may include, for example, entering specific data (e.g., model number) into a search engine, and/or clicking on a specific icon to access specific data (e.g., clicking on an image of a specific product to learn more about it).
* Knows-Approximately type denoting users that know approximately what they want in using the GUI. The Knows-Approximately types are in the market to buy, or in the service system to perform a certain action, but they have not made their final decision on exactly what they want to do. The interactions classified into the knows-approximately type may include, for example, entering general and/or vague data (e.g., key words not specific to one product) into a search engine, and/or clicking on a general icon to access general data (e.g., clicking on an image of a category of multiple products to present additional, more specific products).
* Just-Browsing type denoting users that are in a browsing mode in using the GUI. The Just-Browsing type represents window shoppers who aren’t necessarily planning to take any specific action. In many ways, these individuals can be difficult to distinguish from the previous two categories of potential customer, since these are people who, when they run across just the right thing, will take action. The interactions classified into the just-browsing type may include, for example, a random pattern of interacting with the GUI.
At 108, one or more adaptations of one or more objects of the GUI are selected according to the analysis of the monitored user interaction with the GUI, optionally in view of the target action associated with the GUI. The adaptation may be selected according to the classified behavior profile, optionally in view of the target action associated with the GUI.
The adaptations may be computed by the classifier that classifies the user interactions into the adaptations, as described herein. The classifier may further receive the target action and the user interactions for classification into the adaptation.
The adaptations may be computed based on the classified behavior profile. Each behavior profile may be mapped to a set of possible adaptations. The certain adaptation to be performed may be selected according to the target action associated with the GUI.
The adaptation may be selected according to the hardware of the screen on which the GUI is presented, for example, a different adaptation of the GUI for a desktop than a mobile device due to differences in screen size.
The adaptation may be selected according to the context of the objects of the GUI, for example, to maintain the “feel” and/or “look” of the GUI. For example, when the adaptation is adding another layer to the GUI, the size and/or color and/or fonts of the additional layer may be selected according to the size and/or color and/or fonts of existing objects of the GUI.
Conceptually, the adaptations are performed based on a qualification of the subjective experience of the user in comparison to the objective question (as described with reference to act 106). For example, “Does the user feel his/her question(s) is/are answered?”, “Does the user feel comfortable enough to make progress?” The adaptations are performed to the GUI to better meet the user’s objectives, based on the assumption that the probability of the user performing the target action is increased when the user’s objectives are met in comparison to the probability of the user performing the target action when the user’s objectives are not met.
For example, the following represent exemplary questions, reflecting information priorities and/or pace of deliberations by each of the exemplary psychographic categories described with reference to act 106. The adaptations may be performed to answer the questions.
* Methodical: Those who are Methodical may focus on language that answers HOW questions. For example: What are the details? What’s the fine print? How does this work? What’s the process you use? Can you take me through this step-by-step? How can I plan ahead? What are the product specs? What proof do you have? Can you guarantee that?

* Spontaneous: Those who are Spontaneous may focus on language that combines WHY (and sometimes WHEN) questions. For example: How can you get me to what I need quickly? Do you offer superior service? Can I customize your product or service? Can you help me narrow down my choices? How quickly can I take action and achieve my goals? How will this let me enjoy life more?
* Humanistic: Those who are Humanistic focus on language that answers WHO questions. For example: How will your product or service make me feel? Who uses your products/service? Who are you? Tell me who is on your staff and let me see bios. What will it feel like to work with you? What experience have others had with you? Can I trust you? What are your values? How will this help me strengthen relationships?
* Competitive: Those who are Competitive focus on language that answers WHAT questions. For example: What are your competitive advantages? Why are you a superior choice? Are you a credible company? How can you help me be more productive? How can you help make me look cutting edge? What are your credentials? What is your research? How can you help me achieve my goals?
The adaptations may be computed by another classifier that receives the classified behavior profile and the target action as input, and outputs a certain adaptation. The classifier may be trained based on training data that includes multiple records, each record storing a certain behavior profile of the possible behavior profiles, a certain adaptation of the possible adaptations, and an indication of whether or not the respective user performed the target action(s) when the dynamic GUI is adapted according to the certain adaptation. The classifier may be trained based on each iteration described with reference to act 114, in which adaptations are made to the GUI in an effort to increase the probability that the user performs the target action. The classifier may be locally dynamically trained, and/or centrally trained (e.g., stored on a server) by transmitting the data collected during the iterations from the client terminal to the server.
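One way to picture the training records and a resulting selection heuristic is the following sketch; the record fields are illustrative, and the success-rate table is a simple stand-in for the trained classifier, not the disclosed training method:

```python
# Sketch of the training records described above (field names illustrative);
# a per-profile success-rate table stands in for the trained classifier.
from collections import defaultdict

records = [
    {"profile": "methodical", "adaptation": "add_specs", "target_performed": True},
    {"profile": "methodical", "adaptation": "add_video", "target_performed": False},
    {"profile": "humanistic", "adaptation": "add_chat", "target_performed": True},
]

def best_adaptation(profile):
    stats = defaultdict(lambda: [0, 0])  # adaptation -> [successes, trials]
    for record in records:
        if record["profile"] == profile:
            entry = stats[record["adaptation"]]
            entry[0] += record["target_performed"]  # bool counts as 0 or 1
            entry[1] += 1
    # Select the adaptation with the highest observed success rate.
    return max(stats, key=lambda a: stats[a][0] / stats[a][1])

print(best_adaptation("methodical"))  # add_specs
```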
Exemplary adaptations include: adding a layer over existing objects, removing one or more objects, adding one or more objects, changing the color of the object, adjusting the position of the object within the GUI, changing the size of the object, and/or changing the orientation of the object.
Exemplary adaptations according to behavior type to increase the probability of the user performing the target action include:
* Methodical type: adding additional detailed data objects and/or re-organizing the presentation of the presented objects to increase order.

* Spontaneous type: personalizing one or more of the presented objects of the GUI. For example, extracting the name of the user and presenting it on the screen (e.g., “Welcome John Doe”), and/or extracting the favorite color of the user from the user profile and changing the color of object(s) accordingly.
* Humanistic type: adapting one or more of the presented objects for human interaction and/or based on human reactions. For example, adding a chat window to chat with customer support of the web site, and/or presenting feedback provided by other users.
* Competitive type: adding objects that provide for control over one or more other objects in the GUI. For example, adding a control window for adjusting the color of the background of the GUI.
* Accidental type: maintaining the objects without adapting the GUI.
* Know-Exactly type: presenting a list of specific models of products for selection.
* Knows- Approximately type: presenting images of general categories of products for selection for further details.
* Just-Browsing type: presenting a catalogue of products available for sale.
Optionally, the adaptations are selected according to the content available within the boundaries of the GUI, and/or within the web site itself, including links from the web page (i.e., GUI) to other web pages which may be part of the same site. The content may be automatically collected and/or analyzed, for example, by code that crawls the content of the GUI (e.g., web page) and optionally follows links to other web pages which may be part of the same site. For example, popularity messages are provided as useful answers for humanistic types.
Alternatively or additionally, the adaptations are selected according to the tolerance of each object for being adapted, for example, for adding additional layer(s) over the respective object. For example, a button in a gaming GUI may accept an additional Ribbon element in proximity, without causing interference, while the same button within an e-commerce GUI requires a different GUI approach, such as a rectangular banner. The tolerance of each object for being adapted, optionally in view of the GUI, may be stored, for example, as a set of rules, in a database, a function based on a machine learning algorithm, and/or manually entered by a user.
Alternatively or additionally, the adaptations are selected according to graphical compatibility with the current GUI, for example, based on contextual graphical blocks that form the GUI. The adaptation is selected to be accepted by the user as a natural part of the current GUI. For example, a blue ribbon of a social media site is automatically identified by the code as being held within a pink container. The following adaptations may be computed accordingly: selecting a pre-created ribbon shape (e.g., from a set of stored pre-created ribbon shapes), inserting additional text into the selected ribbon, coloring the selected ribbon according to colors that exist in the currently presented GUI, and positioning the ribbon relative to the pink container. The adaptation of each object according to graphical compatibility may be stored, for example, as a set of rules, in a database, as a function based on a machine learning algorithm, and/or manually entered by a user.
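A minimal sketch of the graphical-compatibility rule above follows: the added ribbon inherits color and font from the container it sits in, so it reads as a natural part of the current GUI. All names and the rule format are illustrative assumptions:

```python
# Sketch: the added ribbon inherits styling from its container so the
# adaptation blends into the existing GUI. All names are illustrative.

def style_ribbon(container, text):
    return {
        "shape": "ribbon",             # chosen from pre-created ribbon shapes
        "text": text,                  # additional text inserted into ribbon
        "color": container["color"],   # reuse a color existing in the GUI
        "font": container.get("font", "inherit"),
        "anchor": container["id"],     # positioned relative to the container
    }

pink_container = {"id": "social_box", "color": "#ffc0cb", "font": "Arial"}
print(style_ribbon(pink_container, "Most popular choice"))
```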
Optionally, the set of possible GUI adaptations may be automatically computed and/or defined in advance, and stored for real time adaptation of the GUI by selecting the adaptation from the set. Each possible GUI adaptation may be manually defined, for example, by the administrator of the GUI, and/or automatically created GUI adaptations may be manually approved, for example, by the administrator of the GUI.
At 110, a dynamically adapted GUI is created by dynamically adapting one or more objects of the GUI according to the analysis of the monitored interactive action(s) performed on the GUI, optionally in view of the target action associated with the GUI.
The dynamic adaptation of object(s) of the GUI is performed to increase the probability of the user performing the target action(s) on the dynamically adapted GUI in comparison to the user performing the target action(s) on the GUI prior to the dynamic adaptation.
Optionally, the adaptation of the object(s) of the GUI is performed while maintaining existing content. For example, the size, shape, color, and/or location of objects are adjusted while the content itself (e.g., text, images, pictures, links, videos, other multimedia objects) is maintained. Alternatively or additionally, content may be adjusted, for example, objects storing irrelevant content (e.g., according to the behavior profile) are removed and/or additional objects storing relevant content (e.g., according to the behavior profile) are added. For example, for a detail oriented user, additional detail objects are added. For an organized user, excess detail is removed.
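For example, in a browser setting, presentation-only adaptation and content-level adaptation might be sketched as follows (element ids, class names, and style values are illustrative assumptions):

```typescript
// Browser-side sketch: adjust size, color, and position of an existing object
// while leaving its content (text, links, images) untouched.
function emphasizeObject(elementId: string): void {
  const el = document.getElementById(elementId);
  if (!el) return;
  el.style.fontSize = "1.25em";          // size adjusted
  el.style.backgroundColor = "#fff3cd";  // color adjusted
  el.style.order = "-1";                 // moved earlier within a flex container
  // Note: el.textContent and child nodes are deliberately not modified.
}

// Content-level adaptation: add a detail object for a detail-oriented user.
function addDetailObject(parentId: string, text: string): void {
  const parent = document.getElementById(parentId);
  if (!parent) return;
  const detail = document.createElement("p");
  detail.className = "added-detail";
  detail.textContent = text;
  parent.appendChild(detail);
}

// Content-level adaptation: remove an excess-detail object for an organized user.
function removeDetailObject(elementId: string): void {
  document.getElementById(elementId)?.remove();
}
```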
At 112, the target action(s) performed by the user on one or more of the GUI objects is detected. For example, the user makes a purchase, clicks on a target icon, enters personal information into a form, and/or views an advertisement.
Alternatively, at 114, when the user has not yet performed the target action, one or more features described with reference to acts 102-110 are iterated.
The monitoring of the user interactions (as described with reference to act 104) is performed based on the dynamically adapted GUI. The user interactions are analyzed (as described with reference to act 106), which may result in a re-classification into another behavior profile, or maintenance of the existing behavior profile. Another adaptation of the same object and/or another object of the GUI may be performed. The current dynamically adapted GUI may undergo another adaptation by dynamically adapting the currently dynamically adapted GUI according to the additional adaptation. The iterative adaptations to the GUI are performed to increase the probability (i.e., with each subsequent iteration) that the user performs the target action.
The iterations may be dynamically performed during the same user session (i.e., current session), according to changes in the interactive actions performed on the GUI (e.g., changing user behavior), and/or according to increasing knowledge gained about the current user interactions. Each iteration is designed to increase the probability that the user performs the target action using the current version of the dynamically adapted GUI over the probability of performing the target action using the previous version of the dynamically adapted GUI.
The rate of the dynamic adjustment of the GUI may be set, for example, manually by an administrator and/or automatically by code, and/or determined in real time, for example, according to changes in the behavior profile. The rate of adjustment may be set, for example, to once every 5 minutes to avoid user confusion. Alternatively, the rate may be automatically set according to the user interactive action(s) performed on the GUI. Users that change their mind by performing different interactions (which may be mapped to different behavior profiles) are presented with dynamically adapted GUIs that keep up with the changing interactive action(s) performed on the GUI by the user.
The iterations may be performed based on an adaptation of the GUI that involves a target user interaction, which is different from the target action. For example, the adaptation may be to present an object in which the user may enter key words to be fed into a search engine, while the target action is the user making a purchase. The search engine is presented within the GUI to aid the user in searching for a product to buy. The reaction of the user to the adaptation of the GUI, i.e., whether or not the user performed the target user interaction, may be analyzed to determine the next adaptation of the GUI. For example, when the user has not used the search engine, or when the user attempted to use the search engine but the search engine is not working, another adaptation of the GUI may be performed, for example, presenting suggested products to the user rather than a search engine.
The iterations may be performed until a stop criterion is met, for example, a time limit and/or a number of adaptations. The stop criterion may indicate that the adaptations are not effective in leading the user to perform the target action. A default adaptation may be performed when the stop criterion is met, for example, a chat service window for the user to directly contact the administrator of the GUI (e.g., to ask about a specific product in the case of an online store).
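A non-limiting TypeScript sketch of the iterative loop with such a stop criterion follows; the SessionApi interface and the numeric limits are hypothetical placeholders for the acts described herein:

```typescript
// Sketch of the iterative adaptation loop (acts 102-110), with a stop
// criterion (time limit and/or maximum number of adaptations) and a default
// fallback adaptation (e.g., opening a chat window). All names hypothetical.
interface SessionApi {
  targetActionPerformed(): boolean;             // act 112
  monitorAndClassify(): Promise<string>;        // acts 104-106, returns profile
  applyAdaptation(profile: string): void;       // act 110
  applyDefaultAdaptation(): void;               // e.g., show chat window
}

async function adaptUntilDone(
  session: SessionApi,
  maxAdaptations = 5,
  timeLimitMs = 10 * 60_000,
): Promise<void> {
  const start = Date.now();
  for (let i = 0; i < maxAdaptations; i++) {
    if (session.targetActionPerformed()) return;   // target action detected
    if (Date.now() - start > timeLimitMs) break;   // stop criterion: time limit
    const profile = await session.monitorAndClassify();
    session.applyAdaptation(profile);
  }
  // Stop criterion met without the target action: fall back to a default.
  session.applyDefaultAdaptation();
}
```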
At 116, the classifier(s) used in act 106 is trained. It is noted that training of the classifier may be performed on a different computing device and/or different server, independently of execution of acts 102-114. For example, the classifier may be trained prior to execution of act 102. The classifier may be updated based on the results of execution of acts 102-114, as described with reference to act 118.
The classifier may be trained based on a training dataset that includes a label for each user classifying the user into one of the possible behavioral categories. The label may be created, for example, manually determined by an expert in the behavior categories, manually determined by the users themselves, based on each user filling out a questionnaire (e.g., a validated tool) that classifies each user according to their answers, and/or based on code that automatically analyzes other aspects of the user (e.g., user profile, demographics, past shopping history, comments made on social media sites). The training dataset includes the user interaction with the GUI, optionally stored in a suitable data structure as described herein. The classifier is trained according to the training dataset, to classify a new user into one of the possible behavior categories based on an input of user interactions with the GUI.
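As a non-limiting illustration of training on such labeled records, the following sketch uses a toy nearest-centroid classifier over interaction feature vectors; the embodiments described herein may instead use, e.g., a neural network, and the feature names are assumptions:

```typescript
// Toy training sketch: learn one centroid per behavior profile from labeled
// interaction records, then classify a new user by nearest centroid.
interface TrainingRecord {
  features: number[];  // e.g., [clickCount, scrollDepth, idleSeconds]
  label: string;       // behavior profile, e.g., "methodical"
}

function trainCentroids(data: TrainingRecord[]): Map<string, number[]> {
  const sums = new Map<string, { sum: number[]; n: number }>();
  for (const { features, label } of data) {
    const acc = sums.get(label) ?? { sum: features.map(() => 0), n: 0 };
    acc.sum = acc.sum.map((s, i) => s + features[i]);
    acc.n += 1;
    sums.set(label, acc);
  }
  const centroids = new Map<string, number[]>();
  for (const [label, { sum, n }] of sums) {
    centroids.set(label, sum.map((s) => s / n));
  }
  return centroids;
}

function classifyUser(centroids: Map<string, number[]>, features: number[]): string {
  let best = "";
  let bestDist = Infinity;
  for (const [label, c] of centroids) {
    const d = Math.hypot(...c.map((v, i) => v - features[i]));
    if (d < bestDist) { bestDist = d; best = label; }
  }
  return best;
}
```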
At 118, the classifier(s) are updated according to the data collected during the current session. For example, the user interactions, the classification results (e.g., behavior profile(s)), the target action(s), the GUI, and/or the selected adaptation(s) may be collected by the client terminals, computing device, and/or web server, and transmitted to a host of the classifier for training the classifier (e.g., the computing device, a remote server). Updating the classifier based on additional data collected from different users using different client terminals and/or different user interfaces to interact with different GUIs to perform different target actions increases the accuracy of the classifier.
Reference is now made to FIG. 3A, which is a schematic of a GUI 302 of a gaming web site, prior to dynamic adaptation, in accordance with some embodiments of the present invention. GUI 302 denotes a registration page for the gaming web site. The user interactions are analyzed as described herein. Conceptually, the analysis based on real time user interactions determines that the user objective is interest in popular social gaming, and the question the user has is "Are many people using this game?"
Reference is now made to FIG. 3B, which is a schematic of a dynamically adapted GUI 304 based on dynamic adaptation of GUI 302 of FIG. 3A according to the analysis of monitored user interactions, in accordance with some embodiments of the present invention. A new banner 306 is added to GUI 302 to create GUI 304 based on the analysis of the monitored user interactions, to increase the probability that the user performs the target action, which is registering to play the game. Conceptually, based on the user objective and user question, the analysis quantifies the user experience as "Question not answered". The GUI is adapted by inserting banner 306, which answers the user question by confirming that the game is indeed popular, to increase the probability that the user performs the target action of registering to use the game (i.e., conversion).
Reference is now made to FIG. 4, which is a schematic depicting an exemplary client-server architecture for dynamic adaptation of a GUI according to user interactions with the GUI, in accordance with some embodiments of the present invention. Components described with reference to FIG. 4 are based on components of system 200 described with reference to FIG. 2. Client terminal 410 communicates with computing device 404 acting as a server over a network 412, as described herein. The GUI is presented on a display of client terminal 410. The user interactions may be monitored by code executing on client terminal 410. The classification of the user interactions into the behavior profile is performed by server 404 according to the user interactions provided by client terminal 410 over network 412. The server generates instructions for dynamic adaptation of the GUI presented on the display of client terminal 410.
Architecture 400A denotes a standard network architecture, in which communication is one-way over network 412. For example, based on AJAX, communication is initiated by a request from client terminal 410, and server 402 is unable to initiate communication. Server 402 is effectively stateless, with limited or no memory, and is dependent on communication from client terminal 410.
Architecture 400B denotes a real-time network architecture, for example based on the WebSocket protocol and/or the Distributed Data Protocol (DDP). Architecture 400B provides bi-directional communication, where both client terminal 410 and server 402 may communicate with each other at the same time over network 412. Architecture 400B reduces the time required to classify the user interactions into the behavior profile and/or update the GUI, by distributing the classification of the user interaction and/or selection of the adaptation of the object(s) of the GUI between client terminal 410 and server 402.
The WebSocket protocol enables interaction between a web client (such as a browser) and a web server with lower overheads, facilitating real-time data transfer from and to the server. A standardized way is provided for the server to send data to the client without being first requested by the client, and for messages to be passed back and forth between the client and server while keeping the connection open. In this way, a two-way ongoing conversation can take place between the client and the server. The communications are done over TCP port number 80 (or 443 in the case of TLS-encrypted connections), which is of benefit for those environments which block non-web Internet connections using a firewall. DDP is a client-server protocol for querying and updating a server-side database and for synchronizing such updates among clients. DDP uses the publish-subscribe messaging pattern. It was created for use by the Meteor JavaScript framework.
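A non-limiting server-side sketch of such bi-directional communication, assuming the ws npm package, follows; the message shapes and the classifyInteraction placeholder are illustrative assumptions rather than part of the described protocol:

```typescript
// Server-side sketch of architecture 400B using the `ws` package
// (assumption: npm install ws). The server can push adaptation instructions
// without waiting for a client request; the client streams interaction events.
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.on("message", (raw) => {
    // Hypothetical message shape: { type: "interaction", payload: {...} }
    const event = JSON.parse(raw.toString());
    if (event.type === "interaction") {
      const profile = classifyInteraction(event.payload);
      // Server-initiated push: send the adaptation instruction immediately.
      socket.send(JSON.stringify({ type: "adapt", profile }));
    }
  });
});

// Placeholder for the behavior-profile classification described herein.
function classifyInteraction(payload: unknown): string {
  return "humanistic";
}
```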
Reference is now made to FIG. 5, which is a block diagram depicting an exemplary dataflow for adapting a GUI according to user interactions with the GUI, in accordance with some embodiments of the present invention. The dataflow described with reference to FIG. 5 is based on one or more features and/or components described with reference to one or more of FIGs. 1-4.
At 502, the user session is initiated, for example, the web page is loaded by the web browser, and/or the application is loaded and executed. The GUI is presented on a display, for example, of the client terminal. The user interacts with the GUI during user session 502, as described herein.
The GUI may be registered to use the services provided by the computing device for dynamic adaptation, for example, provided via a BackOffice REST API 501. Following site registration, basic site information may be stored within a database storing Basic Site info definitions (e.g., URL, site context, and target action (e.g., conversion goals)).
Select performance pixel code 504 (e.g., code 206A described with reference to FIG. 2) may be integrated into the GUI, optionally the web site, for example, as a plug-in into the web browser, API, SDK, library files, and/or other software interfaces.
Upon loading the GUI (e.g., web site), pixel code 504 lists the session as a subscriber to the GUI adaptation service when the GUI is registered.
A script builder server 506 (which may be implemented as code instructions executed by computing device 204 described with reference to FIG. 2) calculates relevant modules based on the site input definitions 508, which are loaded to the client terminal (e.g., client terminal 210 described with reference to FIG. 2) for example using socket.io, and a cache database (DB) representation is initialized (e.g., MiniMongo), for example, based on DDP.
The user continues to interact with the GUI during user session 502, as described herein. The monitored interactions are provided to an analytics server 510 (which may be implemented as code instructions executed by computing device 204 described with reference to FIG. 2), which may analyze the monitored interactions and store them in a site benchmark info database 512. The monitored interactions stored by site benchmark info database 512 are fed into a neural network 514.
Neural network 514 analyzes and/or classifies the monitored user interactions with the GUI during the user session, as described herein. Neural network 514 may output a probability of the analysis and/or classification, as described herein. The behavior classification, optionally an identification of a persona type of the user (as described herein), is stored in a persona identifier database 516. The DDP updates the client cache DB with the classification when required.
A response generation/optimization server 518 (which may be implemented as code instructions executed by computing device 204 described with reference to FIG. 2) computes and/or stores possible GUI adaptations as described herein. Adaptations are sometimes referred to herein as responses. The adaptation(s) are selected according to the behavior profile classification (e.g., identified persona type), as described herein. The possible GUI adaptations may be applied to a web site (e.g., based on HTML and/or CSS). The selected adaptations may be loaded as modules to the client terminal.
In real time, at the client terminal (i.e., client side), the monitored user interactions are classified. An adaptation is selected and applied to adapt the GUI, optionally when a probability of the classification result is above a threshold, for example, over 80%.
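A minimal client-side sketch of applying an adaptation only above such a classification threshold follows (the names are hypothetical; the 0.8 value mirrors the exemplary threshold above):

```typescript
// Client-side sketch: apply an adaptation only once the classification
// probability crosses the threshold, as described above.
interface Classification {
  profile: string;      // behavior profile, e.g., persona type
  probability: number;  // classifier confidence, 0..1
}

function maybeAdapt(c: Classification, apply: (profile: string) => void): void {
  const THRESHOLD = 0.8; // exemplary threshold from the description
  if (c.probability > THRESHOLD) apply(c.profile);
  // Otherwise keep monitoring; further interactions may sharpen the classification.
}
```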
The results of whether the user performed the target action on the adapted GUI may be stored in a response and effectiveness database 520, which may be used to update and/or train neural network 514, as described herein.
MongoDB is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. MongoDB is developed by MongoDB Inc., and is published under a combination of the GNU Affero General Public License and the Apache License. Minimongo is a client-side MongoDB implementation which supports basic queries, including some geospatial ones. Code from the Meteor.js minimongo package is used, reworked to support more geospatial queries and made npm+browserify friendly. The code is either IndexedDb backed (IndexedDb), WebSQL backed (WebSQLDb), local storage backed (LocalStorageDb), or in-memory only (MemoryDb).
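A non-limiting client-cache sketch using minimongo's MemoryDb follows; the callback-based API shape is assumed from the minimongo README, and exact signatures may differ between versions:

```typescript
// Client-side cache sketch using minimongo's in-memory backend (assumption:
// npm install minimongo; the package ships without TypeScript types).
const minimongo = require("minimongo");

const db = new minimongo.MemoryDb();
db.addCollection("persona");

// Cache the behavior-profile classification pushed from the server via DDP,
// then read it back when selecting the next adaptation.
db.persona.upsert({ _id: "current", profile: "methodical", p: 0.86 }, () => {
  db.persona.findOne({ _id: "current" }, {}, (doc: any) => {
    console.log("cached persona:", doc.profile, "probability:", doc.p);
  });
});
```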
The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
It is expected that during the life of a patent maturing from this application many relevant GUIs will be developed and the scope of the term GUI is intended to include all such new technologies a priori.
As used herein the term "about" refers to ± 10 %. The terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to". These terms encompass the terms "consisting of" and "consisting essentially of".
The phrase "consisting essentially of" means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
As used herein, the singular form "a", "an" and "the" include plural references unless the context clearly dictates otherwise. For example, the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.
The word "exemplary" is used herein to mean "serving as an example, instance or illustration". Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments.
The word "optionally" is used herein to mean "is provided in some embodiments and not provided in other embodiments". Any particular embodiment of the invention may include a plurality of "optional" features unless such features conflict.
Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases "ranging/ranges between" a first indicated number and a second indicated number and "ranging/ranges from" a first indicated number "to" a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.
In addition, any priority document(s) of this application is/are hereby incorporated herein by reference in its/their entirety.

Claims

WHAT IS CLAIMED IS:
1. A method for dynamically updating a graphical user interface (GUI) based on a dynamic behavioral analysis of a user interacting with the GUI during a current session, comprising:
presenting a GUI comprising a plurality of objects on a display of a client terminal, wherein the GUI is associated with at least one target action performed by a user on the GUI;
monitoring at least one interactive action performed on the GUI by a user during a current session;
analyzing the at least one interactive action performed on the GUI during the current session; and
creating a dynamically adapted GUI by dynamically adapting at least one object of the plurality of objects of the GUI according to the analysis of the at least one interactive action performed on the GUI, wherein the dynamic adaptation is performed according to a computed increase in probability of the user performing the at least one target action on the dynamically adapted GUI in comparison to a computed probability of the user performing the at least one target action on the GUI prior to the dynamic adaptation.
2. The method of claim 1, wherein the monitoring, the analyzing, and the creating are iterated to increase the computed probability of the user performing the at least one target action on the current dynamically adapted GUI in comparison to a computed probability of the user performing the at least one target action on a previously adapted GUI.
3. The method of claim 1, wherein the analyzing comprises classifying the at least one interactive action performed on the GUI into one of a plurality of behavior profiles, and dynamically adapting the at least one object of the plurality of objects based on the classified behavior profile.
4. The method of claim 3, wherein the dynamic adaptation of the at least one object of the plurality of objects of the GUI is selected according to the behavior profile, and according to the at least one target action.
5. The method of claim 3, wherein the classification is iteratively performed based on at least one interactive action performed on the GUI obtained during sequential time intervals until a probability of the classification into one of a plurality of behavior profiles is above a threshold.
6. The method of claim 3, wherein the classification is performed based on a mapping of each of the objects of the GUI designed for user interaction to one of the behavior profiles.
7. The method of claim 3, wherein the classification is performed according to a layout of the plurality of objects of the GUI.
8. The method of claim 3, wherein the plurality of behavior profiles are indicative of a current state of a dynamic state of the user, wherein the dynamic state may vary during the current session.
9. The method of claim 3, wherein the plurality of behavior profiles are indicative of different possible personas of the user.
10. The method of claim 9, wherein the plurality of behavior profiles are selected from the group consisting of: a Methodical type denoting a user favoring a GUI presenting logically organized details, a Spontaneous type denoting a user favoring a personalized GUI, a Humanistic type denoting a user favoring a GUI associated with a human touch, and a Competitive type denoting a user favoring a GUI that provides control features to the user.
11. The method of claim 9, wherein the dynamic adaptation of the at least one object of the plurality of objects of the GUI is according to the behavior profiles, comprising: Methodical type: adding additional detailed data objects and/or re-organizing the presentation of the presented objects to increase order, Spontaneous type: personalizing at least one of the presented objects of the GUI, Humanistic type: adapting at least one of the presented objects for human interaction and/or based on human reactions, and Competitive type: adding objects that provide for control over at least one other object in the GUI.
12. The method of claim 9, wherein the classification is performed by at least one classifier trained on a training dataset comprising a plurality of records, each record including a respective label indicative of a psychographic analysis of a respective user and interactions of the respective user with the respective GUI.
13. The method of claim 3, wherein the plurality of behavior profiles are indicative of different states of the process of performing the at least one target action.
14. The method of claim 13, wherein the plurality of behavior profiles are selected from the group consisting of: an Accidental type denoting a user that accessed the GUI without a goal, a Know-Exactly type denoting a user that has a specific purpose in accessing the GUI, a Knows-Approximately type denoting users that know approximately what they want in using the GUI, and a Just-Browsing type denoting users that are in a browsing mode in using the GUI.
15. The method of claim 14, wherein the dynamic adaptation of the at least one object of the plurality of objects of the GUI is according to the behavior profiles, comprising: Accidental type: maintaining the objects without adapting the GUI, Know-Exactly type: presenting a list of specific models of products for selection, Knows-Approximately type: presenting images of general categories of products for selection for further details, and Just-Browsing type: presenting a catalogue of products available for sale.
16. The method of claim 3, wherein the dynamic adaptation of the at least one object of the plurality of objects of the GUI is selected by at least one classifier that receives the classified behavior profile and the at least one target action as input, wherein the at least one classifier is trained on training data that includes a plurality of records, each record storing a certain behavior profile of a plurality of behavior profiles, a certain adaptation of a plurality of possible adaptations, and an indication of whether or not the respective user performed the at least one target action when the dynamic GUI is adapted according to the certain adaptation.
17. The method of claim 1, wherein the GUI comprises at least one of a web page, and an application.
18. The method of claim 1, wherein the at least one target action is selected from the group consisting of: clicking on a certain icon, selecting a certain graphical element, clicking on a certain link, registering as a user, performing a financial transaction, making a purchase, leaving contact details, and watching a video.
19. The method of claim 1, wherein the at least one interactive action performed on the GUI includes at least one member selected from the group consisting of: active actions performed by the user, negative actions performed by the user, and lack of action by the user.
20. The method of claim 1, wherein the at least one interactive action performed on the GUI includes at least one member selected from the group consisting of: clicking on a certain object of the plurality of objects of the GUI, entering data, movement patterns of a cursor across the GUI, physical user interface for interacting with the GUI, user touch patterns on a touchscreen presenting the GUI, gestures, voice activation patterns, adjustment of the GUI relative to the screen, adjustment of volume, selection of muting, selection of disabling of pop-ups, and no movement at all over a time interval.
21. The method of claim 1, wherein the at least one interactive action performed on the GUI is stored in at least one data structure selected from the group consisting of: an image denoting movement of the cursor over the screen over a time interval, a vector denoting locations on the screen where the user touched and/or moved the cursor to, and metadata associated with the plurality of objects of the GUI indicating the actions performed by the user.
22. The method of claim 1, wherein dynamically adapting at least one object of the plurality of objects of the GUI is selected from the group consisting of: adding a layer over at least one existing object, removing at least one object, adding at least one object, changing the color of at least one object, adjusting the position of at least one object within the GUI, changing the size of at least one object, and/or changing the orientation of at least one object.
23. The method of claim 1, wherein the dynamically adapting at least one object of the plurality of objects is performed while maintaining existing content.
24. The method of claim 1, wherein the dynamic adaptation of the at least one object of the plurality of objects of the GUI is selected according to at least one member of the group consisting of: a hardware of a screen on which the GUI is presented, a context of the plurality of objects of the GUI, content available within the boundaries of the GUI, tolerance of each object for being adapted, and graphical compatibility with the current GUI.
25. A system for dynamically updating a graphical user interface (GUI) based on a dynamic behavioral analysis of a user interacting with the GUI during a current session, comprising:
a non-transitory memory having stored thereon a code for execution by at least one hardware processor, the code comprising:
code for presenting a GUI comprising a plurality of objects on a display of a client terminal, wherein the GUI is associated with at least one target action performed by a user on the GUI; code for monitoring at least one interactive action performed on the GUI by a user during a current session;
code for analyzing the at least one interactive action performed on the GUI during the current session; and
code for creating a dynamically adapted GUI by dynamically adapting at least one object of the plurality of objects of the GUI according to the analysis of the at least one interactive action performed on the GUI, wherein the dynamic adaptation is performed according to a computed increase in probability of the user performing the at least one target action on the dynamically adapted GUI in comparison to a computed probability of the user performing the at least one target action on the GUI prior to the dynamic adaptation.
26. A computer program product for dynamically updating a graphical user interface (GUI) based on a dynamic behavioral analysis of a user interacting with the GUI during a current session, comprising:
a non-transitory memory having stored thereon a code for execution by at least one hardware processor, the code comprising:
instructions for presenting a GUI comprising a plurality of objects on a display of a client terminal, wherein the GUI is associated with at least one target action performed by a user on the GUI;
instructions for monitoring at least one interactive action performed on the GUI by a user during a current session;
instructions for analyzing the at least one interactive action performed on the GUI during the current session; and
instructions for creating a dynamically adapted GUI by dynamically adapting at least one object of the plurality of objects of the GUI according to the analysis of the at least one interactive action performed on the GUI, wherein the dynamic adaptation is performed according to a computed increase in probability of the user performing the at least one target action on the dynamically adapted GUI in comparison to a computed probability of the user performing the at least one target action on the GUI prior to the dynamic adaptation.
PCT/IL2019/050632 2018-06-06 2019-06-03 Systems and methods for dynamic adaptation of a graphical user interface WO2019234736A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862681109P 2018-06-06 2018-06-06
US62/681,109 2018-06-06

Publications (1)

Publication Number Publication Date
WO2019234736A1 true WO2019234736A1 (en) 2019-12-12

Family

ID=68770692

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2019/050632 WO2019234736A1 (en) 2018-06-06 2019-06-03 Systems and methods for dynamic adaptation of a graphical user interface

Country Status (1)

Country Link
WO (1) WO2019234736A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112256172A (en) * 2020-10-20 2021-01-22 北京字节跳动网络技术有限公司 Application display method, device, terminal and storage medium
WO2023021299A1 (en) * 2021-08-18 2023-02-23 Blue Prism Limited Systems and methods for determining gui interaction information for an end user device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100106703A1 (en) * 2006-05-02 2010-04-29 Mark Cramer Dynamic search engine results employing user behavior
US20170212650A1 (en) * 2016-01-22 2017-07-27 Microsoft Technology Licensing, Llc Dynamically optimizing user engagement


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19815323

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19815323

Country of ref document: EP

Kind code of ref document: A1