EP3320419A1 - System und verfahren zur anpassung einer benutzerschnittstelle - Google Patents

System und verfahren zur anpassung einer benutzerschnittstelle

Info

Publication number
EP3320419A1
Authority
EP
European Patent Office
Prior art keywords
code
user
user device
transformations
user interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP16750941.3A
Other languages
English (en)
French (fr)
Inventor
Eric CERET
Gaëlle PAEN-CALVARY
Marc BITTAR
Sophie CHESSA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institut Polytechnique de Grenoble
Universite Grenoble Alpes
Original Assignee
Institut Polytechnique de Grenoble
Universite Grenoble Alpes
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institut Polytechnique de Grenoble, Universite Grenoble Alpes filed Critical Institut Polytechnique de Grenoble
Publication of EP3320419A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • G06F8/38Creation or generation of source code for implementing user interfaces

Definitions

  • the present disclosure relates to the field of human-computer interaction, and in particular a system and method for generating a user interface.
  • GUI graphical user interface
  • the user device could for example be a desktop computer, a laptop, a smartphone, a tablet computer, etc.
  • a user interface may include an audio interface based on speech, a touch-type interface, and / or another type of interface.
  • User interfaces are used in a wide variety of software applications, such as word processing, spreadsheets, web browsers, etc.
  • the digital content of the user interfaces, and the manner in which they are presented on the user device, are determined by the data provided by a file stored on a user device or transmitted by a remote server.
  • display windows for some applications may be represented by files encoded in Java or C ++, while web pages are usually encoded by files of the HTML (Hypertext Markup Language) type.
  • user interfaces are generally designed to be compatible with a relatively wide range of user devices, so that a user can successfully access information, make selections, navigate to new windows / pages, and enter data.
  • a desktop device is generally considered to comprise a relatively large screen of 12 inches or more, a keyboard and a mouse
  • a mobile device is generally characterized by its use of a relatively small touch screen, and by the absence of a keyboard or mouse.
  • there are often two versions of a graphical user interface, one being adapted to "desktop" devices, and the other adapted to "mobile" devices.
  • the user installs the version of the application corresponding to his device, or, in the case of web interfaces, both types of web pages are stored by the remote server hosting the site, and one or the other is transmitted to the user device, for example on the basis of an automatic detection of the type of the user device, and / or on the basis of a selection by the user.
  • One solution for obtaining such versions of the user interface would be to extend the strategy currently adopted, and to create and store more than one version of each interface. However, as the number of versions to be supported increases, this solution leads to a high cost in data storage, and also to a high cost for the development and maintenance of each version.
  • a conceptual approach known as "Responsive Design” has been proposed to adapt a web page to a particular size of display window.
  • This approach involves programming the CSS file (Cascading Style Sheets) associated with an HTML document so that the display format is selected based on a detected window size in which the web page is to be displayed.
  • Adaptation can change the organization of the display (for example, change the location of some parts of the user interface), or make some parts of the user interface invisible.
  • the same user interface code is transmitted to all the devices accessing the web page, and one of the display formats defined in the CSS file of this code will be selected according to the size of the window. All the data that could be displayed on the device is transferred, even if the CSS directives express that in the current situation, some of the data is not displayed.
  • An object of embodiments of the present disclosure is to at least partially solve one or more problems of the prior art.
  • a system for generating a user interface, UI, to be executed on a user device, comprising: one or more memory devices storing a current context defined by characteristics associated with the user device, with one or more users of the user device and / or with the environment of the user device; and a processing device capable of selecting an adapted UI from among predefined UIs if one is suitable, or, failing that, of generating UI code to implement the UI by: generating a first portion of the UI code using one or more first transformations selected on the basis of one or more of the characteristics, said one or more first transformations being adapted to transform at least one model defining a first portion of the user interface into said first portion of the UI code; generating a second portion of the UI code using one or more second transformations selected on the basis of one or more of the characteristics; and assembling at least the first and second portions to generate the UI code, said one or more second transformations being adapted to transform at least one model defining a second portion of the user interface into said second portion of the UI code.
  • the processing device is further adapted to calculate a distance between the current context and a context associated with one or more other UI codes generated previously, and to select one of the other UI codes according to this distance.
  • the processing device is further adapted to generate the UI code on the basis of one or more additional transformations previously selected for the generation of a user interface adapted to a previous context similar to the current context.
  • said one or more memory devices further store dynamic content to be represented by the UI, and the processing device is adapted to select said one or more first and second transformations further on the basis of the dynamic content.
  • the characteristics comprise one or more of: an indication of the size of the display of the user device; an indication of at least one UI programming language compatible with the user device; an indication of an ambient light level detected by the user device; an indication of an ambient noise level detected by the user device; an indication of a level of expertise of the user of the user device; an indication of the language of the user of the user device; and an indication of the eyesight level of the user of the user device.
  • the user device comprises one or more of: an ambient light detector for detecting the ambient light level; a microphone for detecting the ambient noise level; and a location device for detecting the position of the user device.
  • the UI is an audio interface.
  • the UI is a graphical user interface
  • each of the first and second portions of the UI code generates a corresponding elementary window of the graphical user interface.
  • At least one of the first and second portions of the UI code implements a user input element to allow a user to enter a selection.
  • the processing device is further adapted to generate a header code and a termination code of the UI code.
  • the processing device is further adapted to receive one or more modified characteristics and to adapt the UI code based on the modified characteristics.
  • the adaptation of the UI code on the basis of the modified characteristics is a global adaptation of the UI code comprising the regeneration of the entire UI code.
  • the processing device is adapted to identify at least the first portion of the UI code to be adapted on the basis of the modified characteristics, and the adaptation of the UI code comprises a local adaptation of the UI code in which the first portion of the UI code is adapted but not the second portion.
  • the processing device is adapted to select the first and second transformations by calculating a distance with respect to a volume $V_i$ corresponding to each transformation on the basis of the following equation: $d(C, V_i) = \sqrt{\sum_{k=1}^{n} \pi_{K_k}\, d_{K_k}(C, V_i)^2}$
  • where $K_{k,c}$ is the current value of each of the characteristics $K_k$, $\pi_{K_k}$ is a weighting coefficient associated with the characteristic $K_k$, and $d_{K_k}(C, V_i)$ is the distance between the current context C and the volume $V_i$ along the characteristic $K_k$
  • the system is implemented: in the user device; and / or in a remote server arranged to communicate with the user device via one or more intermediate networks.
  • a method for generating a user interface, UI, to be executed by a user device, comprising: generating UI code to implement the UI by: generating a first portion of the UI code using one or more first transformations selected on the basis of one or more characteristics associated with the user device, with one or more users of the user device and / or with the environment of the user device, said one or more first transformations being adapted to transform at least one model defining a first portion of the user interface into said first portion of the UI code; generating a second portion of the UI code using one or more second transformations selected on the basis of one or more of the characteristics; and assembling at least the first and second portions to generate the UI code, said one or more second transformations being adapted to transform at least one model defining a second portion of the user interface into said second portion of the UI code; and executing the UI code by the user device.
  • the method further comprises calculating a distance between the current context and a context associated with one or more other UI codes generated previously, and the selection of one of the other UI codes according to this distance.
  • the assembly of at least the first and second portions to generate the UI code comprises generating a header code and a termination code of the UI code.
  • the first portion of code corresponds to an elementary window of the user interface
  • the generation of the first portion of code comprises the generation of a code header of a GUI (graphical user interface) element and a code termination of a GUI element.
  • Figure 1 schematically illustrates an exemplary system for providing a user interface to a user device
  • Figure 2 schematically illustrates a system for providing a user interface to a user device according to an exemplary embodiment of the present description
  • Figure 3 schematically illustrates a user device according to an exemplary embodiment
  • FIG. 4A represents a graphical user interface according to an exemplary embodiment
  • FIG. 4B represents an elementary data input window of the interface of FIG. 4A according to an exemplary embodiment
  • FIG. 4C represents elementary identification windows of the interface of FIG. 4A according to several exemplary embodiments
  • Figs. 5A, 5B and 5C are flow diagrams illustrating operations in a method for generating a user interface according to an exemplary embodiment
  • Fig. 6 is a diagram showing user interface categories adapted to light levels, user expertise, and display surface, according to an exemplary embodiment
  • Fig. 7A is a diagram showing volumes in a one-dimensional space according to an exemplary embodiment
  • Figure 7B is a diagram showing volumes in a one-dimensional space according to another embodiment
  • Fig. 7C is a diagram showing volumes in a two-dimensional space according to an exemplary embodiment
  • Fig. 8 is a flowchart illustrating operations in a method for generating a user interface according to an exemplary embodiment
  • Figs. 9A and 9B are flowcharts showing operations in a method of selecting a magnetic volume based on a distance calculation according to an exemplary embodiment
  • Fig. 10A is a flowchart illustrating a method of dynamically adapting a user interface according to an exemplary embodiment
  • Fig. 10B is a flowchart illustrating steps of an operation of Fig. 10A of context monitoring according to an exemplary embodiment
  • Fig. 10C is a flowchart illustrating steps of an operation of Fig. 10A for selecting a pre-existing user interface according to an exemplary embodiment
  • Fig. 10D is a flowchart illustrating steps of an operation of Fig. 10A for managing faulty operation according to a centralized approach
  • Fig. 10E is a flowchart illustrating steps of an operation of Fig. 10A for managing faulty operation according to a distributed approach.
  • GUI graphical user interface
  • FIG. 1 schematically illustrates an exemplary system 100 for providing a graphical user interface according to a solution similar to that described in the aforementioned prior art section.
  • the system includes a number of user devices, such as a laptop 102, a smartphone 104, and an internet-capable watch 106.
  • Each user device 102, 104, 106 includes a display for displaying a GUI.
  • each user device 102, 104, 106 may execute a web browser arranged to communicate with a remote server (SERVER) 108 via one or more intermediate networks 110 comprising, for example, the Internet.
  • a web browser is a software application, loaded onto a user device, which provides access to information via web pages provided by a remote server. The web browser provides information resources to a user, allowing the user to view the information, to enter data, and to access other information by browsing other pages.
  • the remote server (SERVER) 108 for example transmits web pages to the user devices 102, 104, 106 to display content and to receive selections made by the user.
  • the web pages are for example stored in HTML format by one or more memory devices (MEMORY) 112 at the remote server 108.
  • the memory 112 stores different versions of each web page, each version providing a different GUI, GUI1 to GUIN.
  • Each of these interfaces is for example adapted to a different type of user device, user expertise and / or environment.
  • While the response time is relatively short for the transmission of an appropriate one of the versions of the web page to a requesting user device, when the number N of GUI versions becomes relatively high there will be a corresponding increase in the memory resources needed to store these interfaces.
  • each user device 102, 104, 106 is for example designed to execute one or more software applications. In association with each software application, the user device 102, 104, 106 stores for example N interfaces GUI1 to GUIN (not shown in Figure 1).
  • GUIs are stored at a remote server and made available to any type of device
  • if an application or website includes 100 GUIs, in other words 100 web pages or windows, and each had to be provided in 6,000 versions to cover the combinations of device, user and environment characteristics, this would lead to a total of 600,000 interfaces.
  • the storage requirements for these many interfaces would be 6000 times those of the 100 GUIs, and the cost of development and maintenance would be excessive.
  • the number of GUI versions could already be limited to a given type of device and a given programming language, which, in the previous example, would divide the number of versions to be made available by 20. However, the number of GUI versions would still be 300 per GUI, which is likely to exceed the available storage capacity of an internet-enabled watch or smartphone.
  • FIG. 2 schematically illustrates an exemplary system 200 for providing a user interface according to an exemplary embodiment of the present description.
  • adapted user interfaces are generated by a server 208 and transmitted to the user devices (not shown in FIG. 2).
  • the system 200 can communicate with one or more user devices via one or more intermediate networks 110 such as the Internet.
  • the system 200 comprises the server 208 and one or more memory devices (MEMORY) 212 storing user or device profiles 1 to N (USER / DEVICE PROFILE1 to USER / DEVICE PROFILEN), as will be described in more detail hereinafter.
  • the server 208 comprises, for example, a communication interface (COMMS INTERFACE) 220 for communicating with the user devices via the intermediate networks 110.
  • the communications interface 220 is coupled to a UI generation module 222.
  • the module 222 is for example implemented by software.
  • the module 222 comprises a processing device (P) 224 comprising one or more processors under the control of instructions stored in an instruction memory (MEM INSTR) 226.
  • the module 222 also comprises for example one or more memories (MEMORY) 228 memorizing the dynamic content (DYNAMIC CONTENT) of a UI, and the transformations (TRANSFORMATIONS) used during the generation of the UI, in the sense of Model Driven Engineering and as we will describe it more in detail in the following.
  • the UI generation module 222 for example generates UIs practically in real time, in response to requests from user devices.
  • the memory 212 stores for example a profile indicating characteristics relating to the user device, such as the type of device, the size of the display screen, and other capabilities of the device.
  • the memory 212 also stores, for example, characteristics concerning the user, such as his preferred language, his level of expertise, and an indication of specific user requirements, such as a larger text size for users with poor eyesight, etc.
  • there could be a plurality of device profiles each corresponding to each user device that the user can use to access web pages.
  • the UI generation module 222 for example generates UIs based on the profile of the user and / or of the device requesting access to a web page / UI, and on the basis of one or more environment characteristics, which are for example received from the user device. Optionally, the UI generation module 222 also adapts the UI on the basis of dynamic content retrieved by the server 208 which must be displayed to the user.
  • the adaptation of the UI can be described as being "magnetic" in the sense that the generated UI is for example the one that best corresponds to the current context, in other words to the current device, user and / or environment characteristics.
  • FIG. 3 schematically illustrates a user device 300 according to an exemplary embodiment.
  • the user device 300 comprises for example a communication interface (COMMS INTERFACE) 302 allowing communications with the remote server 208.
  • the device 300 comprises for example a processing device 304 comprising one or more processors under the control of instructions stored in an instruction memory (INSTR MEM) 306; the processing device 304 performs the functions of the user device.
  • a web browser makes it possible, for example, to display web pages on a display (DISPLAY) 307 of the user device.
  • other types of applications stored by the memory 306 may allow other types of UIs to be displayed to a user.
  • the user device 300 comprises for example one or more detectors, such as an ambient light detector (AMBIENT LIGHT DETECTOR) 308, a microphone (MICROPHONE) 310, and a location device (GPS) 312.
  • the ambient light detector 308 for example indicates the brightness level to be used for the UI.
  • data from the microphone for example indicates whether the user is in a quiet environment such as an office, or in a noisy environment such as public transport.
  • the GPS 312 can for example indicate the position of the user device and / or the speed with which the user moves, which could influence how the UI is generated for this user device.
  • the user device 300 may receive user interfaces, such as web pages, generated by the remote server 208.
  • data captured by one or more of the detectors 308, 310, 312 are for example transmitted to the remote server 208 and used to define the profile of the device 300 stored in the memory 212.
  • In addition or instead, the user device 300 can itself generate appropriate user interfaces.
  • the user device 300 comprises for example one or more memory devices (MEMORY) 314, which could be the same memory as the instruction memory 306 or a different one, and which for example store a user and / or device profile (USER / DEVICE PROFILE) 316 and the transformations (TRANSFORMATIONS) used during the generation of the UI, as will be described in more detail below.
  • the user and / or device profile may include one or more readings from the detectors 308, 310 and 312.
  • the user and / or device profile may include other features.
  • these features may further be periodically transmitted to the remote server 208 to form the device profile associated with the user device 300.
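  • By way of illustration only, such a combined device / user / environment profile could be held in a structure like the following Python sketch; the field names, types and value scales are assumptions made here for illustration and are not part of the present disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class UsageContext:
    """Hypothetical snapshot of a current context of use, grouping the three
    kinds of characteristics discussed above: device, user and environment."""
    # Device characteristics (device profile)
    display_surface: float        # screen area, normalized 0..100
    has_mouse: bool
    has_touch_screen: bool
    ui_language: str              # UI programming language accepted, e.g. "HTML"
    # User characteristics (user profile)
    expertise: str                # "novice" .. "expert"
    preferred_language: str       # e.g. "fr"
    eyesight_level: int           # normalized 0..100
    # Environment characteristics (detectors 308, 310, 312)
    ambient_light: int            # normalized 0..100
    ambient_noise: int            # normalized 0..100
    position: Optional[Tuple[float, float]] = None  # GPS reading, if any

# Example of a context that the device 300 could periodically send to the server 208
context = UsageContext(display_surface=30.0, has_mouse=False, has_touch_screen=True,
                       ui_language="HTML", expertise="beginner", preferred_language="fr",
                       eyesight_level=80, ambient_light=35, ambient_noise=70)
```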
  • FIG. 4A illustrates an exemplary graphical user interface 400, which is for example generated by the UI generation module 222, or by the processing device 304 of the user device 300.
  • the GUI 400 has for example the form of a web page in the HTML file format, or in another appropriate programming language.
  • the GUI 400 is represented inside a boundary 401 representing the outer limit of the global UI window forming the GUI 400.
  • this global window 401 can be entirely visible on the display or only part of this window can be visible, and the user can navigate in the window 401, for example by causing scrolling using a mouse or by touching a touch screen.
  • the GUI 400 comprises for example elementary windows within the global GUI window, each elementary window comprising one or more interface elements such as titles, dialogs, buttons to click, etc.
  • the GUI provides, for example in response to a user request made using a separate web page, a list of cinemas in a 1 kilometer area around the user device. This is indicated in an elementary window 402, with the title "Cinemas within a radius of 1 Km”.
  • the GUI 400 comprises for example dynamic content in a basic window 404, which for example indicates a list 406 of cinemas (CINEMA 1 to CINEMA J), a list 407 of the distances of each cinema with respect to the user, and a list 408 of the addresses of each cinema.
  • a movie ticket reservation form 410 is also for example provided on the GUI 400, allowing a user of the user device to book tickets online to go see a movie in one of the cinemas.
  • the form 410 comprises for example a basic window 412 repeating the list of identified cinemas, each provided with a selection box 413 allowing the user to select one of the cinemas.
  • a basic date and time entry window 414 allows the user to select a time and date for the movie.
  • An elementary window 416 provides a movie selection box for presenting, in response to the user typing one or more letters of the movie title, a list 417 of available movies whose names begin with those letters.
  • a basic identification window 418 is also for example provided in the GUI 400, and consists of text entry boxes where the user can enter a user name and a password. For example, by logging on to the website, prerecorded user account data, such as credit card details, can be retrieved to allow the purchase of a movie ticket.
  • each of the elementary windows 402, 404, 412, 414, 416 and 418 is for example generated based on characteristics such as the dimensions of the display of the user device, whether the display is a touch screen, whether or not a mouse is available, and / or the expertise of the user in performing this type of data entry. Further, although not shown in FIG. 4A, the color and / or contrast of the GUI 400 are for example adapted based on the ambient light level detected by the user device.
  • the HTML file format may allow a brightness setting to be set to control the brightness of the display of the user device.
  • FIG. 4B illustrates an example of a variant of a date introduction elementary window 414 'that could be substituted for the elementary window 414 in a different version of the GUI 400.
  • the elementary date introduction window 414 of FIG. 4A is suitable for advanced users, and allows a date and a time to be entered.
  • the elementary date introduction window 414 ' is for example intended for less competent users, and allows only a selection of the date.
  • FIG. 4C illustrates examples of the implementation of the elementary identification window 418 that could be used in the GUI 400.
  • a basic window 420 in a minimum implementation includes only a text insertion box 422 allowing a user to enter identification data, another text input box 424 allowing the user to enter a password, and a small button 426 that the user can click using a mouse, or touch on a touch screen, to submit the data that has been entered.
  • This implementation is for example suitable for a very small display, and for reasonably advanced users, since there is no indication of the data that is required in each text box.
  • a slightly larger basic window 430 implementing the identification window 418 includes the same elements as the implementation 420, but with the words "Identifier:" and "Password:" added. This implementation is for example suitable for less advanced users than the window 420, but also requires a slightly larger display.
  • the elementary windows 440 and 450 provide progressively larger implementations of the identification window 418, containing the text insertion boxes 422, 424 and the button 426, implemented together with the prompt "Please identify yourself", which makes these interfaces suitable for even less competent users.
  • the elementary window 450 further includes a company logo 452 and a background image 454, requiring a relatively large display screen.
  • Fig. 5A is a flowchart illustrating operations in a UI generation method according to an exemplary embodiment. These operations are for example carried out by the UI generation module 222 of FIG. 2, by executing software instructions stored in the instruction memory 226.
  • a domain model representing the concepts manipulated by the UI; in the case of the GUI 400, the concepts could for example include cinemas;
  • a context model representing all the parameters to be taken into account during the adaptation process of the UI, such as the brightness, the screen size, the user's eyesight level, etc.;
  • a UI model, this model being for example a task model, representing the interface in the form of a series of tasks; an abstract UI model, which represents the various workspaces in the UI and the possibilities of navigation between them; or a concrete UI model, which represents selected interactors, such as composite interactors (e.g. a panel) or elementary interactors (e.g. a text field), within the workspaces of the abstract UI.
  • a task model representing the interface in the form of a series of tasks
  • an abstract UI model that represents the various workspaces in the UI and the possibilities of navigation between them
  • a concrete UI model that represents selected interactors, such as composite interactors (e.g. a panel) or elementary interactors (e.g. a text field), within the workspaces of the abstract UI.
  • the level of abstraction of the UI model is detected.
  • each type of model is associated with a given meta-model describing the model, and the system is able to detect the type of model based on the meta-model.
  • the model is transformed into an abstract UI model, and the next operation is the operation 504.
  • This transformation of the model is for example performed using one or more transformations selected on the basis of the characteristics of the device, the user and / or the environment.
  • the model is transformed into a concrete UI model, and the next operation is Operation 505.
  • the transformation of the model is for example carried out using one or more transformations selected on the basis of the characteristics of the device, the user and / or the environment.
  • the model is transformed into code to execute the UI, for example in the HTML file format or similar.
  • the transformation of the model is for example carried out using one or more transformations selected on the basis of the characteristics of the device, the user and / or the environment.
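  • As a minimal sketch of this chain of refinements (task model → abstract UI model → concrete UI model → final code), assuming dictionary-based models and purely illustrative helper names, the cascade could be written as follows; the real transformations are selected per context, as described below.

```python
def detect_abstraction_level(model: dict) -> str:
    # In the described system the level is detected via the meta-model
    # associated with the model; here we simply read a tag (assumption).
    return model["level"]

def select_transformation(step: str, context: dict):
    # Placeholder: the actual selection uses the distance-to-volume
    # computation described in relation with FIGS. 6 to 9B.
    catalog = {
        "task->abstract":     lambda m: {"level": "abstract", "source": m},
        "abstract->concrete": lambda m: {"level": "concrete", "source": m},
        "concrete->code":     lambda m: {"level": "code", "code": "<html>...</html>"},
    }
    return catalog[step]

def generate_ui_code(model: dict, context: dict) -> dict:
    """Refine a model down to executable UI code (operations 502 to 505)."""
    if detect_abstraction_level(model) == "task":
        model = select_transformation("task->abstract", context)(model)
    if detect_abstraction_level(model) == "abstract":
        model = select_transformation("abstract->concrete", context)(model)
    return select_transformation("concrete->code", context)(model)

print(generate_ui_code({"level": "task"}, {})["code"])  # -> "<html>...</html>"
```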
  • Fig. 5B is a flowchart illustrating an example of operations in a method of generating the UI code at the global level, in other words for the global UI window, such as window 401 in the example of Fig. 4A .
  • one or more transformations on the overall UI window are selected.
  • the at least one transformation transforms at least one model defining the UI into UI code to generate a global UI window having particular dimensions, a particular color model, a particular arrangement of elementary windows, and / or other characteristic elements. The selection of the transformation or transformations will now be described in more detail.
  • a global GUI code header representing the GUI is generated.
  • the body of the global GUI code is generated using said one or more selected transformations.
  • the body is generated by the execution of one or more elementary tasks to generate each elementary window. These basic tasks are called child elements.
  • In an operation 510, it is determined whether there is a child element to be processed. If so, the next operation is operation 511, in which the code of the child element is generated, as will be described with reference to FIG. 5C. The method then returns to the operation 510. Otherwise, if all the child elements have been processed, the next operation is an operation 512 in which a termination of the global GUI code is generated in order to complete the code representation of the GUI.
  • the next operation is an operation 515, in which the file comprising the generated code is transmitted to a user device for display.
  • Fig. 5C is a flowchart illustrating an example of operations for implementing operation 511 of Fig. 5B to generate the code for a child element.
  • one or more transformations are selected to generate the code of an elementary GUI window, based on the content to be displayed and on the characteristics of the user profile and / or of the device and / or of the environment (brightness, noise, etc.). For example, for at least some of the elementary windows of the user interface, there is a plurality of potential transformations that can be used to generate the elementary window having elements adapted to certain characteristics of the user device, the environment and / or the user. Selecting a transformation may involve the calculation of a distance representative of the extent to which each transformation is consistent with the characteristics, optionally with the application of a different weight to each characteristic. An exemplary method for selecting a transformation will be described in more detail below.
  • a GUI code element body is then generated using one or more selected transformations.
  • In an operation 519, it is determined whether there is a child element to be processed. If so, the next operation is operation 520, in which the code of the child element is generated. The method then returns to operation 519. Otherwise, if all the child elements have been processed, the next operation is the operation 521, in which a GUI code element termination is generated in order to complete the code representation of the child element of the global GUI.
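  • The header / children / termination pattern of FIGS. 5B and 5C lends itself to a simple recursive sketch; the element tree, the toy transformation choice and the HTML tags below are illustrative assumptions, not the actual transformations of the disclosure.

```python
def generate_element_code(element: dict, context: dict) -> str:
    """Generate the code of a GUI element and, recursively, of its children,
    mirroring operations 516 to 521 (and 506 to 512 at the global level)."""
    # Operation 516: select a transformation for this elementary window based
    # on the content and the device / user / environment characteristics.
    tag = "div" if context.get("display_surface", 0) > 50 else "span"  # toy choice
    header = f"<{tag} id='{element['id']}'>"                # element code header
    body = "".join(generate_element_code(child, context)    # operations 519 / 520
                   for child in element.get("children", []))
    return header + body + f"</{tag}>"                      # termination (operation 521)

# Usage: a global window 401 containing elementary windows 402 and 404
ui_tree = {"id": "window401", "children": [{"id": "window402"}, {"id": "window404"}]}
print(generate_element_code(ui_tree, {"display_surface": 80}))
```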
  • Figure 6 illustrates an example of a 3D characteristic space, in which a user expertise level (USER EXPERTISE) is represented on the x-axis, an ambient light level (LIGHT LEVEL) is shown on the y-axis, and the display size (DISPLAY SURFACE) is shown on the z-axis.
  • LIGHT LEVEL level of Ambient light
  • DISPLAY SURFACE display size
  • a first transformation leads to an elementary component of UI which is suitable for characteristic levels falling in a volume V1 of Figure 6
  • a second transformation leads to an elementary component of UI which is suitable for characteristic levels falling in a volume V2 of Figure 6
  • a third transformation leads to an elementary component of UI which is suitable for characteristic levels falling in a volume V3 of Figure 6.
  • the UI generation module 222 for example selects the transformation associated with one of the volumes V1, V2 and V3 on the basis of a distance calculation with respect to each volume. Examples of distance calculations will now be described in more detail with reference to Figs. 7A, 7B and 7C.
  • FIG. 7A is a diagram illustrating the case of one-dimensional volumes V1 and V2 for a single characteristic, each volume corresponding to a given transformation.
  • the distance $d(C, V_i)$ of a point C with respect to a volume $V_i$ is for example zero if the point C falls within the volume, in other words if $V_{i,min} \leq C \leq V_{i,max}$, where the volume $V_i$ has lower and upper limits $V_{i,min}$ and $V_{i,max}$ respectively. This is the case for point C1 in FIG. 7A, which falls in volume V1.
  • the volume with the shortest distance is selected, which for the points C2 and C3 is the volume V2.
  • Figure 7B illustrates another example of one-dimensional volumes V1 and V2, but for a characteristic defined by discrete values.
  • the value is for example the skill level of the user, which is one of the following five values in order: novice; beginner; competent; proficient; and expert.
  • for a point C2 falling outside a volume $V_i$, a distance calculation $d(C2, V_i) = \min(|C2 - V_{i,min}|, |C2 - V_{i,max}|)$ is applied, for example.
  • the distance is for example equal to 0 if the volume and the point have the same value, and equal to 1 if the volume and the point have different values.
  • For example, the distance will be equal to 0 if the user device has a positioning system, and equal to 1 otherwise.
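  • The three cases above (continuous ranges in FIG. 7A, ordered discrete values in FIG. 7B, and binary characteristics) can all be served by one per-characteristic distance function; encoding the discrete skill levels as integer ranks is an assumption made for this sketch.

```python
def characteristic_distance(value: float, v_min: float, v_max: float) -> float:
    """Distance from a current characteristic value to a volume's range:
    zero inside [v_min, v_max], otherwise distance to the nearest bound."""
    if v_min <= value <= v_max:
        return 0.0
    return min(abs(value - v_min), abs(value - v_max))

# Ordered discrete values, ranked (assumed encoding of the five skill levels)
EXPERTISE = {"novice": 0, "beginner": 1, "competent": 2, "proficient": 3, "expert": 4}

# A volume covering competent-to-expert users: a beginner is at distance 1
print(characteristic_distance(EXPERTISE["beginner"],
                              EXPERTISE["competent"], EXPERTISE["expert"]))  # 1

# Binary characteristic, e.g. presence of a positioning system (1 = present)
print(characteristic_distance(0, 1, 1))  # 1.0 -> no GPS, but the volume requires one
```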
  • Figure 7C illustrates an example of a case in which distances must be calculated in a two-dimensional space on the basis of characteristics K1 and K2.
  • the volumes V1 and V2 are for example defined by:
  • $V_1 = \{K_1 \in [K_{1,V1min}, K_{1,V1max}],\; K_2 \in [K_{2,V1min}, K_{2,V1max}]\}$
  • $V_2 = \{K_1 \in [K_{1,V2min}, K_{1,V2max}],\; K_2 \in [K_{2,V2min}, K_{2,V2max}]\}$
  • a point C1 falls in volume V1, and thus its distance from this volume is equal to 0.
  • point C2 is in the range of volume V1 with respect to the characteristic K1, but not with respect to the characteristic K2.
  • point C3 is not in the ranges of volumes V1 or V2 for either of the characteristics K1 and K2.
  • Each of the distances between the points C2 and C3 and the volumes V1 and V2 is for example defined as the smallest length of a straight line joining a point situated in the volume $V_i$ to the point C:
  • $d(C, V_i) = \sqrt{d_{K1}(C, V_i)^2 + d_{K2}(C, V_i)^2}$
  • the values of the characteristics K1 and K2 have been normalized with respect to each other, for example on a scale of 0 to 100, where 0 corresponds to the lowest value $K_{min}$ of the characteristic K, and 100 corresponds to the highest value $K_{max}$ of the characteristic K.
  • for n characteristics, the distance to a volume $V_i$ can thus be defined by: $d(C, V_i) = \sqrt{\sum_{k=1}^{n} d_{K_k}(C, V_i)^2}$
  • certain features may be weighted to have different importance.
  • the ambient light conditions may vary only relatively little and be of relatively low importance.
  • ambient light conditions can be of great importance, and thus be weighted accordingly.
  • a weighting coefficient $\pi_K$ is thus for example added to the formula, which becomes: $d(C, V_i) = \sqrt{\sum_{k=1}^{n} \pi_{K_k}\, d_{K_k}(C, V_i)^2}$
  • selecting certain volumes for certain characteristic ranges may be completely prevented. For example, it may not be possible for a certain elementary window associated with a certain volume to be applied to a display screen that is larger than a certain size.
  • $Lim_{min}$ is the lowest value of a characteristic K for which the volume V can be selected
  • $Lim_{max}$ is the highest value of the characteristic K for which the volume V can be selected.
  • only the lower limit $Lim_{min}$ or only the upper limit $Lim_{max}$ could be provided.
  • Fig. 8 is a flowchart showing an example of operations in a method for creating models and transformations for generating an elementary window according to an exemplary embodiment. These operations are for example performed by a designer during a design phase.
  • a set of n relevant characteristics $\{K_i\}_n$ is identified.
  • this operation involves determining, for the given content to be displayed, which relevant features may affect how the content is displayed. For example, when the number of items to be displayed is greater than a certain number, the size of the display of the user device may become relevant to decide how to represent that content.
  • each characteristic can take one of the following forms: a continuous value, an ordered discrete value, or a binary value, as in the examples of FIGS. 7A to 7C.
  • weights $\pi_{K_i}$ are defined for each characteristic. The weights are defined based on the content. For example, depending on the content to be displayed, the size of the display of the user device may be of high importance when defining the UI.
  • a plurality of magnetic volumes Vi is defined, for example by subdividing each characteristic range into two or more sub-ranges, each magnetic volume corresponding to a vector of a combination of selected sub-ranges.
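  • For instance, under the normalization to a 0-100 scale described above, magnetic volumes could be recorded as simple per-characteristic sub-ranges, each tied to the transformation it selects; the concrete names and numbers below are invented for illustration.

```python
# Each magnetic volume: a sub-range per characteristic (normalized 0..100)
# and the transformation associated with that volume.
VOLUMES = [
    {"name": "V1",
     "ranges": {"display_surface": (60, 100), "ambient_light": (40, 100),
                "user_expertise": (0, 50)},
     "transformation": "large_display_novice_layout"},
    {"name": "V2",
     "ranges": {"display_surface": (0, 60), "ambient_light": (0, 100),
                "user_expertise": (50, 100)},
     "transformation": "compact_expert_layout"},
]
```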
  • conditions are defined for an effective operation of each UI in order to specify the functional requirements of the UI and / or of the elementary components of the UI. For example, if the UI displays the current temperature, it needs a connection to a temperature sensor to get temperature readings. If this connection is lost, the UI cannot function properly, and for example produces a reaction such as displaying a warning or error message.
  • Such functional conditions and such reactions can be defined at the UI level or at the elementary component level.
  • one or more models and one or more transformations are designed and generated or otherwise created to generate an elementary UI window associated with each volume.
  • transformations that include instructions used by the UI generation module or by a user device during the generation of a UI, are created by a designer.
  • task, domain and context models are created by a designer, and these models are also used during the generation of the UI.
  • the context model can be filled by a designer based on user characteristics, or by programs and / or sensors, for example based on geolocation.
  • Other models, such as the abstract UI model and the concrete UI model can be generated automatically based on the task model.
  • Figure 9A is a flowchart illustrating operations in a method of selecting transformations to generate a UI consisting of one or more elementary windows. This method is for example carried out during the operation 516 of FIG. 5C.
  • a "getTransformation” function is called, with as input parameters the current context C, the magnetic volumes V, and the context characteristics K with their associated weights n Ki .
  • a variable “minDistance” is set to infinity
  • a result value “Transformation” of the function is set to zero
  • a variable v is set to 0.
  • In an operation 904, it is determined whether the variable v is smaller than Card(V), in other words the number of magnetic volumes. If the answer is yes, operations 905 to 908 are performed.
  • a distance is calculated by calling another function "magneticDistance", with as input parameters the current context C, the volume V [v], and the context characteristics K.
  • the operation 906 it is determined whether the distance determined in the operation 905 is smaller than the value of the variable minDistance. If so, the next operation is the operation 907, in which the minDistance variable is set equal to the distance value, and the value of "Transformation” is set equal to the transformation associated with the volume V [v].
  • The next operation is the operation 908, in which the variable v is incremented; the method then returns to operation 904.
  • When the answer in the operation 904 is no, the next operation is an operation 909, in which the value of the variable "Transformation" is returned as a result.
  • FIG. 9B illustrates an example of operations implementing the distance calculation of operation 905 of FIG. 9A.
  • variable "distance" is set to 0, and a variable i is set to 0.
  • In operation 912, it is determined whether i is less than n. If yes, the next operation is operation 913, in which the distance is accumulated by:
  • distance = distance + (C_Ki ≥ K_Vi,min && C_Ki ≤ K_Vi,max ? 0 : π_Ki · min(|C_Ki − K_Vi,min|, |C_Ki − K_Vi,max|)²), after which the variable i is incremented and the method returns to operation 912.
  • If i is not less than n, the next operation is operation 915, in which the square root of the value of the distance is returned as a result.
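  • Taken together, FIGS. 9A and 9B translate almost line by line into code. The sketch below reuses the illustrative VOLUMES structure above; initializing "minDistance" to infinity, the zero contribution inside a range, and the weighted squared terms all follow the flowcharts.

```python
import math

def magnetic_distance(context: dict, volume: dict, weights: dict) -> float:
    """FIG. 9B: weighted distance between the current context C and a volume."""
    distance = 0.0
    for key, (v_min, v_max) in volume["ranges"].items():
        c = context[key]
        if not (v_min <= c <= v_max):          # zero contribution inside the range
            d = min(abs(c - v_min), abs(c - v_max))
            distance += weights.get(key, 1.0) * d ** 2
    return math.sqrt(distance)                 # operation 915

def get_transformation(context: dict, volumes: list, weights: dict):
    """FIG. 9A: return the transformation of the nearest magnetic volume."""
    min_distance = math.inf                    # "minDistance" set to infinity
    transformation = None
    for volume in volumes:                     # loop of operations 904 to 908
        d = magnetic_distance(context, volume, weights)   # operation 905
        if d < min_distance:                   # operation 906
            min_distance, transformation = d, volume["transformation"]  # operation 907
    return transformation                      # operation 909

# Small touch screen, bright room, expert user; display size weighted double
context = {"display_surface": 30, "ambient_light": 80, "user_expertise": 90}
print(get_transformation(context, VOLUMES, {"display_surface": 2.0}))
# -> "compact_expert_layout"
```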
  • Fig. 10A is a flowchart illustrating an example of operations in a method of adapting a UI based on changes in the context of use occurring while the UI is displayed on a user device. For example, although the characteristics of the user device are unlikely to change, some characteristics of the environment such as the ambient light level or background noise might change while the UI is displayed. The method of Figure 10A allows the UI to be adapted based on such a change.
  • the current context is monitored, and in the case of change, the following operation is the operation 1002.
  • the operation 1002 is also for example carried out if in an operation 1003 a user requests to navigate to a UI which must be generated, or if in an operation 1004 a faulty operation is detected.
  • In the operation 1002, it is determined whether there is a pre-existing UI adapted to the new current context. For example, in some embodiments, a limited number of the most common UIs may be generated in advance and stored in memory. In addition or instead, following the generation of a given UI, the code can be maintained for a certain time in case another user device has the same context and requests the same UI.
  • the check in operation 1002 includes, for example, a distance calculation similar to that described in relation with FIGS. 9A and 9B above, but in which a UI, rather than a transformation, is associated with each volume. If in the operation 1002 a UI is found, the next operation is the operation 1005, in which this UI is taken as the code to execute, after compilation if necessary. The UI, called magnetic UI in FIG. 10A, is thus transmitted to the user device and executed by it.
  • Otherwise, in an operation 1006, it is determined whether there is a similar UI, in other words a UI generated for a context similar to the new current context, which can be adapted on the basis of the current context.
  • a similar context is for example defined as a context sharing the same value of at least one of its characteristics with the new current context.
  • If in operation 1006 there is no similar UI, the UI is generated in an operation 1007 using, for example, the method described in relation with FIGS. 5A, 5B and 5C, and the process then proceeds to the operation 1005.
  • If there is a similar UI, it is for example adapted on the basis of the new current context. This implies, for example, determining the transformations used for the generation of the similar UI, and determining that one or more of those transformations should be replaced by one or more new transformations in the light of the new current context. For example, based on the differences between the context associated with the similar UI and the new context, it could be determined that one or more of the transformations selected for the similar UI are no longer well adapted and need to be replaced. New selections are for example made for only these transformations.
  • the selected transformations can then be used to generate an adapted UI, and then the process proceeds to an operation 1014 of Figure 10C described in more detail below.
  • An advantage of the adaptation of a similar UI to the new current context is that only a moderate change in the UI results, and the selection of at least one of the previously selected transformations for the similar UI need not be repeated.
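  • A possible sketch of this partial re-selection, assuming each UI portion records the characteristics it depends on (a dependency map that is an assumption of this illustration), reusing get_transformation and VOLUMES from the previous sketches:

```python
def adapt_similar_ui(old_context: dict, new_context: dict,
                     selected: dict, deps: dict, volumes: list, weights: dict) -> dict:
    """Re-select transformations only for the portions affected by the change.

    `selected` maps portion name -> previously chosen transformation;
    `deps` maps portion name -> characteristics that portion depends on.
    """
    changed = {k for k in new_context if new_context[k] != old_context.get(k)}
    for portion, characteristics in deps.items():
        if changed & set(characteristics):   # portion depends on a changed value
            selected[portion] = get_transformation(new_context, volumes, weights)
    return selected

# Only the identification window is re-selected when the light level changes
old = {"display_surface": 30, "ambient_light": 35, "user_expertise": 90}
new = {"display_surface": 30, "ambient_light": 90, "user_expertise": 90}
selected = adapt_similar_ui(old, new,
                            selected={"window_418": "compact_expert_layout",
                                      "window_402": "compact_expert_layout"},
                            deps={"window_418": ["ambient_light"],
                                  "window_402": ["display_surface"]},
                            volumes=VOLUMES, weights={})
```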
  • in some embodiments, the adaptation of the UI according to the context in the operations 1002, 1005 and 1006 is performed globally on the UI, at the server or on the user device.
  • the adaptation of the UI code based on the modified characteristics is a global adaptation of the UI code including the regeneration of the entire code of the UI.
  • alternatively, the adaptation of the UI according to the context in the operations 1002, 1005 and 1006 may be performed locally, in other words on only a portion of the UI, by regenerating at the server only a part of the code of the UI (for example when parts of the UI can no longer be displayed) or by modifying locally, at the server or on the user device, the code of the UI (for example when the color of the text is set to red).
  • the operation 1001 involves the identification, by the processor 224, of one or more portions of the UI code to be adapted based on the modified context.
  • In the operations 1002, 1005 and 1006, only said one or more identified portions of the code of the UI are then adapted. Local adaptation is advantageous since it can be performed faster than a global adaptation of the UI, and / or involves reduced processing resources.
  • Fig. 10B is a flowchart illustrating the context monitoring operation 1001 of Fig. 10A in more detail according to an exemplary embodiment.
  • the usage context C is captured from the user device.
  • the context of use C is for example defined by n context values $\{C_i\}_n$.
  • In an operation 1028, it is checked whether the use context C has changed with respect to the previously stored value. If it has not, the process returns to the operation 1027. If on the other hand the context has changed, the next operation is the operation 1029.
  • In operation 1029, it is determined whether the change in context requires an adaptation of the UI. For example, a calculation is made to determine whether the transformations to be selected based on the context values are identical to the transformations selected when the UI was last generated. If they are identical, the method returns to operation 1027. On the other hand, if one or more of the transformations have changed, an adaptation is considered to be required, and the process proceeds to operation 1002 of FIG. 10A.
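  • The monitoring loop of FIG. 10B could be sketched as follows; the polling period and the capture callback are assumptions, and get_transformation is the selection function sketched earlier. An adaptation is only triggered when the transformation choice actually changes, as in operation 1029.

```python
import time

def monitor_context(capture, volumes, weights, on_adaptation_needed, period_s=1.0):
    """FIG. 10B: watch the usage context and trigger adaptation on change."""
    previous = capture()                       # initial usage context C = {Ci}n
    choice = get_transformation(previous, volumes, weights)
    while True:
        time.sleep(period_s)
        current = capture()                    # operation 1027: capture context
        if current == previous:                # operation 1028: unchanged?
            continue
        previous = current
        new_choice = get_transformation(current, volumes, weights)
        if new_choice != choice:               # operation 1029: selection changed?
            choice = new_choice
            on_adaptation_needed(current)      # proceed to operation 1002
```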
  • Fig. 10C is a flowchart illustrating in more detail operation 1005 of executing the UI, according to an exemplary embodiment.
  • In an operation 1010, it is determined whether the code of the UI is executable. If the answer is no, in an operation 1011, the system identifies the programming language used in the UI, and then, in an operation 1012, it is determined whether or not the language requires compilation. If the answer is yes, the file is compiled in an operation 1013.
  • In an operation 1014, the final UI is obtained, in other words the code to execute on the user device.
  • While Figure 10C is based on the case of a global adaptation of the UI, a local adaptation could be achieved as described previously, either at the server or on the user device.
  • the operations 1010 to 1014 are for example carried out for each elementary component of the UI that must be adapted.
  • Fig. 10D is a flowchart illustrating the operation 1004 of Fig. 10A in more detail according to an exemplary embodiment.
  • the example of FIG. 10D corresponds for example to a centralized approach implemented by the UI generation module 222 of FIG. 2.
  • In an operation 1015, conditions during the execution of a current UI are for example examined in order to determine, in an operation 1016, whether there are unfulfilled conditions.
  • the unsatisfied conditions correspond, for example, to one or more missing input values to be displayed by the UI. If the answer is no, the process returns to step 1015.
  • If there are unfulfilled conditions, in an operation 1017, one of a plurality of reactions is selected, implemented for example by one or more of the operations 1018, 1019 and 1020, before the method proceeds to the operation 1002.
  • Fig. 10E is also a flowchart illustrating in more detail operation 1004 of Fig. 10A, and elements similar to those of Fig. 10D have been shown with the same reference numerals and will not be described again in detail.
  • Figure 10E corresponds to a distributed approach, wherein at least a portion of the method is performed at user devices. For example, operations 1015 and 1016 are performed at the user device.
  • the next operation is carried out by the user device, which determines the reaction: for example, changing the style of the UI in an operation 1022, for example by displaying a form of warning sign on the display of the user device, or transmitting a report to the server 208 in an operation 1023.
  • the server for example implements operation 1017 and implements one or more of the operations 1018, 1019 and 1020 before performing the operation 1002.
  • An advantage of the embodiments described herein is that, by generating a user interface adapted to characteristics associated with a user, one or more user devices, and / or the environment, the interface may be adapted to specific usage contexts, without the need to create in advance, maintain and store all possible UI versions. Indeed, as previously described, the number of versions increases exponentially with the number of different user, device and / or environment characteristics to consider, which quickly makes it unfeasible to create and store each version.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
EP16750941.3A 2015-07-07 2016-07-07 System und verfahren zur anpassung einer benutzerschnittstelle Withdrawn EP3320419A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1556433A FR3038744B1 (fr) 2015-07-07 2015-07-07 Systeme et procede pour l'adaptation magnetique d'une interface d'utilisateur
PCT/FR2016/051742 WO2017006066A1 (fr) 2015-07-07 2016-07-07 Systeme et procede pour l'adaptation d'une interface d'utilisateur

Publications (1)

Publication Number Publication Date
EP3320419A1 2018-05-16

Family

ID=54848665

Family Applications (1)

Application Number Title Priority Date Filing Date
EP16750941.3A Withdrawn EP3320419A1 (de) 2015-07-07 2016-07-07 System und verfahren zur anpassung einer benutzerschnittstelle

Country Status (3)

Country Link
EP (1) EP3320419A1 (de)
FR (1) FR3038744B1 (de)
WO (1) WO2017006066A1 (de)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7412658B2 (en) * 2002-11-14 2008-08-12 Sap Ag Modeling system for graphic user interface
US8276069B2 (en) * 2007-03-28 2012-09-25 Honeywell International Inc. Method and system for automatically generating an adaptive user interface for a physical environment
FR2985336B1 (fr) * 2011-12-29 2016-08-26 Thales Sa Procede de simulation de l'interface homme-machine d'un appareil

Also Published As

Publication number Publication date
WO2017006066A1 (fr) 2017-01-12
FR3038744A1 (de) 2017-01-13
FR3038744B1 (fr) 2017-08-11

Similar Documents

Publication Publication Date Title
US10956128B2 (en) Application with embedded workflow designer
US11243747B2 (en) Application digital content control using an embedded machine learning module
CN102016905B (zh) 智能自动完成
US8966405B2 (en) Method and system for providing user interface representing organization hierarchy
CN102483698B (zh) 动态web应用的客户端层验证
US20120166522A1 (en) Supporting intelligent user interface interactions
US20130212534A1 (en) Expanding thumbnail with metadata overlay
US10656907B2 (en) Translation of natural language into user interface actions
KR102518172B1 (ko) 컴퓨팅 시스템에서 사용자 어시스턴스를 제공하기 위한 장치 및 방법
CN102124460B (zh) 用于网站地图的标准模式和用户界面
US10997359B2 (en) Real-time cognitive modifying a mark-up language document
US10430256B2 (en) Data engine
US10547582B1 (en) Methods and systems for enhancing viewer engagement with content portions
US20080288865A1 (en) Application with in-context video assistance
US20210027643A1 (en) Augmented reality tutorial generation
US10997963B1 (en) Voice based interaction based on context-based directives
Rodrigues et al. New trends on ubiquitous mobile multimedia applications
US20190250999A1 (en) Method and device for storing and restoring a navigation context
US20230247102A1 (en) Addressing previous client device technology in online platforms
WO2017006066A1 (fr) Systeme et procede pour l'adaptation d'une interface d'utilisateur
US20140237368A1 (en) Proxying non-interactive controls to enable narration
JP2023540479A (ja) データ解析を使用した音声認識及びインターレース方式のオーディオ入力の拡張
US20150378530A1 (en) Command surface drill-in control
US20200174573A1 (en) Computer system gesture-based graphical user interface control
Raman Specialized browsers

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20180116

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20190619

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20191030