US20230077200A1 - System and Method for an Intelligent Drag and Drop Designer - Google Patents
- Publication number: US20230077200A1 (U.S. application Ser. No. 17/985,110)
- Authority
- US
- United States
- Prior art keywords
- placeholder
- selection
- probability
- action
- movable action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0486—Drag-and-drop
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/34—Graphical or visual programming
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0633—Workflow analysis
Definitions
- FIG. 1 is a functional diagram generally illustrating an embodiment of a graphical user interface for an intelligent drag and drop designer.
- FIGS. 2 A- 2 D disclose a functional flow diagram generally illustrating an embodiment of designing a workflow with an intelligent drag and drop designer.
- FIG. 3 is a functional block diagram generally illustrating an embodiment of a method for using an intelligent drag and drop designer to design a workflow or form.
- FIG. 4 generally illustrates an embodiment of a graphical user interface of an intelligent drag and drop designer for designing a workflow.
- FIG. 5 generally illustrates an embodiment of a graphical user interface of an intelligent drag and drop designer for designing a form.
- FIGS. 6 A- 6 C disclose a functional flow diagram generally illustrating an embodiment of selecting from a branching action’s two or more placeholders when designing a workflow with an intelligent drag and drop designer.
- FIG. 7 is a functional block diagram generally illustrating an embodiment of a network system for an intelligent drag and drop designer.
- FIG. 8 is a functional block diagram generally illustrating an embodiment of an electronic device system for an intelligent drag and drop designer.
- The Designer introduces intelligence into the graphical user interface of a workflow or form designer, making the experience of designing workflows and forms smarter, easier, and quicker.
- The Designer comprises built-in intelligence that allows it to determine which placeholder is likely to be used or preferred and then visually emphasizes that placeholder to the user.
- One of its primary benefits is the removal of the cumbersome need for, and difficulties associated with, having to precisely move workflow/form actions directly onto a placeholder on a design canvas. Instead, the Designer enables the user to select the action, move it onto the design canvas, and release it, upon which the Designer automatically drops the action into the emphasized placeholder.
- The Designer also allows the user to emphasize alternative placeholders for selection by making only slight movements with the selected action. When released, the Designer automatically drops the action into the emphasized placeholder selected by the user.
- FIG. 1 is a functional diagram generally illustrating an embodiment of a graphical user interface for an intelligent drag and drop designer.
- A Designer graphical user interface (“GUI”) for designing workflows may comprise a design canvas 105, a cursor 135, and an action toolbox 110 with one or more movable actions 115.
- The design canvas 105 may comprise a workflow design, comprised of a start action 120, a placeholder 125, and a finish action 130.
- A design canvas refers to the area on a Designer GUI where a user can design a form or workflow.
- A movable action refers to an entry that can be inputted into the design of a workflow or a form.
- Examples of workflow actions are: an action to branch a decision tree by conditions, an action to branch by values, and actions that include a condition precedent, running parallel paths, and calling a workflow.
- Examples of form actions are: an action to insert barcodes, an action to insert multiple choice options, and actions to insert timestamps, email options, images, and labels.
- A placeholder refers to a component that holds, denotes, or reserves a place into which a movable action may be inserted.
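The definitions above can be collected into a minimal data model. The following sketch is illustrative only; the class and field names (`MovableAction`, `Placeholder`, `DesignCanvas`) are assumptions made for this example and are not terms drawn from the claims.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MovableAction:
    """An entry that can be inputted into a workflow or form design."""
    name: str
    x: float = 0.0
    y: float = 0.0

@dataclass
class Placeholder:
    """Holds, denotes, or reserves a place for a movable action."""
    x: float
    y: float
    occupant: Optional[MovableAction] = None   # filled when an action is dropped in

@dataclass
class DesignCanvas:
    """The area on the Designer GUI where a user designs a form or workflow."""
    actions: List[MovableAction] = field(default_factory=list)
    placeholders: List[Placeholder] = field(default_factory=list)

canvas = DesignCanvas(placeholders=[Placeholder(x=100, y=200)])
canvas.actions.append(MovableAction(name="Insert Barcode"))
```

A placeholder's `occupant` field starts empty and would be filled when the Designer drops a movable action into it.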
- FIGS. 2 A- 2 D disclose a functional flow diagram generally illustrating an embodiment of designing a workflow with an intelligent drag and drop designer.
- In designing a workflow, the Designer enables a user to select a movable action 205 with a cursor 210.
- Upon the action 205 being selected by the cursor 210 and moved slightly, the Designer generates and displays a visual aid 220 around a placeholder 215.
- As the selected action 205 is moved closer to the placeholder 215, the Designer causes the visual aid 220 to increase in emphasis.
- The Designer does not require the user to drag the action 205 directly into the position of the placeholder 215.
- Instead, the user may merely release the action 205, whereupon the action 205 is automatically placed into the position of the placeholder 215 that is currently being visually emphasized.
- FIG. 2 C shows the action 205 after it has been released and automatically placed in the position of the placeholder 215.
- Once an action is placed, the Designer may generate a new placeholder 225.
- FIG. 2 D shows the Designer enabling the user to select another action 230, whereupon a visual aid 235 is generated and displayed around the new placeholder 225.
- The Designer is compatible with any number of electronic devices and enables an action to be selected and released by multiple methods. Such methods may depend on the electronic device utilizing the Designer. For example, an action may be selected by a single holding click of a mouse and released by the release of the held click. Or, an action may be simultaneously selected and released by a double click of a mouse. The ability to select, release, and drop by a single double click is possible only because the Designer provides a designated placeholder for the action to be automatically inserted into. An action may also be selected by a user selecting the action on a touch screen with only the user’s fingers, or simultaneously selected and released by a user’s swipe on a touch screen.
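The selection and release gestures described above might be normalized across input devices as in the following sketch. The event names and the `(selected, released)` state pair are assumptions made for illustration, not part of the disclosed system.

```python
def interpret(event: str, selected: bool):
    """Return (selected, released) after one input event.

    A held mouse click (or touch) selects; letting go releases. A double
    click or a touch-screen swipe selects and releases in a single gesture,
    which is workable only because the Designer has already designated a
    target placeholder for automatic insertion.
    """
    if event in ("mouse_down", "touch_down"):
        return True, False
    if event in ("mouse_up", "touch_up"):
        return False, selected      # a release only counts if something was selected
    if event in ("double_click", "swipe"):
        return False, True          # select and release in one gesture
    return selected, False          # ignore unrelated events
```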
- A visual aid surrounding a placeholder refers to a method by which visible emphasis is given to a placeholder.
- Visual aids emphasizing a placeholder serve to make the placeholder more visually prominent to the user when designing the form or workflow.
- Embodiments of a visual aid emphasis comprise a contrasting color that highlights the placeholder, circular/rectangular lines around the placeholder, and the placeholder itself being made bigger or bolder.
- The visual aid emphasis portrays the feeling of the placeholder pulling or attracting the selected action, or of an increased sensitivity by the placeholder. Being “highlighted” or “magnetized” are other descriptions used to reference the dynamic relationship between a selected action and the visual aid emphasis.
- Embodiments of a visual aid increasing in emphasis/prominence comprise: a contrasting color increasing in darkness, brightness, or size; circular/rectangular lines increasing in thickness, quantity, or size; and the placeholder continually increasing in size.
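One way to realize these embodiments is to map a single emphasis level to several visual properties at once. The property names and value ranges below are assumptions for the sketch, not values from the disclosure.

```python
def emphasis_style(level: float) -> dict:
    """Map an emphasis level in [0, 1] to illustrative visual properties."""
    level = max(0.0, min(1.0, level))                    # clamp out-of-range levels
    return {
        "highlight_alpha": round(0.2 + 0.8 * level, 2),  # contrasting color darkens/brightens
        "border_px": 1 + round(4 * level),               # surrounding lines thicken
        "scale": round(1.0 + 0.25 * level, 2),           # placeholder itself grows
    }
```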
- FIG. 3 is a functional block diagram generally illustrating an embodiment of a method for using an intelligent drag and drop designer to design a workflow or form.
- The method for designing either a workflow or form begins when the Designer identifies or detects that an action is selected 310.
- When the action is selected 310, the Designer generates and displays a visual emphasizing aid around a single placeholder 315.
- When the Designer detects that the selected action 310 is released 320, the Designer automatically places the action 335 onto the placeholder that was visually emphasized.
- Alternatively, the Designer may detect that the user is dragging 325 the selected action closer to the placeholder, whereupon the Designer causes the visual aid around the placeholder to increase in emphasis 330.
- Upon release, the Designer automatically places 335 the action into the selected placeholder.
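The FIG. 3 flow (select 310, emphasize 315, drag 325 raising emphasis 330, release 320, auto-place 335) can be sketched as one interaction loop. The event encoding and the distance falloff below are assumptions for illustration, not the claimed implementation.

```python
from math import hypot

def run_interaction(events, placeholder_pos):
    """Walk one select/drag/release interaction; return (emphasis, placed)."""
    emphasis, placed = 0.0, False
    for kind, payload in events:
        if kind == "select":                       # action selected (310)
            emphasis = 0.2                         # visual aid appears (315)
        elif kind == "drag":                       # user drags the action (325)
            x, y = payload
            d = hypot(placeholder_pos[0] - x, placeholder_pos[1] - y)
            emphasis = 1.0 / (1.0 + d / 100.0)     # closer => stronger emphasis (330)
        elif kind == "release":                    # action released (320)
            placed = True                          # auto-place into emphasized placeholder (335)
            break
    return emphasis, placed
```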
- FIG. 4 generally illustrates an embodiment of a graphical user interface of an intelligent drag and drop designer for designing a workflow.
- A Designer may comprise a GUI 400 that may be used to create a workflow.
- The workflow GUI 400 may comprise a toolbox 405 with movable workflow actions 410, a design canvas 415, a start action 420, a placeholder 425, a stop action 430, and a cursor 435.
- The workflow GUI 400 enables a user to select a workflow action 410 to place onto the placeholder 425.
- When a workflow action 410 is selected, a visual aid appears around the placeholder 425, increasing in emphasis as the workflow action 410 is brought closer to the placeholder 425.
- Once a workflow action 410 is placed onto the placeholder 425, a new placeholder appears.
- FIG. 5 generally illustrates an embodiment of a graphical user interface of an intelligent drag and drop designer for designing a form.
- A Designer may comprise a GUI 500 that may be used to create a form.
- The form GUI 500 may comprise a toolbox 505 with movable form actions 510, a design canvas 515, and a cursor 520.
- The form GUI 500 allows a user to select a form action 510 to place onto the design canvas 515.
- When the Designer detects that a form action 510 is selected, it generates and displays a visual aid around the location for insertion of the form action 510, increasing in emphasis as the form action 510 is brought closer to the location.
- The Designer determines which placeholder should be emphasized and emphasizes a new location when it detects that a new form action 510 is selected.
- When a workflow is first designed, the Designer displays only one placeholder. As a result of it being the only placeholder displayed, it will be the only placeholder emphasized when a user selects an action. After an action is dropped into the first placeholder, the Designer will automatically generate and display a new placeholder directly after the previously added action.
- Where multiple branches are present, the Designer may, as a default, emphasize the placeholder in the first branch.
- The Designer may expand and collapse in size the target zones within the workflow/form design by determining the proximity of the selected action to the target zones.
- The Designer allows for pre-configuration of settings such that specified placeholders are emphasized by default.
- The Designer may also distinguish and differentiate between the presence of multiple placeholders in a workflow or form design and determine which placeholder the user is intending to select.
- The Designer may, by default, automatically visually emphasize a particular placeholder, such as the placeholder closest to the toolbox. For example, if the toolbox is on the left side of the GUI, the farthest-left placeholder is automatically visually emphasized when a selected action is moved onto the canvas.
- The Designer is sufficiently intelligent to sense the direction of the action’s movement in relation to its original placement on the canvas and to determine which placeholder is intended to be selected by the direction in which the action is moved.
- If the action is moved to the right in relation to its original placement on the canvas, a placeholder to the right of the previously emphasized placeholder is visually emphasized. If the action is moved in the down direction in relation to its original placement on the canvas, a placeholder below the previously emphasized placeholder is visually emphasized.
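The direction heuristic just described can be sketched as follows: compare the action’s current position with its original placement, then pick the placeholder lying in that direction relative to the previously emphasized one. All names and the tie-breaking rule are assumptions for illustration.

```python
def pick_by_direction(origin, current, previous_idx, placeholders):
    """Return the index of the placeholder to emphasize next.

    placeholders is a list of (x, y) positions; previous_idx indexes the
    currently emphasized placeholder. Screen y grows downward.
    """
    dx, dy = current[0] - origin[0], current[1] - origin[1]
    px, py = placeholders[previous_idx]
    if abs(dx) >= abs(dy):                         # predominantly horizontal move
        candidates = [i for i, (x, _) in enumerate(placeholders)
                      if x != px and (x > px) == (dx > 0)]
        dist = lambda i: abs(placeholders[i][0] - px)
    else:                                          # predominantly vertical move
        candidates = [i for i, (_, y) in enumerate(placeholders)
                      if y != py and (y > py) == (dy > 0)]
        dist = lambda i: abs(placeholders[i][1] - py)
    # keep the current placeholder when nothing lies in the movement direction
    return min(candidates, key=dist) if candidates else previous_idx
```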
- To determine which placeholder the user is intending to select, the Designer identifies, as a selected action is moved across a canvas, the location of the selected action and its proximity to the location of all existing placeholders.
- The Designer determines the distance between the selected action and each placeholder, compares the distances, ascertains which placeholder is closest in proximity to the action, and generates a visual emphasis for the closest placeholder.
- As the action continues to move, the Designer reevaluates the proximity of all placeholders and continues to generate a visual emphasis for the closest placeholder, changing visual emphasis as needed from one placeholder to another. When the Designer visually emphasizes a placeholder, it does not visually emphasize other placeholders.
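The proximity comparison described above reduces to measuring the distance from the selected action to every placeholder and emphasizing only the nearest one. A sketch, with positions represented as plain (x, y) pairs (an assumption for illustration):

```python
from math import hypot

def closest_placeholder(action_pos, placeholders):
    """Return the index of the placeholder nearest the action's position."""
    return min(range(len(placeholders)),
               key=lambda i: hypot(placeholders[i][0] - action_pos[0],
                                   placeholders[i][1] - action_pos[1]))

def emphasis_flags(action_pos, placeholders):
    """Re-evaluated on every move; exactly one placeholder is emphasized."""
    target = closest_placeholder(action_pos, placeholders)
    return [i == target for i in range(len(placeholders))]
```

Calling `emphasis_flags` on each drag event reproduces the re-evaluation behavior: emphasis shifts from one placeholder to another as the action moves, and never rests on two placeholders at once.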
- The Designer may comprise functionalities that incorporate aspects of all the above embodiments, wherein the Designer may sense both the proximity of placeholders in relation to an action and the direction in which the action is moved.
- FIGS. 6 A- 6 C disclose a functional flow diagram generally illustrating an embodiment of selecting from a branching action’s two or more placeholders when designing a workflow with an intelligent drag and drop designer.
- The Designer’s design canvas 605 displays a workflow design with a start action 610, a stop action 615, an inputted branching action 620 with a first placeholder 625 and a second placeholder 630, and a separate, third placeholder 635.
- The Designer detects that the user has dragged the action 640 onto the canvas 605.
- The Designer recognizes the presence of the action 640 on the canvas 605, determines that the first placeholder 625 is closest, and visually emphasizes 645 the first placeholder 625.
- While the first placeholder 625 is visually emphasized 645, the second placeholder 630 and the third placeholder 635 are not visually emphasized.
- When the user moves the selected action 640, the Designer determines that the movement of the selected action 640 is in the direction of the second placeholder 630 and generates a visual emphasis 650 around the second placeholder 630.
- Likewise, when the action 640 is moved in the direction of the third placeholder 635, the third placeholder 635 is visually emphasized 655.
- Additional intelligence by the Designer may comprise machine learning capabilities.
- Embodiments of machine learning capabilities comprise the Designer taking into account probabilities for two or more displayed placeholders and emphasizing the placeholder with the highest probability. Such probabilities may be based on a form or workflow’s subject matter, substantive data, pattern of previous selections, appropriate/likely grouping, and prior user history.
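Such a probability-based choice might look like the following sketch, where each displayed placeholder receives a score from a few of the signals listed above. The feature names and weights are invented for illustration; a real system would learn them from the described data rather than hard-code them.

```python
def placeholder_probabilities(features):
    """features: one dict of signals in [0, 1] per displayed placeholder."""
    weights = {"prior_user_history": 0.4,     # hypothetical learned weights
               "previous_selections": 0.35,
               "grouping_fit": 0.25}
    scores = [sum(weights[k] * f.get(k, 0.0) for k in weights) for f in features]
    total = sum(scores) or 1.0                # avoid division by zero
    return [s / total for s in scores]

def emphasized_index(features):
    """Emphasize the placeholder with the highest probability."""
    probs = placeholder_probabilities(features)
    return max(range(len(probs)), key=lambda i: probs[i])
```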
- Machine learning capabilities may also comprise a Designer that can identify an action inputted incorrectly and generate appropriate notifications.
- FIG. 7 is a functional block diagram generally illustrating an embodiment of a network system for an intelligent drag and drop designer.
- A network system may comprise a Designer server 710 accessible over a local area network or a wide area network 720, such as the Internet.
- The Designer server 710 may enable third party servers 730, users 740, and electronic devices 750 to connect to a Designer GUI 760.
- The Designer server 710 may also host additional design GUIs 770, each accessible to their respective owners and other users.
- The Designer server 710 is remotely accessible by a number of user computing devices 750, including, for example, laptops, smartphones, computers, tablets, and other computing devices that are able to access the local area network or the wide area network where the Designer server 710 resides.
- Each user electronic device 750 connects with the Designer server 710 to interact with the Designer GUI 760 and the additional Designer GUIs 770.
- Each additional Designer GUI 770 may employ a number of connectors to interact with third party servers 730 and their data, services, or applications.
- FIG. 8 is a functional block diagram generally illustrating an embodiment of an electronic device system for an intelligent drag and drop designer.
- The electronic device 810 may be coupled to a Designer server 710 via a network interface 820.
- The electronic device 810 generally comprises a memory 830, a processor 840, a graphics module 850, and an application programming interface 860.
- The electronic device 810 is not limited to any particular configuration or system.
- Embodiments of the systems and methods are described with reference to schematic diagrams, block diagrams, and flowchart illustrations of methods, systems, apparatuses and computer program products. It will be understood that each block of the block diagrams, schematic diagrams, and flowchart illustrations, and combinations of blocks in the block diagrams, schematic diagrams, and flowchart illustrations, respectively, may be implemented by computer program instructions. These computer program instructions may be loaded onto a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create a means for implementing the functions specified in the flowchart block or blocks.
- Other embodiments may comprise overlay features demonstrating relationships between one or more steps, active users, previous users, missing steps, errors in the workflow, analytical data from use of the workflow, future use of the workflow, and other data related to the workflow, users, or the relationship between the workflow and users.
- Non-transitory computer readable media may include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips), optical disks (e.g., compact disk (CD), digital versatile disk (DVD)), smart cards, and flash memory devices (e.g., card, stick).
Abstract
Embodiments are provided for a design tool with built-in intelligence that automatically senses the location of one or more placeholders for selectable actions when designing a form or workflow. The design tool intuitively determines which placeholder should logically be used next; visually emphasizes, when an action is selected, the location and designation of the placeholder predicted to be used next; visually emphasizes alternative placeholders relevant to the position of the selected action by detecting minor changes in the position of the selected action; and automatically positions the selected action into the predicted or selected placeholder when the action is released.
Description
- This application claims benefit to U.S. Non-Provisional Application No. 17/376,007, filed on Jul. 14, 2021, entitled “Systems, Methods, And Devices For An Intelligent Drag And Drop Designer,” which claims benefit to U.S. Non-Provisional Application No. 16/802,390 filed on Feb. 26, 2020, entitled “System and Method for an Intelligent Drag and Drop Designer,” which in turn claims priority to U.S. Provisional Application No. 62/810,460, filed on Feb. 26, 2019, entitled “System and Method for an Intelligent Drag and Drop Designer,” the contents all of which are incorporated by reference herein as though set forth in their entirety, and to which priority and benefit are claimed.
- The present disclosure relates generally to the field of customized workflow process technology. More specifically, the present disclosure relates to new systems and methods for an intelligent designer that improves computer functionality and use for creating and designing workflow processes and electronic forms, making a user’s design process more intuitive, quicker, and easier.
- Dynamic forms and workflows can be created on low-code development platforms or via full code implementations, enabling a user to automate many processes in a way that dramatically enhances productivity and efficiency in any industry. Forms are made up of multiple actions positioned in a specific order to increase efficiency and effectiveness of use. Similarly, workflows are made up of multiple actions sequenced in a specific order to bring about the desired solution. Designing a customized form or workflow often requires familiarity with the specific task or purpose the form/workflow is addressing as well as technical knowledge and experience in designing the form/workflow. At times, a workflow or form can become large and/or intricate, creating the potential for incorrect placement or inadvertent omission of actions. As a result, the design process can be tedious and challenging work, requiring technical experience and careful attention to the placement of individual actions within the overall form/workflow. This potential for error is increased because current design tools are rudimentary in the features they offer, cumbersome when used, or lack built-in intuition that enables ease of use. Specifically, design canvasses that require a user to drag and drop workflow or form actions with pinpoint accuracy pose inherent difficulties in the design process, leading to user frustration and errors.
- Thus, what is needed are systems and methods for a design tool with built-in intelligence that automatically senses the location of one or more placeholders for selectable actions when designing a form or workflow; intuitively determines which placeholder should logically be used next; visually emphasizes, when an action is selected, the location and designation of the placeholder predicted to be used next; visually emphasizes alternative placeholders relevant to the position of the selected action by detecting minor changes in the position of the selected action; and automatically positions the selected action into the predicted or selected placeholder when the action is released. Such systems and methods will also improve communication technology between the networks and servers of the separate users dependent on the procedure for designing the form or workflow, allowing for increased understanding, implementation, and engagement across organizational boundaries.
- The following presents a simplified overview of example embodiments in order to provide a basic understanding of some aspects of the invention. This overview is not an extensive overview of the example embodiments. It is not intended to identify key or critical elements of the example embodiments or delineate the scope of the appended claims. Its sole purpose is to present some concepts of the example embodiments in a simplified form as a prelude to the more detailed description that is presented herein below. It is to be understood that both the following general description and the following detailed description are exemplary and explanatory only and are not restrictive.
- In accordance with the embodiments disclosed herein, the present disclosure is related to systems and methods for an intelligent workflow and form designer that provides a movable action; displays one or more placeholders for insertion of the movable action; generates and displays a visual emphasizing aid around one of the placeholders upon selection of the movable action; increases the emphasis of the visual aid in accordance with the movement of the selected action, changing emphasis from one placeholder to another in accordance with the proximity of the selected action to the one or more placeholders; and inserts the selected movable action into the emphasized placeholder upon release of the selected action.
- In one embodiment, a system for an intelligent designer comprises a computer system having a hardware processor and a physical memory using executable instructions that, as a result of being executed by the hardware processor, cause the computer system to: generate and display, via the hardware processor, a graphical user interface comprising a design canvas, wherein the design canvas comprises at least one movable action and one or more placeholders for the at least one movable action; identify, via the hardware processor, when a movable action has been selected; in response to identifying that a movable action has been selected, generate and display a visual emphasizing aid surrounding a single placeholder; determine when the selected movable action has been released; and in response to determining that the selected movable action has been released, configure placement of the selected movable action onto the position of the single placeholder surrounded by the visual emphasizing aid.
- In another embodiment, the system, in response to identifying that a movable action has been selected, may determine, via the hardware processor, the location of the selected movable action; determine the location of the single placeholder; calculate the distance between the selected movable action and the single placeholder; increase the emphasis of the visual emphasizing aid when the selected movable action increases in proximity to the single placeholder, wherein the emphasis of the visual emphasizing aid continues to increase as the selected movable action continues to increase in proximity to the single placeholder; and decrease the emphasis of the visual emphasizing aid when the selected movable action decreases in proximity to the single placeholder, wherein the emphasis of the visual emphasizing aid continues to decrease as the selected movable action continues to decrease in proximity to the single placeholder.
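The proximity-driven emphasis described above can be illustrated as a simple distance-to-intensity mapping. The following is an illustrative sketch only; the function name, coordinate representation, and linear falloff are assumptions for explanation and are not the claimed implementation:

```python
import math

def emphasis_level(action_pos, placeholder_pos, max_distance=400.0):
    """Map the distance between a dragged action and a placeholder
    to an emphasis level in [0.0, 1.0]; a closer action yields a
    stronger emphasis, a farther action a weaker one."""
    distance = math.hypot(action_pos[0] - placeholder_pos[0],
                          action_pos[1] - placeholder_pos[1])
    # Linear falloff: full emphasis at distance 0, none at max_distance.
    return max(0.0, 1.0 - distance / max_distance)
```

In such a sketch, the rendering layer would read the returned level on every drag event and scale the visual aid (color intensity, line thickness, or size) accordingly.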
- Still other advantages, embodiments, and features of the subject disclosure—including devices and methods—will become readily apparent to those of ordinary skill in the art from the following description wherein there is shown and described a preferred embodiment of the present disclosure, simply by way of illustration of one of the best modes suited to carry out the subject disclosure. As will be realized, the present disclosure is capable of other different embodiments and its several details are capable of modifications in various other embodiments all without departing from, or limiting, the scope herein.
- The drawings are of illustrative embodiments. They do not illustrate all embodiments. Other embodiments may be used in addition or instead. Details which may be apparent or unnecessary may be omitted to save space or for more effective illustration. Some embodiments may be practiced with additional components or steps and/or without all of the components or steps that are illustrated. When the same numeral appears in different drawings, it refers to the same or like components or steps.
-
FIG. 1 is a functional diagram generally illustrating an embodiment of a graphical user interface for an intelligent drag and drop designer. -
FIGS. 2A-2D disclose a functional flow diagram generally illustrating an embodiment of designing a workflow with an intelligent drag and drop designer. -
FIG. 3 is a functional block diagram generally illustrating an embodiment of a method for using an intelligent drag and drop designer to design a workflow or form. -
FIG. 4 generally illustrates an embodiment of a graphical user interface of an intelligent drag and drop designer for designing a workflow. -
FIG. 5 generally illustrates an embodiment of a graphical user interface of an intelligent drag and drop designer for designing a form. -
FIGS. 6A-6C disclose a functional flow diagram generally illustrating an embodiment of selecting from a branching action’s two or more placeholders when designing a workflow with an intelligent drag and drop designer. -
FIG. 7 is a functional block diagram generally illustrating an embodiment of a network system for an intelligent drag and drop designer. -
FIG. 8 is a functional block diagram generally illustrating an embodiment of an electronic device system for an intelligent drag and drop designer. - Before the present methods and systems are disclosed and described, it is to be understood that the methods and systems are not limited to specific methods, specific components, or to particular implementations. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. Various embodiments are described with reference to the drawings. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more embodiments. It may be evident, however, that the various embodiments may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form to facilitate describing these embodiments.
- An intelligent drag and drop designer (“Designer”), as disclosed herein, introduces intelligence into the graphical user interface of a workflow or form designer, making the experience of designing workflows and forms smarter, easier, and quicker. The Designer comprises a built-in intelligence that allows it to determine which placeholder is likely to be used or preferred and then visually emphasizes such placeholder to the user. One of its primary benefits is the removal of the cumbersome need for, and difficulties associated with, having to precisely move workflow/form actions directly onto a placeholder on a design canvas. Instead, the Designer enables the user to select the action, move it onto the design canvas, and release it, upon which the Designer automatically drops the action into the emphasized placeholder. The Designer also allows the user to emphasize alternative placeholders for selection by making only slight movements with the selected action. And when released, the Designer automatically drops the action into the emphasized placeholder selected by the user.
-
FIG. 1 is a functional diagram generally illustrating an embodiment of a graphical user interface for an intelligent drag and drop designer. A Designer’s graphical user interface (“GUI”) for designing workflows may comprise a design canvas 105, a cursor 135, and an action toolbox 110 with one or more movable actions 115. The design canvas 105 may comprise a workflow design, comprised of a start action 120, a placeholder 125, and a finish action 130. - A design canvas refers to the area on a Designer GUI where a user can design a form or workflow. A movable action refers to an entry that can be inputted into the design of a workflow or a form. Examples of movable actions for the design of workflows (“workflow actions”) are: an action to branch a decision tree by conditions, an action to branch by values, and actions that include a condition precedent, running parallel paths, and calling a workflow. Examples of movable actions for the design of forms (“form actions”) are: an action to insert barcodes, an action to insert multiple choice options, and actions to insert timestamps, email options, images, and labels. A placeholder refers to a component that holds, denotes, or reserves a place into which a movable action may be inserted.
-
FIGS. 2A-2D disclose a functional flow diagram generally illustrating an embodiment of designing a workflow with an intelligent drag and drop designer. As shown in FIG. 2A, in designing a workflow, the Designer enables a user to select a movable action 205 with a cursor 210. Upon the action 205 being selected by the cursor 210 and moved slightly, the Designer generates and displays a visual aid 220 around a placeholder 215. As shown in FIG. 2B, as the action 205 is dragged closer to the placeholder 215, the Designer causes the visual aid 220 to increase in emphasis. The Designer does not require the user to drag the action 205 directly into the position of the placeholder 215. Rather, the Designer allows the user to merely release the action 205, whereupon the action 205 is automatically placed into the position of the placeholder 215 that is currently being visually emphasized. FIG. 2C shows the action 205 after it has been released and automatically placed in the position of the placeholder 215. Upon the action 205 being placed into the workflow design, the Designer may generate a new placeholder 225. FIG. 2D shows the Designer enabling the user to select another action 230, whereupon a visual aid 235 is generated and displayed around the new placeholder 225. - The Designer is compatible with any number of electronic devices and enables an action to be selected and released by multiple methods. Such methods may depend on the electronic device utilizing the Designer. For example, an action may be selected by a single holding click of a mouse and released by the release of the held click. Or, an action may be simultaneously selected and released by a double click of a mouse. The ability to select, release, and drop by a single double-click is possible only because the Designer provides a designated placeholder for the action to be automatically inserted into. An action may also be selected by a user selecting the action on a touch screen with only the user’s fingers, or simultaneously selected and released by a user’s swipe on a touch screen.
- A visual aid surrounding a placeholder refers to a method by which visible emphasis is given to a placeholder. Thus, visual aids providing emphasis for a placeholder serve the purpose of making the placeholder more visually prominent to the user when designing the form or workflow. Embodiments of a visual aid emphasis comprise a contrasting color that highlights the placeholder, circular/rectangular lines around the placeholder, and the placeholder itself being made bigger or bolder. As the user moves the selected action closer to the emphasized placeholder, the visual aid emphasis portrays the feeling of the placeholder pulling or attracting the selected action, or of an increased sensitivity by the placeholder. Being highlighted or magnetized are other descriptions used to reference the dynamic relationship between a selected action and the visual aid emphasis. In this relationship, as the selected action is moved closer to the visual aid, the visual aid emphasis “increases” in size, meaning that the visual aid grows in prominence with the intent to draw more attention to it. Embodiments of a visual aid increasing in emphasis/prominence comprise: a contrasting color increasing in darkness, brightness, or size; circular/rectangular lines increasing in thickness, quantity, or size; and the placeholder continually increasing in size.
-
FIG. 3 is a functional block diagram generally illustrating an embodiment of a method for using an intelligent drag and drop designer to design a workflow or form. The method for designing either a workflow or form begins when the Designer identifies or detects that an action is selected 310. When the action is selected 310, the Designer generates and displays a visual emphasizing aid around a single placeholder 315. Once the Designer detects that the selected action 310 is released 320, the Designer automatically places the action 335 onto the placeholder that was visually emphasized. Alternatively, the Designer may detect that the user is dragging 325 the selected action closer to the placeholder, whereupon the Designer causes the visual aid around the placeholder to increase in emphasis 330. Regardless of the action’s proximity to the placeholder, when the Designer detects that the user has released 320 the action, the Designer automatically places 335 the action into the selected placeholder. -
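The select/drag/release flow described above can be illustrated as a small event handler. This is a hedged sketch only: the event names, the emphasis payload, and the function name are hypothetical conveniences, not elements of the disclosed method:

```python
def run_designer_events(events):
    """Process (event, payload) pairs mirroring the FIG. 3 flow:
    'select' shows the visual aid around a single placeholder,
    'drag' increases its emphasis, and 'release' places the action
    into the emphasized placeholder regardless of proximity."""
    emphasis = 0.0
    placed = False
    for kind, payload in events:
        if kind == "select":
            emphasis = 0.1                     # aid appears on selection
        elif kind == "drag":
            emphasis = max(emphasis, payload)  # grows as action nears placeholder
        elif kind == "release":
            placed = True                      # automatic placement on release
            break
    return placed, emphasis
```

Note that the sketch places the action on release even if no drag events occurred, matching the described behavior that placement does not depend on the action's proximity at release time.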
FIG. 4 generally illustrates an embodiment of a graphical user interface of an intelligent drag and drop designer for designing a workflow. As shown in FIG. 4, a Designer may comprise a GUI 400 that may be used to create a workflow. The workflow GUI 400 may comprise a toolbox 405 with movable workflow actions 410, a design canvas 415, a start action 420, a placeholder 425, a stop action 430, and a cursor 435. The workflow GUI 400 enables a user to select a workflow action 410 to place onto the placeholder 425. When the workflow action 410 is selected, a visual aid appears around the placeholder 425, increasing in emphasis as the workflow action 410 is brought closer to the placeholder 425. Once a workflow action 410 is placed onto the placeholder 425, a new placeholder appears. -
FIG. 5 generally illustrates an embodiment of a graphical user interface of an intelligent drag and drop designer for designing a form. As shown in FIG. 5, a Designer may comprise a GUI 500 that may be used to create a form. The form GUI 500 may comprise a toolbox 505 with movable form actions 510, a design canvas 515, and a cursor 520. The form GUI 500 allows a user to select a form action 510 to place onto the design canvas 515. When the Designer detects that a form action 510 is selected, it generates and displays a visual aid around the location for insertion of the form action 510, increasing in emphasis as the form action 510 is brought closer to the location. Once the Designer detects that the form action 510 has been placed onto the emphasized location in the design canvas 515, the Designer determines which placeholder should be emphasized and emphasizes a new location when it detects that a new form action 510 is selected. - In an embodiment, when a workflow is first designed, the Designer displays only one placeholder. As a result of it being the only placeholder displayed, it will be the only placeholder emphasized when a user selects an action. After an action is dropped into the first placeholder, the Designer will automatically generate and display a new placeholder directly after the previously added action.
- In another embodiment, when a workflow has a branching option for the placement of two or more actions, wherein the branches are numbered from left to right, the Designer may, as a default, emphasize the placeholder in the first branch. In one embodiment, as an action is moved around a design, the Designer may expand and collapse the size of the target zones within the workflow/form design by determining the proximity of the selected action to the target zones. In other embodiments, the Designer allows for pre-configuration of settings such that specified placeholders are emphasized by default.
- The Designer may also distinguish and differentiate between the presence of multiple placeholders in a workflow or form design and determine which placeholder the user is intending to select. In one embodiment for determining which placeholder the user is intending to select, the Designer may, by default, automatically visually emphasize a placeholder, such as the placeholder closest to the toolbox. For example, if the toolbox is on the left side of the GUI, the farthest-left placeholder is automatically visually emphasized when a selected action is moved onto the canvas. In such an embodiment, the Designer is sufficiently intelligent so as to sense the direction of the action’s movement in relation to its original placement on the canvas and determine which placeholder is intended by the direction in which the action is moved. For example, if the action is moved to the right in relation to its original placement on the canvas, a placeholder to the right of the previously emphasized placeholder is visually emphasized. If the action is moved downward in relation to its original placement on the canvas, a placeholder below the previously emphasized placeholder is visually emphasized.
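The direction-sensing embodiment above can be illustrated by classifying the drag vector relative to the action's original placement. This is a sketch under stated assumptions: the function name, the pixel threshold, and the four-way classification are illustrative choices, not the disclosed implementation:

```python
def movement_direction(origin, current, threshold=5.0):
    """Classify a dragged action's movement relative to its original
    placement on the canvas, so emphasis can shift to a placeholder
    lying in that direction."""
    dx = current[0] - origin[0]
    dy = current[1] - origin[1]
    if abs(dx) < threshold and abs(dy) < threshold:
        return "none"                            # too small a move to shift emphasis
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"            # screen coordinates: +y is down
```

A caller would map "right" to the placeholder to the right of the currently emphasized one, "down" to the placeholder below it, and so on.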
- In another embodiment of the Designer determining which placeholder the user is intending to select, as a selected action is moved across a canvas, the Designer is sufficiently intelligent so as to identify the location of the selected action and its proximity to the locations of all existing placeholders. The Designer determines the distance between the selected action and each placeholder, compares the distances, ascertains which placeholder is closest in proximity to the action, and generates a visual emphasis for the closest placeholder. As the selected action is moved and changes location, the Designer reevaluates the proximity of all placeholders and continues to generate a visual emphasis for the closest placeholder, changing visual emphasis as needed from one placeholder to another. When the Designer visually emphasizes a placeholder, it does not visually emphasize other placeholders.
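The nearest-placeholder embodiment above reduces to a minimum-distance search. The following sketch assumes point coordinates for both the action and the placeholders; the function name and tuple representation are illustrative assumptions:

```python
import math

def closest_placeholder(action_pos, placeholders):
    """Return the index of the placeholder nearest the dragged action.
    Only that placeholder receives the visual emphasis; the others do not."""
    def dist(p):
        return math.hypot(action_pos[0] - p[0], action_pos[1] - p[1])
    return min(range(len(placeholders)), key=lambda i: dist(placeholders[i]))
```

Re-running this search on every drag event yields the described behavior of emphasis shifting from one placeholder to another as the action moves.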
- In other embodiments, the Designer may comprise functionalities that incorporate aspects of all the above embodiments, wherein the Designer may sense both the proximity of placeholders in relation to an action and also sense the direction in which the action is moved.
-
FIGS. 6A-6C disclose a functional flow diagram generally illustrating an embodiment of selecting from a branching action’s two or more placeholders when designing a workflow with an intelligent drag and drop designer. In FIG. 6A, the Designer’s design canvas 605 displays a workflow design with a start action 610, a stop action 615, an inputted branching action 620 with a first placeholder 625 and a second placeholder 630, and a separate, third placeholder 635. After an action 640 has been selected, the Designer detects that the user has dragged the action 640 onto the canvas 605. The Designer recognizes the presence of the action 640 on the canvas 605, determines that the first placeholder 625 is closest, and visually emphasizes 645 the first placeholder 625. Because the first placeholder 625 is visually emphasized 645, the second placeholder 630 and the third placeholder 635 are not visually emphasized. As shown in FIG. 6B, when the selected action 640 is moved away from the first placeholder 625 and in the direction of the second placeholder 630, the Designer determines that the movement of the selected action 640 is in the direction of the second placeholder 630 and generates a visual emphasis 650 around the second placeholder 630. Similarly, as shown in FIG. 6C, when the selected action 640 is moved in the direction of the third placeholder 635, the third placeholder 635 is visually emphasized 655. - Additional intelligence by the Designer may comprise machine learning capabilities. Embodiments of machine learning capabilities comprise the Designer taking into account probabilities for two or more displayed placeholders and emphasizing the placeholder with the highest probability. Such probabilities may be based on a form or workflow’s subject matter, substantive data, pattern of previous selections, appropriate/likely grouping, and prior user history.
Machine learning capabilities may also comprise a Designer that can identify an action inputted incorrectly and generate appropriate notifications.
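The probability comparison underlying this capability (and recited in the claims) can be illustrated as a simple argmax over per-placeholder selection-probabilities. How those probabilities are derived from subject matter, selection patterns, or user history is left abstract here; the function name and tie-breaking rule are illustrative assumptions:

```python
def emphasized_placeholder(selection_probabilities):
    """Compare per-placeholder selection-probabilities and return the
    index of the highest one; ties resolve to the earliest (leftmost)
    placeholder, mirroring a first-branch default."""
    best = 0
    for i, p in enumerate(selection_probabilities):
        if p > selection_probabilities[best]:
            best = i
    return best
```

In such a sketch, the index returned identifies the single placeholder to surround with the visual emphasizing aid.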
-
FIG. 7 is a functional block diagram generally illustrating an embodiment of a network system for an intelligent drag and drop designer. A network system, as shown in FIG. 7, may comprise a Designer server 710 accessible over a local area network or a wide area network 720, such as the Internet. The Designer server 710 may enable third party servers 730, users 740, and electronic devices 750 to connect to a Designer GUI 760. The Designer server 710 may also host additional design GUIs 770, each accessible to their respective owners and other users. - In accordance with the preferred embodiment, the Designer server 710 is remotely accessible by a number of user computing devices 750, including, for example, laptops, smartphones, computers, tablets, and other computing devices that are able to access the local area network or a wide area network where the Designer server 710 resides. In normal operation, each user electronic device 750 connects with the Designer server 710 to interact with the Designer GUI 760 and the additional Designer GUIs 770. As is also known, each additional Designer GUI 770 may employ a number of connectors to interact with third party servers 730 and their data, services, or applications. -
FIG. 8 is a functional block diagram generally illustrating an embodiment of an electronic device system for an intelligent drag and drop designer. The electronic device 810 may be coupled to a Designer server 710 via a network interface 820. The electronic device 810 generally comprises a memory 830, a processor 840, a graphics module 850, and an application programming interface 860. The electronic device 810 is not limited to any particular configuration or system. - As used in the specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Throughout the description and claims of this specification, the word “comprise” and variations of the word, such as “comprising” and “comprises,” means “including but not limited to,” and is not intended to exclude, for example, other components, integers or steps. “Exemplary” or “For Example” means “an example of” and is not intended to convey an indication of a preferred or ideal embodiment. “Such as” is not used in a restrictive sense, but for explanatory purposes.
- Disclosed are components that may be used to perform the disclosed methods and systems. These and other components are disclosed herein, and it is understood that when combinations, subsets, interactions, groups, etc. of these components are disclosed that while specific reference of each various individual and collective combinations and permutation of these may not be explicitly disclosed, each is specifically contemplated and described herein, for all methods and systems. This applies to all embodiments of this application including, but not limited to, steps in disclosed methods. Thus, if there are a variety of additional steps that may be performed it is understood that each of these additional steps may be performed with any specific embodiment or combination of embodiments of the disclosed methods.
- Embodiments of the systems and methods are described with reference to schematic diagrams, block diagrams, and flowchart illustrations of methods, systems, apparatuses and computer program products. It will be understood that each block of the block diagrams, schematic diagrams, and flowchart illustrations, and combinations of blocks in the block diagrams, schematic diagrams, and flowchart illustrations, respectively, may be implemented by computer program instructions. These computer program instructions may be loaded onto a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create a means for implementing the functions specified in the flowchart block or blocks.
- Other embodiments may comprise overlay features demonstrating relationships between one or more steps, active users, previous users, missing steps, errors in the workflow, analytical data from use of the workflow, future use of the workflow, and other data related to the workflow, users, or the relationship between the workflow and users.
- Furthermore, the disclosure here may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed embodiments. Non-transitory computer readable media may include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips), optical disks (e.g., compact disk (CD), digital versatile disk (DVD)), smart cards, and flash memory devices (e.g., card, stick). Those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope of the disclosed embodiments.
- Unless otherwise expressly stated, it is in no way intended that any method set forth herein be construed as requiring that its steps be performed in a specific order. Accordingly, where a method claim does not actually recite an order to be followed by its steps or it is not otherwise specifically stated in the claims or descriptions that the steps are to be limited to a specific order; it is in no way intended that an order be inferred, in any respect. This holds for any possible non-express basis for interpretation, including: matters of logic with respect to arrangement of steps or operational flow; plain meaning derived from grammatical organization or punctuation; and the number or type of embodiments described in the specification.
Claims (20)
1. A system for an intelligent designer, comprising:
a computer system having a hardware processor and a physical memory using executable instructions that, as a result of being executed by the hardware processor, cause the computer system to:
display a graphical user interface comprising at least one movable action and a first placeholder and a second placeholder associated with placement of the at least one movable action;
identify a selection of the at least one movable action;
determine a first selection-probability of the first placeholder;
determine a second selection-probability of the second placeholder;
compare the first selection-probability of the first placeholder and the second selection-probability of the second placeholder; and
display a visual emphasizing aid based at least in part on the comparing.
2. The system of claim 1 , wherein the executable instructions that cause the computer system to compare further comprise instructions that, as a result of being executed by the hardware processor, cause the computer system to:
determine which of the first selection-probability of the first placeholder and the second selection-probability of the second placeholder is higher.
3. The system of claim 2 , wherein the displaying further comprises displaying the visual emphasizing aid surrounding the placeholder with the determined higher selection-probability.
4. The system of claim 3 , wherein the executable instructions further comprise instructions that, as a result of being executed by the hardware processor, cause the computer system to:
configure placement of the at least one selected movable action onto at least one position associated with the placeholder surrounded by the visual emphasizing aid.
5. The system of claim 2 , wherein the determined higher selection-probability is based at least in part on one of the first placeholder or the second placeholder being closer in proximity to the selected movable action.
6. The system of claim 1 , wherein the first selection-probability comprises a degree of likelihood that the selected at least one movable action will be placed into the first placeholder, and wherein the second selection-probability comprises a degree of likelihood that the selected at least one movable action will be placed into the second placeholder.
7. The system of claim 1 , wherein at least one of the first selection-probability and the second selection-probability is based at least in part on at least one of a workflow subject matter form, a pattern of previous selections, and a prior user history.
8. The system of claim 1 , wherein the determining the first selection-probability of the first placeholder, and the determining the second selection-probability of the second placeholder, are in response to the identifying the selection of the movable action.
9. A method for an intelligent designer, comprising:
displaying a graphical user interface comprising at least one movable action and a first placeholder and a second placeholder associated with placement of the at least one movable action;
identifying, via a hardware processor, a selection of the at least one movable action;
determining, via the hardware processor, a first selection-probability of the first placeholder;
determining, via the hardware processor, a second selection-probability of the second placeholder;
comparing, via the hardware processor, the first selection-probability of the first placeholder and the second selection-probability of the second placeholder; and
displaying a visual emphasizing aid based at least in part on the comparing.
10. The method of claim 9 , further comprising:
determining, via the hardware processor, which of the first selection-probability of the first placeholder and the second selection-probability of the second placeholder is higher.
11. The method of claim 10 , wherein the displaying further comprises displaying the visual emphasizing aid surrounding the placeholder with the determined higher selection-probability.
12. The method of claim 11 , further comprising:
configuring, via the hardware processor, placement of the at least one selected movable action onto at least one position associated with the placeholder surrounded by the visual emphasizing aid.
13. The method of claim 10 , wherein the determined higher selection-probability is based at least in part on one of the first placeholder or the second placeholder being closer in proximity to the selected movable action.
14. The method of claim 9 , wherein the first selection-probability comprises a degree of likelihood that the selected at least one movable action will be placed into the first placeholder, and wherein the second selection-probability comprises a degree of likelihood that the selected at least one movable action will be placed into the second placeholder.
15. The method of claim 9 , wherein at least one of the first selection-probability and the second selection-probability is based at least in part on at least one of a workflow subject matter form, a pattern of previous selections, and a prior user history.
16. The method of claim 9 , wherein the determining the first selection-probability of the first placeholder, and the determining the second selection-probability of the second placeholder, are in response to the identifying the selection of the movable action.
17. A device for facilitating use of an intelligent designer, comprising a processor and a memory operatively coupled to the processor, the memory containing instructions executable by the processor to cause the device to:
display a graphical user interface comprising at least one movable action and a first placeholder and a second placeholder associated with placement of the at least one movable action;
identify a selection of the at least one movable action;
determine a first selection-probability of the first placeholder;
determine a second selection-probability of the second placeholder, wherein the first selection-probability comprises a degree of likelihood that the selected at least one movable action will be placed into the first placeholder, and wherein the second selection-probability comprises a degree of likelihood that the selected at least one movable action will be placed into the second placeholder;
compare the first selection-probability of the first placeholder and the second selection-probability of the second placeholder; and
display a visual emphasizing aid based at least in part on the comparing.
18. The device of claim 17 , wherein the memory contains additional instructions executable by the processor to cause the device to:
determine which of the first selection-probability of the first placeholder and the second selection-probability of the second placeholder is higher.
19. The device of claim 18 , wherein the displaying further comprises displaying the visual emphasizing aid surrounding the placeholder with the determined higher selection-probability.
20. The device of claim 18 , wherein the determined higher selection-probability is based at least in part on one of the first placeholder or the second placeholder being closer in proximity to the selected movable action.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/985,110 US20230077200A1 (en) | 2019-02-26 | 2022-11-10 | System and Method for an Intelligent Drag and Drop Designer |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962810460P | 2019-02-26 | 2019-02-26 | |
US16/802,390 US11093127B2 (en) | 2019-02-26 | 2020-02-26 | System and method for an intelligent drag and drop designer |
US17/376,007 US11526266B2 (en) | 2019-02-26 | 2021-07-14 | System and method for an intelligent drag and drop designer |
US17/985,110 US20230077200A1 (en) | 2019-02-26 | 2022-11-10 | System and Method for an Intelligent Drag and Drop Designer |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/376,007 Continuation US11526266B2 (en) | 2019-02-26 | 2021-07-14 | System and method for an intelligent drag and drop designer |
Publications (1)
Publication Number | Publication Date |
---|---|
US20230077200A1 true US20230077200A1 (en) | 2023-03-09 |
Family
ID=78818365
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/376,007 Active US11526266B2 (en) | 2019-02-26 | 2021-07-14 | System and method for an intelligent drag and drop designer |
US17/985,110 Pending US20230077200A1 (en) | 2019-02-26 | 2022-11-10 | System and Method for an Intelligent Drag and Drop Designer |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/376,007 Active US11526266B2 (en) | 2019-02-26 | 2021-07-14 | System and method for an intelligent drag and drop designer |
Country Status (1)
Country | Link |
---|---|
US (2) | US11526266B2 (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5416900A (en) * | 1991-04-25 | 1995-05-16 | Lotus Development Corporation | Presentation manager |
US5434965A (en) * | 1992-12-23 | 1995-07-18 | Taligent, Inc. | Balloon help system |
US10216491B2 (en) * | 2016-09-16 | 2019-02-26 | Oracle International Corporation | Controlled availability of objects in a visual design tool for integration development |
US11093127B2 (en) * | 2019-02-26 | 2021-08-17 | Nintex UK Ltd. | System and method for an intelligent drag and drop designer |
US11080653B2 (en) * | 2019-03-22 | 2021-08-03 | Invia Robotics, Inc. | Virtual put wall |
- 2021-07-14: US 17/376,007 patent/US11526266B2/en active Active
- 2022-11-10: US 17/985,110 patent/US20230077200A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US11526266B2 (en) | 2022-12-13 |
US20210382611A1 (en) | 2021-12-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10915226B2 (en) | Mobile user interface to access shared folders | |
US10261664B2 (en) | Activity management tool | |
EP2699998B1 (en) | Compact control menu for touch-enabled command execution | |
US20020054148A1 (en) | GUI control method and apparatus and recording medium | |
US10282219B2 (en) | Consolidated orthogonal guide creation | |
US9323451B2 (en) | Method and apparatus for controlling display of item | |
US20060085758A1 (en) | Desktop alert management | |
TWI534694B (en) | Computer implemented method and computing device for managing an immersive environment | |
US11093127B2 (en) | System and method for an intelligent drag and drop designer | |
CN109074372B (en) | Applying metadata using drag and drop | |
US9910835B2 (en) | User interface for creation of content works | |
JP2011081778A (en) | Method and device for display-independent computerized guidance | |
EP2960763A1 (en) | Computerized systems and methods for cascading user interface element animations | |
CN105138170A (en) | Theme determination method and terminal | |
CN104834430A (en) | Method and apparatus for moving icons | |
WO2017177820A1 (en) | File sending method and device in instant communication | |
US11112938B2 (en) | Method and apparatus for filtering object by using pressure | |
US11526266B2 (en) | System and method for an intelligent drag and drop designer | |
US11243678B2 (en) | Method of panning image | |
US20140068481A1 (en) | Rich User Experience in Purchasing and Assignment | |
CN111198830B (en) | Identification method and device of mobile storage equipment, electronic equipment and storage medium | |
US20140157146A1 (en) | Method for retrieving file and electronic device thereof | |
CN105068711A (en) | Multi-window application display method and equipment | |
JP6247182B2 (en) | Operating procedure recording device, operating procedure recording method, and operating procedure recording program | |
CN113868131A (en) | Visual test case construction method and related equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |