CN114625255B - Freehand interaction method oriented to visual view construction, visual view construction device and storage medium - Google Patents

Freehand interaction method oriented to visual view construction, visual view construction device and storage medium Download PDF

Info

Publication number
CN114625255B
CN114625255B (application CN202210315121.XA)
Authority
CN
China
Prior art keywords
data
view
task
gesture
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210315121.XA
Other languages
Chinese (zh)
Other versions
CN114625255A (en)
Inventor
李铁萌
李素雯
周维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN202210315121.XA priority Critical patent/CN114625255B/en
Publication of CN114625255A publication Critical patent/CN114625255A/en
Application granted granted Critical
Publication of CN114625255B publication Critical patent/CN114625255B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/20Drawing from basic elements, e.g. lines or circles
    • G06T11/206Drawing of charts or graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

The invention provides a freehand interaction method for visual view construction, a visual view construction device, and a storage medium. The method comprises the following steps: generating data dimensions based on the visual view source data; receiving and storing, in real time, hand joint point data acquired by a hand tracking device; identifying gesture events, and identifying tasks based on a pre-stored correspondence table between gesture events and visual view operation tasks; if a data dimension menu activation task is identified, activating and displaying a data dimension menu generated from the data dimensions; if a dimension menu selection task is identified after the menu is activated, selecting from the dimension menu based on the corresponding gesture; if a data axis creation task is identified, creating a data axis based on the track of the corresponding gesture; and if a data axis association task is identified after at least two data axes have been created, selecting and associating at least two data axes based on the corresponding gesture tracks and generating a visualization view based on the associated data axes and the source data.

Description

Freehand interaction method oriented to visual view construction, visual view construction device and storage medium
Technical Field
The invention relates to the technical field of visual view construction, in particular to a freehand interaction method for visual view construction, a visual view construction device and a storage medium.
Background
Data visualization is commonly used for big data exploration, professional data analysis, large-screen data monitoring, and similar scenarios. Before data can be visualized, the visualization view needs to be constructed. Common data visualization view construction approaches include: programming-based construction with keyboard interaction, agile construction based on mouse interaction, agile construction based on pen or touch interaction, and construction based on handle-ray interaction in immersive display environments. Visualization tools such as Tableau, iVisDesigner, and Data Illustrator adopt mouse-interaction-based visual construction techniques. Tableau supports convenient drag interaction to configure views, but focuses on templated and normalized view templates and layouts; iVisDesigner supports multi-view visualization with flexible layout and allows views to be constructed and manipulated directly on the canvas, but relies on frequent menu mode switching; Data Illustrator is a data-oriented vector graphic design method with good flexibility in visual design, but depends strongly on fine-grained mouse operation. SketchInsight, DataInk, Graphics, and similar tools use visual construction techniques based on pen or touch interaction. SketchInsight uses short strokes to quickly create visualizations and gradually moves away from complex controls; DataInk and Graphics employ context menus to let users focus on data sketches and iterative design tasks. In addition, immersive display tools such as ImAxes adopt visual construction techniques based on handle interaction.
While various data visualization view construction techniques have been proposed, existing ones do not adequately support large-size displays and immersive display environments. On one hand, the interaction device restricts where the user can interact, so the interaction space is small: the user must stand in front of the display screen to complete pen- or touch-based visual construction, or sit at a console to perform mouse-and-keyboard visual construction. On the other hand, mouse and pen/touch interaction depends heavily on interface controls and requires repeatedly switching functions through those controls, so visual construction efficiency is low. In addition, mouse interaction is neither natural nor intuitive: the user must translate the construction intent into mouse operations and interface-control operations, and pen/touch interaction suffers from the same problem. Freehand interaction technology is often used for data interaction with large-size displays or in immersive display environments, and offers off-screen interaction capability and natural interaction characteristics. However, a freehand interaction method for visual view construction is currently lacking, which limits the wider application of freehand interaction in the field of visual data analysis.
Therefore, how to implement the construction of the visual view based on freehand interaction is a problem to be solved.
Disclosure of Invention
To address the problems in the prior art, the invention aims to provide a freehand interaction method and a visual view construction device for visual view construction, so that a user can create visual views quickly and smoothly through freehand interaction.
In one aspect of the invention, a freehand interaction method for visual view construction is provided, and the method comprises the following steps:
generating a data dimension based on the visual view source data;
receiving hand joint point data acquired by hand tracking equipment in real time, and storing the hand joint point data in a time sequence form of a frame object, wherein the hand joint point data comprises palm position and direction data, position data of each finger joint point and finger bone direction vector data;
identifying a user gesture event based on the received hand joint point data, and identifying a visual view operation task based on the identified user gesture event and a correspondence table between pre-stored gesture events and visual view operation tasks; the visual view operation tasks in the corresponding relation table comprise the following tasks: a data dimension menu activation task, a data dimension menu selection task, a data axis generation task and a data axis association task;
Activating and displaying a data dimension menu generated based on the data dimension on a display interface under the condition that the identified visual view operation task is a data dimension menu activation task;
when a data dimension menu is activated, recognizing that the current visual view operation task is a data dimension menu selection task, selecting dimensions based on gestures corresponding to the data dimension menu selection task;
under the condition that the identified visual view operation task is a data axis creation task, creating a data axis based on a gesture track corresponding to the data axis creation task, and establishing a corresponding relation between each created data axis and each selected data dimension;
and under the condition that the current visual view operation task is identified as a data axis association task after at least two data axes are created, selecting at least two data axes based on gesture tracks corresponding to the data axis association task, associating the selected data axes, and generating a visual view based on the selected data axes and the visual view source data.
In some embodiments of the present invention, the gesture events in the correspondence table include some or all of the following gesture events: a palm-turning gesture event, an index finger-pointing gesture event, a two-finger-pinching gesture event, and a two-finger-pointing gesture event.
In some embodiments of the present invention, the gesture event corresponding to the data dimension menu activation task is a palm-turning gesture event; the gesture event corresponding to the data dimension menu selection task is an index finger pointing gesture event; the gesture event corresponding to the data axis generating task is a two-finger pinch gesture event based on two hands or one hand; the gesture event corresponding to the data axis association task is a double-finger pointing gesture event.
In some embodiments of the present invention, the visual view operation tasks in the correspondence table further include some or all of the following tasks: a view switch task, a view circle task, a view move task, a view zoom task, a view delete task, a visual properties menu activation task, and a visual mapping transformation task.
In some embodiments of the invention, the method further comprises: and controlling the display of the created visual view element according to the identified gesture event aiming at the created view.
In some embodiments of the present invention, the gesture event corresponding to the view switching task is an index finger pointing gesture event that generates a view type indication track; the gesture event corresponding to the view circle-selection task is an index finger pointing gesture event that generates a track surrounding a specific view area; the gesture event corresponding to the view moving task is a two-finger pinch gesture event on the circled view area; the gesture event corresponding to the view zooming task is a two-finger pinch gesture event on a corner of the circled view area; the gesture event corresponding to the view deleting task is a palm swipe gesture event on the circled view area; the gesture event corresponding to the visual attribute menu activation task is a palm-turning gesture event on an existing view; the gesture events corresponding to the visual mapping transformation task comprise an index finger pointing gesture event on the visual attribute menu and an index finger pointing gesture event on the data dimension menu generated after a menu item on the visual attribute menu is selected.
In some embodiments of the invention, the method further comprises: after the gesture event is recognized, a visual agent cursor is displayed on the display interface corresponding to the recognized gesture to map the position and interaction state of the hand with the visual agent cursor.
In some embodiments of the present invention, the view position is bound to the hand position during the execution of the view movement task, such that the position of the view moves with the hand position; and binding the view corner positions with the hand positions in the process of executing the view scaling task so as to determine the scaling degree based on the change amplitude of the hand actions.
In some embodiments of the present invention, the identified user gesture event further comprises a release gesture event indicating an end of a task corresponding to a previous gesture.
In some embodiments of the invention, the method further comprises: after creating the data axis, binding the created data axis with the selected data dimension; the generating a visual view based on the selected data axis and the visual view source data includes: a visual view exhibiting the selected data dimension is generated based on the associated data axis and the visual view source data.
In some embodiments of the invention, the view types include: scatter plot, line plot, bar plot, parallel plot, and pie plot; the generating a visual view based on the selected data axis and the visual view source data includes: a visual view of a default view type is generated based on the selected data axis and the visual view source data, the default view type being one of a scatter plot, a line plot, a bar plot, a parallel plot, and a pie chart.
In some embodiments of the invention, the established coordinate axes are linear axes of a Cartesian coordinate system or arc axes based on a polar coordinate system.
In some embodiments of the present invention, the selecting at least two data axes based on gesture trajectories corresponding to the data axis association task includes: at least two data axes are selected by passing a trajectory generated by a gesture event corresponding to the data axis associated task through the at least two data axes.
In another aspect of the present invention there is also provided a visual view construction apparatus comprising a processor and a memory, characterised in that the memory has stored therein computer instructions for execution by the processor, the apparatus implementing the steps of the method as described above when the computer instructions are executed by the processor.
In a further aspect of the invention there is also provided a computer readable storage medium having stored thereon a computer program, characterized in that the program when executed by a processor implements the steps of the method as described above.
The freehand interaction method and the visual view construction device for visual view construction provided by the invention enable a user to quickly build a visual view based on gestures. The operation is natural and fluent, and the gestures are easy for users to remember.
It will be appreciated by those skilled in the art that the objects and advantages that can be achieved with the present invention are not limited to the above-described specific ones, and that the above and other objects that can be achieved with the present invention will be more clearly understood from the following detailed description.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate and together with the description serve to explain the invention. In the drawings:
fig. 1 is a flow chart of a freehand interaction method for visual view construction in an embodiment of the invention.
Figure 2 is a schematic representation of the bones of the hand.
FIG. 3 is a schematic diagram of a coordinate system employed by the Leap Motion.
FIG. 4 is a schematic diagram of a basic gesture according to an embodiment of the present invention.
FIG. 5 is a schematic diagram of a gesture for view switching and a track formed by the gesture according to an embodiment of the invention.
FIG. 6 is a schematic diagram of exemplary tasks and corresponding gestures during creation and manipulation of a visual view in accordance with an embodiment of the present invention.
FIG. 7 is a schematic diagram illustrating a correspondence relationship between gestures and basic tasks according to an embodiment of the present invention.
FIG. 8 is a schematic diagram illustrating the interactive steps for selecting dimension menu item tasks in an embodiment of the present invention.
FIG. 9 is a schematic diagram illustrating the steps of creating a linear axis according to an embodiment of the present invention.
FIG. 10 is a schematic diagram illustrating the steps of creating a circular axis according to an embodiment of the present invention.
FIG. 11 is an interactive diagram of a view of associating vertical coordinate axes in an embodiment of the present invention.
FIG. 12 is an interactive diagram of a view of associating parallel coordinate axes in an embodiment of the present invention.
FIG. 13 is an interactive diagram illustrating a view of the associated polar coordinate axis in accordance with an embodiment of the present invention.
Fig. 14-15 are schematic views of visual mapping interactions of views in an embodiment of the present invention.
FIG. 16 is a diagram illustrating dynamic changes of a visual agent cursor according to an embodiment of the present invention.
Fig. 17A to 17D are interactive diagrams of adjusting (circling, highlighting, moving, zooming, and deleting) views in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following embodiments and the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent. The exemplary embodiments of the present invention and the descriptions thereof are used herein to explain the present invention, but are not intended to limit the invention.
It should be noted here that, in order to avoid obscuring the present invention due to unnecessary details, only structures and/or processing steps closely related to the solution according to the present invention are shown in the drawings, while other details not greatly related to the present invention are omitted.
It should be emphasized that the term "comprises/comprising" when used herein is taken to specify the presence of stated features, elements, steps or components, but does not preclude the presence or addition of one or more other features, elements, steps or components.
In order to solve the problem that existing visual view construction lacks support for freehand interaction, the invention provides a freehand interaction technique for visual view construction, which uses hand joint data to identify various gestures and trajectories and controls visual view elements through these gestures and trajectories to complete visual view construction.
The invention aims to provide a systematic, natural, and fluent freehand interaction method for data visualization construction. Systematic means that the interaction method supports a complete basic view construction flow, including visual creation, dimension mapping, manipulation, and so on. Natural means that the gesture design is based on physical metaphors and is easy for the user to learn. Fluent means that the interaction technique lets the user focus on the manipulation of views and elements, reducing frequent mode switching or interaction with controls. The method can solve the problems of interaction-location constraints, limited intuitiveness, and interface-control dependence of traditional visual construction techniques.
Fig. 1 is a flow chart of a freehand interaction method for visual view construction according to an embodiment of the present invention, which may be implemented by computer software, as shown in fig. 1, and includes the following steps:
step S110, a data dimension is generated based on the visualized view source data.
In some embodiments of the present invention, the source data of the visual view may be, for example, source data pre-stored locally or on a server side for generating the visual view, and may be data in Excel, CSV, JSON, or another format.
In an embodiment of the invention, the source data for generating the visual view may be opened and read in advance by an execution system executing the method of the invention.
Step S120, receiving hand joint point data acquired by hand tracking equipment in real time, and storing the hand joint point data in a time sequence form of a frame object, wherein the hand joint point data comprises palm position and direction data, position data of each finger joint point and phalangeal direction vector data.
In some embodiments of the present invention, hand motion data (hand joint point data) may be acquired by a vision-based gesture recognition device such as Leap Motion or HoloLens. The acquired hand motion data is contained in a number of frame objects that form a time series, and may include palm position and orientation data, position data for each finger joint, and phalangeal direction vector data.
As an example, hand joint data may be acquired using Leap Motion as the hand tracking device. The official LeapJS JavaScript API package provided by Leap Motion may be included in the visualization web page to connect to the Leap Motion device and obtain a sequence of frame objects containing hand joint data at a rate of 200 frames per second. The data obtained in each frame include palm position and direction data, position data of each finger joint point, and phalangeal direction vector data. LeapJS also encapsulates mathematical functions to perform the necessary vector calculations. Here, Leap Motion is merely an example, and the present invention is not limited thereto. A schematic of the hand joint composition is shown in Fig. 2.
In an embodiment of the present invention, the coordinate system adopted by the Leap Motion is a right-handed Cartesian coordinate system, that is, the collected hand joint point data follow a right-handed Cartesian coordinate system, as shown in Fig. 3. The origin of the hand joint point data is located at the center of the top surface of the hand tracking device. The x-axis is parallel to the long side of the device, with the rightward direction being positive; the y-axis is perpendicular to the top surface of the device, with the upward direction being positive; the z-axis is parallel to the short side of the device and perpendicular to the screen, with the outward direction being positive.
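As a minimal illustrative sketch, the following shows how frame objects containing the hand joint data described above might be received and buffered with LeapJS in the browser. The Leap global, palmPosition, palmNormal, tipPosition, extended, and bone prevJoint/nextJoint fields follow the LeapJS API referenced in this description; the buffer length is an assumed value not fixed by the patent.

```javascript
// Minimal sketch: receive hand joint data from Leap Motion via LeapJS and
// keep a time-ordered buffer of frame objects (buffer length is an assumption).
const frameBuffer = [];
const MAX_FRAMES = 60; // assumed history length, not specified by the patent

Leap.loop({ enableGestures: true }, function (frame) {
  const record = { timestamp: frame.timestamp, hands: [] };

  frame.hands.forEach(function (hand) {
    record.hands.push({
      type: hand.type,                   // 'left' or 'right'
      palmPosition: hand.palmPosition,   // [x, y, z] in device coordinates
      palmNormal: hand.palmNormal,       // normalized palm normal vector
      fingers: hand.fingers.map(function (finger) {
        return {
          tipPosition: finger.tipPosition,  // fingertip joint position
          extended: finger.extended,        // whether the finger is stretched
          // phalanx direction vectors, derived from bone joint positions
          boneDirections: finger.bones.map(boneDirection)
        };
      })
    });
  });

  frameBuffer.push(record);              // store as a time series of frame objects
  if (frameBuffer.length > MAX_FRAMES) frameBuffer.shift();
});

// Direction of a bone as a normalized vector from its previous joint to its next joint.
function boneDirection(bone) {
  const d = [
    bone.nextJoint[0] - bone.prevJoint[0],
    bone.nextJoint[1] - bone.prevJoint[1],
    bone.nextJoint[2] - bone.prevJoint[2]
  ];
  const len = Math.hypot(d[0], d[1], d[2]) || 1;
  return [d[0] / len, d[1] / len, d[2] / len];
}
```

The boneDirection helper is reused in the gesture-recognition sketches below; computing directions from joint positions keeps the example independent of any particular vector utility bundled with LeapJS.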
Step S130, identifying user gesture events based on the received hand joint point data, and identifying visual view operation tasks based on the identified user gesture events and a corresponding relation table between pre-stored gesture events and visual view operation tasks; the visual view operation tasks in the corresponding relation table comprise the following tasks: a data dimension menu activation task, a data dimension menu selection task, a data axis generation task, and a data axis association task.
In the embodiment of the invention, when no visual view has been created, the visual view is created first; afterwards the created visual view can be modified, for example by view movement, view scaling, view deletion, view type switching, and the like. That is, the construction sequence of the visual view is: first select the data dimensions, then generate the data axes bound to those dimensions, and, once two or more data axes have been generated, associate two or more of them and generate the visual view; on the basis of a circled existing view, moving, zooming, and deleting operations can then be applied.
Thus, the present invention first proceeds with the creation of a visual view. In an embodiment of the present invention, the basic tasks involved in creating a visual view include: a data dimension menu activation task, a data dimension menu selection task, a data axis generation task, and a data axis association task.
In an embodiment of the present invention, the gesture events in the correspondence table include some or all of the following gesture events: a palm-turning gesture event, an index finger-pointing gesture event, a two-finger-pinching gesture event, and a two-finger-pointing gesture event. Preferably, the gesture event corresponding to the data dimension menu activation task is a palm-turning gesture event; the gesture event corresponding to the data dimension menu selection task is an index finger pointing gesture event; the gesture event corresponding to the data axis generating task is a two-finger pinching gesture event based on two hands or one hand; the gesture event corresponding to the data axis association task is a double-finger pointing gesture event.
Once a view has been generated, the view type can be switched according to index finger strokes and the like, and once a view has been circled and selected, it can be moved, zoomed, and deleted. Thus, to enable manipulation of the created visual view, the visual view operation tasks in the correspondence table may further include some or all of the following tasks: a view switching task, a view circle-selection task, a view moving task, a view zooming task, a view deleting task, a visual properties menu activation task, and a visual mapping transformation task. These tasks are used to manipulate visual elements that have already been created, so that the method of the present invention can also control the display of created visual view elements based on gesture events recognized for the created view.
In some embodiments of the present invention, the gesture event corresponding to the view switching task is an index finger pointing gesture event that generates a view type indication track; the gesture event corresponding to the view circle-selection task is an index finger pointing gesture event that generates a track surrounding a specific view area; the gesture event corresponding to the view moving task is a two-finger pinch gesture event on the circled view area; the gesture event corresponding to the view zooming task is a two-finger pinch gesture event on a corner point of the circled view area; the gesture event corresponding to the view deleting task is a palm swipe gesture event on the circled view area; the gesture event corresponding to the visual attribute menu activation task is a palm-turning gesture event on an existing view; the gesture events corresponding to the visual mapping transformation task comprise an index finger pointing gesture event on the visual attribute menu and an index finger pointing gesture event on the data dimension menu generated after a menu item on the visual attribute menu is selected. The correspondence between some tasks and gestures is shown in Fig. 7.
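As a hedged sketch, the pre-stored correspondence table between gesture events and visual view operation tasks can be represented as a plain lookup structure. The gesture and task identifiers below, and the simplified interaction-state predicates, are illustrative assumptions rather than names defined by the patent.

```javascript
// Illustrative correspondence table between gesture events and visual view
// operation tasks (identifiers are assumed names; context conditions are
// simplified to a predicate on the current interaction state).
const GESTURE_TASK_TABLE = [
  { gesture: 'palmUp',         when: s => !s.onView,        task: 'activateDimensionMenu' },
  { gesture: 'palmUp',         when: s => s.onView,         task: 'activateVisualPropertyMenu' },
  { gesture: 'indexPointing',  when: s => s.menuOpen,       task: 'selectDimensionMenuItem' },
  { gesture: 'indexPointing',  when: s => s.onView,         task: 'switchViewType' },
  { gesture: 'indexPointing',  when: s => !s.onView,        task: 'circleSelectView' },
  { gesture: 'twoFingerPinch', when: s => s.onSelectedView, task: 'moveView' },
  { gesture: 'twoFingerPinch', when: s => s.onViewCorner,   task: 'zoomView' },
  { gesture: 'twoFingerPinch', when: s => !s.onView,        task: 'createDataAxis' },
  { gesture: 'doublePointing', when: s => s.axisCount >= 2, task: 'associateDataAxes' },
  { gesture: 'palmSwipe',      when: s => s.onSelectedView, task: 'deleteView' }
];

// Resolve the current task from a recognized gesture event and interaction state.
function resolveTask(gesture, state) {
  const entry = GESTURE_TASK_TABLE.find(e => e.gesture === gesture && e.when(state));
  return entry ? entry.task : null;
}
```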
Here, the view type indication track is an index finger stroke track that indicates which view type to switch to. View types may include scatter plots, line charts, bar charts, parallel coordinate plots, pie charts, and the like, as shown in Fig. 5. The index finger pointing gesture track indicating a switch to the scatter plot type may be a small circular track; the track indicating a switch to the line chart type may be a broken-line track; the track indicating a switch to the bar chart type may be a rectangular track; the track indicating a switch to the parallel coordinate plot type may be a parallel-line track; and the track indicating a switch to the pie chart type may be a large circular track with a pie shape (or approximate pie shape) inside. For example, if the track drawn on an existing scatter view is a broken-line track, indicating that the target view type is a line chart, the current scatter view is switched to a line chart.
The track surrounding a specific view area is a large closed (or approximately closed) track drawn in a non-view area around a view area, and is used to circle-select that view area.
The gesture event as described above will be described in detail later.
Based on the received and stored hand joint data, gesture events can be identified. As shown in Fig. 4, in an embodiment of the present invention, the identified gesture event may be one of the following: two-finger pinch (which may be an index finger–thumb pinch or a middle finger–thumb pinch), index finger pointing, index-and-middle-finger double pointing, palm turning (e.g., palm turning up), palm swipe, and the index finger stroke track, which is the track formed by sliding the index finger pointing gesture and is not shown in Fig. 4. Specifically:
(1) Two-finger pinch gesture (index finger–thumb pinch or middle finger–thumb pinch): used for freehand grabbing of visual view elements; the view elements that can be grabbed include an axis and its end points, and the selected visual view and its corner points. The construction of the visual view of the present invention will be described below taking the index finger–thumb pinch as the example of the two-finger pinching (Pinching) gesture.
The index finger–thumb pinching gesture is recognized as follows: within a frame object, the distance between the tip joint coordinate positions (TipPosition) of the index finger and thumb is used as the judgment condition, and the index finger–thumb pinching gesture (Pinching) is recognized when this distance is less than or equal to 30 units; where (x1, y1, z1) and (x2, y2, z2) are the tip coordinates (TipPosition) of the thumb and index finger, respectively. The distance d between the index finger and thumb tip joint coordinate positions is calculated as: d = √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²).
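A minimal sketch of this pinch test follows, using the tipPosition fields and the thumb/indexFinger accessors of LeapJS hand objects; the 30-unit threshold is the one given in the text.

```javascript
// Recognize a two-finger pinch (index finger + thumb) from one frame:
// the gesture is triggered when the Euclidean distance between the two
// fingertip joint positions is at most 30 units (threshold from the text).
function isIndexThumbPinch(hand, threshold = 30) {
  const tip1 = hand.thumb.tipPosition;        // [x1, y1, z1]
  const tip2 = hand.indexFinger.tipPosition;  // [x2, y2, z2]
  const d = Math.hypot(
    tip1[0] - tip2[0],
    tip1[1] - tip2[1],
    tip1[2] - tip2[2]
  );
  return d <= threshold;
}
```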
(2) Index finger Pointing (Pointing) gesture: for selecting a data dimension menu item and generating an index finger stroke track.
(3) Index finger middle finger Double finger Pointing gesture (Double Pointing): for associating data axes.
The recognition modes of the index finger pointing gesture and the index finger middle finger double-finger pointing gesture are as follows:
Taking the index finger pointing gesture as an example: first, the dot product of the index finger's intermediate phalanx direction vector and proximal phalanx direction vector is calculated, then the dot product of the proximal phalanx direction vector and the metacarpal bone direction vector; each dot product is compared against 0.9, and when a dot product exceeds 0.9 the two direction vectors are considered nearly parallel. Here, 0.9 is only an example and may be adjusted based on the required recognition accuracy. If the phalanx direction vectors are nearly parallel and the thumb, middle finger, ring finger, and little finger are all in an unextended state (extended = false), the index finger pointing gesture is recognized. The index-and-middle-finger double pointing gesture additionally computes the middle finger's bone pointing state on top of the index finger's, with the thumb, ring finger, and little finger all unextended (extended = false). The calculation principle of the two is the same.
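The following sketch illustrates the dot-product test just described, reusing the boneDirection helper from the frame-acquisition sketch above. The bones array order (metacarpal, proximal, intermediate, distal) and the extended flag follow the LeapJS API; the 0.9 threshold is the example value from the text.

```javascript
// Recognize the index-finger pointing gesture: the intermediate, proximal and
// metacarpal bones of the index finger must be nearly parallel (dot products of
// their normalized direction vectors > 0.9), while thumb, middle, ring and
// little fingers are not extended. boneDirection() is the helper defined above.
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

function isIndexPointing(hand, threshold = 0.9) {
  const bones = hand.indexFinger.bones;          // [metacarpal, proximal, intermediate, distal]
  const metacarpalDir   = boneDirection(bones[0]);
  const proximalDir     = boneDirection(bones[1]);
  const intermediateDir = boneDirection(bones[2]);

  const straightIndex =
    dot(intermediateDir, proximalDir) > threshold &&
    dot(proximalDir, metacarpalDir) > threshold;

  const othersFolded =
    !hand.thumb.extended && !hand.middleFinger.extended &&
    !hand.ringFinger.extended && !hand.pinky.extended;

  return straightIndex && othersFolded;
}

// The double-pointing gesture applies the same bone-parallelism test to both
// the index and middle fingers, with thumb, ring and little fingers folded.
function isDoublePointing(hand, threshold = 0.9) {
  const straight = f => {
    const b = f.bones;
    return dot(boneDirection(b[2]), boneDirection(b[1])) > threshold &&
           dot(boneDirection(b[1]), boneDirection(b[0])) > threshold;
  };
  return straight(hand.indexFinger) && straight(hand.middleFinger) &&
         !hand.thumb.extended && !hand.ringFinger.extended && !hand.pinky.extended;
}
```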
In embodiments of the present invention, existing gesture trajectory recognition algorithms (e.g., the $1 trajectory recognition algorithm) may be used to perform gesture trajectory recognition. Gesture trajectories supported by the present invention include, but are not limited to: small circular tracks, broken-line tracks, rectangular tracks, parallel-line tracks, large circular tracks containing a pie shape, and large closed tracks. Tracks drawn on the canvas are used for view circle-selection operations, and tracks drawn within a view area are used for view type switching.
(4) A flip up (Open) gesture: for activating menus, including data dimension menus and visual properties menus.
The palm-up gesture is recognized as follows: the palmNormal variable in the Leap Motion frame object data is used to determine the palm state. palmNormal is the normalized normal vector of the palm. When the y component of the palm normal vector is greater than 0.9, i.e., hand.palmNormal[1] > 0.9, the vector is judged to be nearly parallel to the positive y direction and the palm is facing upward.
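A one-line sketch of this check, using the palmNormal field quoted in the text (the 0.9 threshold is the value given above):

```javascript
// Recognize the palm-up (Open) gesture: the y component of the normalized
// palm normal vector must exceed 0.9, i.e. the palm faces upward.
function isPalmUp(hand, threshold = 0.9) {
  return hand.palmNormal[1] > threshold;
}
```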
(5) Palm Swipe (Swipe) gestures are used to delete an existing view or an existing data axis.
The palm swipe gesture is recognized as follows: in one embodiment of the invention, Leap Motion's built-in Swipe gesture is used to recognize the palm swipe. After the Leap.Controller class is instantiated via leap.js, the gesture events in the frame object are listened for, and a gesture variable is obtained in the callback of the response function. When the gesture type value is 'swipe', a palm swipe gesture is triggered.
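A hedged sketch of this listener follows; it enables gestures on the controller and scans each frame's gesture list for swipe entries. The onPalmSwipe handler name is an assumption, and the choice to react only to completed ('stop') swipes is an illustrative simplification.

```javascript
// Recognize the palm swipe gesture using Leap Motion's built-in swipe
// detection: gestures must be enabled on the controller, and each frame's
// gesture list is inspected for entries of type 'swipe'.
const controller = new Leap.Controller({ enableGestures: true });

controller.on('frame', function (frame) {
  frame.gestures.forEach(function (gesture) {
    if (gesture.type === 'swipe' && gesture.state === 'stop') {
      // A completed swipe: trigger the view / data-axis deletion task here.
      onPalmSwipe(gesture);
    }
  });
});

controller.connect();

function onPalmSwipe(gesture) {
  // placeholder handler (assumed name) — delete the currently selected view
  console.log('swipe detected, direction:', gesture.direction);
}
```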
(6) Index finger stroke tracks: used to generate track instructions for view switching and view circle-selection operations, including, for example: small circular tracks, broken-line tracks, rectangular tracks, parallel-line tracks, large circular tracks with a pie shape inside, and large-scale closed tracks.
The above manner of identifying each gesture event is merely an example, and the present invention is not limited thereto, and other computing methods capable of accurately identifying each gesture event are also applicable to the present invention.
In step S140, in the case that the identified visual view operation task is a data dimension menu activation task, a data dimension menu generated based on the data dimension is activated and displayed on the display interface.
In an embodiment of the present invention, based on the data dimensions generated in step S110, a data dimension menu containing the corresponding data dimension contents may be generated.
As shown in fig. 6, when the user turns up the palm, the data dimension menu is activated, whereupon the data dimension menu is displayed on the display interface.
Step S150, when the data dimension menu is activated and the current visual view operation task is identified as the data dimension menu selection task, dimension selection is performed based on a gesture corresponding to the data dimension menu selection task.
After the data dimension menu is displayed, the user can further select a menu item in the menu with the index finger pointing gesture. As the index finger moves, the hand tracking device recognizes the movement of the pointing gesture, and the pointed-to menu item changes accordingly. If the index finger pointing gesture stays at a fixed position for more than a preset time, the menu item currently pointed to is confirmed as selected and the task ends. In another embodiment of the present invention, the end of the data dimension menu selection task may also be indicated by releasing the gesture: for example, if the user's gesture changes from an index finger pointing gesture pointing at a menu item to a release gesture (the release gesture is, for example, a relaxed state in which the fingers are naturally spread), the selection task ends after the menu item last pointed to by the index finger pointing gesture is confirmed as selected.
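The dwell-based confirmation described above can be sketched as follows. The function and variable names are illustrative, and the dwell duration is an assumed value standing in for the "preset time" mentioned in the text.

```javascript
// Sketch of dwell-based menu item confirmation: if the index-finger pointing
// cursor stays on the same menu item longer than DWELL_MS, that item is
// selected and the menu selection task ends. Names and the dwell duration
// are assumptions, not values fixed by the patent.
const DWELL_MS = 800;
let dwell = { item: null, since: 0 };

function updateMenuSelection(pointedItem, now) {
  if (pointedItem !== dwell.item) {
    dwell = { item: pointedItem, since: now };   // pointer moved to a new item
    return null;
  }
  if (pointedItem && now - dwell.since >= DWELL_MS) {
    dwell = { item: null, since: 0 };            // reset after confirmation
    return pointedItem;                          // confirmed selection
  }
  return null;
}
```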
Step S160, when the identified visual view operation task is a data axis creation task, creating the data axis based on the track of the gesture corresponding to the data axis creation task, and establishing a corresponding relation between each created data axis and each selected data dimension.
In an embodiment of the present invention, the creation of a data axis may adopt a two-handed collaborative interaction scheme: both hands trigger the pinching gesture simultaneously, which means that the two ends of the data axis are pinched by the two hands; the positions of the axis end points are adjusted by moving each hand, which simultaneously adjusts the axis length; and when both hands release the pinching gesture, the end points of the data axis are released and the creation of the data axis is completed.
In some embodiments of the present invention, the data axis may be created by using a single hand to trigger a pinching gesture, where the position triggered by the pinching gesture determines the position of one end point of the data axis, and where the position of the hand at the end of the pinching gesture determines the position of the other end point of the data axis, and the distance between the two end point positions determines the length of the data axis.
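Both axis-creation variants reduce to recording two end-point positions, as the hedged sketch below illustrates. The DataAxis fields, cursor arrays, and function names are illustrative assumptions.

```javascript
// Sketch of data-axis creation. Two-handed variant: while both hands pinch,
// the axis endpoints track the two proxy-cursor positions; releasing both
// pinches fixes the axis. One-handed variant: the pinch start position fixes
// one endpoint and the pinch release position fixes the other.
function createAxisTwoHanded(leftCursor, rightCursor, boundDimension) {
  return {
    dimension: boundDimension,     // data dimension selected from the menu
    start: leftCursor.slice(),     // [x, y] of the left-hand proxy cursor
    end: rightCursor.slice()       // [x, y] of the right-hand proxy cursor
  };
}

function createAxisOneHanded(pinchStartPos, pinchEndPos, boundDimension) {
  return {
    dimension: boundDimension,
    start: pinchStartPos.slice(),  // where the pinch was triggered
    end: pinchEndPos.slice()       // where the pinch was released
  };
}

// Axis length follows directly from the two endpoint positions.
function axisLength(axis) {
  return Math.hypot(axis.end[0] - axis.start[0], axis.end[1] - axis.start[1]);
}
```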
A number of data axes equal to the number of selected data dimensions may be created, and each created data axis may be associated with a selected data dimension, for example a horizontal axis corresponding to a first selected dimension and a vertical axis corresponding to a second selected dimension, or vice versa. Alternatively, fewer data axes than selected data dimensions may be created, and each generated data axis may be associated with part of the selected data dimensions.
Step S170, in the case that the current visual view operation task is identified as a data axis association task after at least two data axes are created, selecting at least two data axes based on gesture tracks corresponding to the data axis association task, associating the selected data axes, and generating a visual view based on the selected data axes and the visual view source data.
In one embodiment of the present invention, the association of data axes is accomplished by drawing a stroke track with the double-finger pointing gesture; the data axes that the stroke passes through are associated.
For example, after creating two or three data axes, if the system further recognizes that the current visual view manipulation task is a data axis association task based on the current new gesture event (e.g., a double-finger pointing gesture tracing across two or three data axes), then associate the two or three data axes traced across by the double-finger pointing gesture and generate a visual view exhibiting the selected data dimension based on the associated data axes and the aforementioned visual view source data.
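The following sketch shows one way the association step could be implemented: the double-pointing trajectory is tested against every axis segment, and the crossed axes are bundled into a view. segmentsIntersect() and the 'scatter' default label are illustrative assumptions consistent with the default view type mentioned below.

```javascript
// Sketch of the data-axis association task: the trajectory produced by the
// double-finger pointing gesture is tested against every created axis, and
// all axes it crosses are associated into one view.
function associateAxes(trajectory, axes, sourceData) {
  const selected = axes.filter(axis => trajectoryCrossesAxis(trajectory, axis));
  if (selected.length < 2) return null;          // need at least two axes
  return {
    type: 'scatter',                              // default view type
    axes: selected,                               // associated data axes
    data: sourceData                              // visual view source data
  };
}

function trajectoryCrossesAxis(trajectory, axis) {
  // true if any consecutive pair of trajectory points crosses the axis segment
  for (let i = 1; i < trajectory.length; i++) {
    if (segmentsIntersect(trajectory[i - 1], trajectory[i], axis.start, axis.end)) {
      return true;
    }
  }
  return false;
}

// Standard 2D segment intersection test via signed orientations
// (collinear edge cases ignored for brevity).
function segmentsIntersect(p1, p2, q1, q2) {
  const o = (a, b, c) => Math.sign((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]));
  return o(p1, p2, q1) !== o(p1, p2, q2) && o(q1, q2, p1) !== o(q1, q2, p2);
}
```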
In an embodiment of the present invention, the default view type generated by the visual view is a scatter diagram, but the present invention is not limited thereto, and the default view type generated by the visual view may be other view types besides the scatter diagram. The user can switch the generated scatter diagram to other view types through the index finger stroke track.
As above, the construction of the visual view is quickly completed through freehand interaction. Based on the constructed view, the visual view element can be controlled according to the recognized gesture event by utilizing the preset gesture. Among other things, controllable visual view elements may include axis elements, visual views, data dimension menus, visual properties menus, and the like. More specifically, the interactive operations of switching view types, moving views, scaling, mapping transformation and the like can be performed on the basis of the constructed visual view.
In one embodiment of the present invention, the view type is switched by drawing a stroke track with the index finger pointing gesture on the corresponding view. Among the stroke tracks drawn with the index finger pointing gesture, the small circular track corresponds to the scatter plot, the broken-line track corresponds to the line chart, the rectangular track corresponds to the bar chart, the parallel-line track corresponds to the parallel coordinate plot, and the large circle with a pie shape inside corresponds to the pie chart.
In one embodiment of the present invention, if the index finger pointing gesture is performed in a non-view area, it is identified as a circle-selection task operation, and the view surrounded by the index finger pointing track is selected for subsequent view movement, zoom, and delete operations.
In one embodiment of the present invention, if a two-finger pinch gesture is performed on the selected view, it is identified that the view movement task operation will be triggered, and the view position at this time is bound to the hand position, and the position when the pinch gesture is released determines the new position of the view.
In an embodiment of the present invention, if a two-finger pinch gesture is performed on a selected view area corner, the two-finger pinch gesture is identified as triggering a view scaling task operation, and at this time, the view corner position and the hand position are bound, and the position of the pinch gesture when released determines the view corner position, and the view size is calculated in real time according to the view corner position.
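The move and zoom bindings described in the last two paragraphs can be sketched as follows; the view object fields and function names are illustrative, and the grab-offset handling is an assumption about how the binding might be kept stable while the pinch is held.

```javascript
// Sketch of the view move and view zoom tasks: while the pinch is held, the
// view position (or its corner) is bound to the hand's proxy-cursor position;
// releasing the pinch fixes the new geometry.
function onMovePinchUpdate(view, cursor, grabOffset) {
  // bind the view origin to the hand position (keeping the initial grab offset)
  view.x = cursor[0] - grabOffset[0];
  view.y = cursor[1] - grabOffset[1];
}

function onZoomPinchUpdate(view, cursor) {
  // bind the dragged corner to the hand position and recompute the view size;
  // the opposite corner (view.x, view.y) stays fixed
  view.width  = Math.max(1, cursor[0] - view.x);
  view.height = Math.max(1, cursor[1] - view.y);
}
```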
In one embodiment of the invention, if a palm swipe gesture is performed on the selected view, it is recognized that the view delete operation will be triggered, at which point the selected view will disappear from the canvas display area.
In one embodiment of the invention, when a palm-up gesture is performed on an existing visual view, it is recognized as triggering the visual properties menu bound to that view; when the palm-up gesture is performed in a non-view region, it is recognized as triggering the data dimension menu. In either menu, menu items are selected with the index finger pointing gesture.
The operational steps of the freehand interaction method for visual view construction of the present invention are illustrated in more detail below.
(1) Selecting a menu dimension (field)
The first step in building a visual view is to select a field (dimension) from the data dimension menu. The invention considers which gesture should activate the menu and select items so that it is easy for the user to recall and does not conflict with other operation gestures. The finally selected strategy is to activate the menu with a "palm up" gesture, see Fig. 8, because the palm-up posture is less likely to be falsely triggered than "open hand + palm down" and is a distinctive auxiliary gesture relative to all other gestures. When the hand changes from the "roaming" state (Fig. 8(a)) to palm up, the system recognizes the "palm-up" gesture, triggers a custom "palm-up" event, and invokes the corresponding response function, which sets the dimension menu to "visible", i.e., opens the menu, as shown in Fig. 8(b). Then, after an item has been selected with "index finger pointing" (Fig. 8(c)), or the hand is "released" over the blank canvas (i.e., returns to the "roaming" state, Fig. 8(d)), the menu selection task ends and the menu closes.
(2) Creating a data axis
The second step in constructing the visualization is to generate a data axis corresponding to the selected data dimension. The present invention treats a straight axis as a stretchable wire created from nothing and uses a two-handed "pinching" gesture to pull the axis out, as shown in Fig. 9. The length and position of the axis are freely adjustable in real time in the hands. The interaction technique supports the creation of two kinds of axes, namely a linear axis based on a Cartesian coordinate system and an arc axis based on a polar coordinate system; the difference is that the former is released by both hands at the same time, while the latter requires first releasing one hand, pinching one end point with the other hand, and making a turn around the other end point, see Fig. 10. Fig. 9(a) represents the "roaming" gesture; (b) represents pulling out the linear axis using a two-handed "pinch" gesture, i.e., creating a linear axis; (c) represents releasing both hands and fixing the position of the linear axis. Fig. 10(a) represents creating a linear axis using a two-handed "pinch" gesture; (b) represents releasing one hand, pinching an end point with the other hand, and rotating it one turn around the fixed end point; (c) represents releasing the single hand to form the final arc axis. The position of the proxy cursor is bound to the palm coordinates because the palm position is relatively stable, which reduces jitter on release. When both hands activate the thumb–index-finger pinching gesture, a straight axis is created, with the positions of the left and right end points of the axis corresponding to the horizontal and vertical coordinates of the left and right hands, respectively (i.e., the positions of the left and right proxy cursors). After both hands release the thumb–index-finger pinching gesture and change back to the "roaming" (Browsing) gesture, the linear axis is fixed at its current position. The decision condition for creating an arc axis is whether the "fixed end point" lies within the track formed by the "active end point" (the active end point being the one that moves around the fixed end point). The radius of the arc axis is determined by the distance between the released active end point and the fixed end point, and the arc starting point of the arc axis is determined by the position of the active end point.
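A hedged sketch of the arc-axis parameters at the moment the active end point is released follows; the field names are illustrative, and treating the fixed end point as the circle center is an assumption consistent with the description above.

```javascript
// Sketch of the arc (polar) axis parameters when the active endpoint is
// released: the fixed endpoint acts as the circle center, the radius is the
// distance between the two endpoints, and the arc start angle is given by the
// released endpoint's position.
function createArcAxis(fixedEndpoint, movingEndpoint, boundDimension) {
  const dx = movingEndpoint[0] - fixedEndpoint[0];
  const dy = movingEndpoint[1] - fixedEndpoint[1];
  return {
    dimension: boundDimension,
    center: fixedEndpoint.slice(),       // fixed endpoint as the circle center
    radius: Math.hypot(dx, dy),          // distance between the two endpoints
    startAngle: Math.atan2(dy, dx)       // arc start determined by the moving endpoint
  };
}
```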
(3) Creating views
The third step in constructing the visual view is to associate the established coordinate axes (e.g., two axes, or three axes) to generate the view. Multiple axes are selected by drawing a "Double Pointing" trajectory across the axes to be associated, thereby generating a view, which defaults to a scatter plot; as shown in Fig. 11, (a) represents the "Double Pointing" trajectory being drawn across the two axes to be associated, and (b) represents the default scatter plot formed from the two associated axes. When the associated axes are nearly parallel, a parallel coordinate plot is generated, as shown in Fig. 12, where (a) and (b) respectively represent drawing the double-finger trajectory across the two parallel axes to be associated and generating the parallel coordinate plot. If a linear axis and a polar axis are associated, as shown in Fig. 13, the two axes are multi-selected by drawing the "Double Pointing" trajectory across the linear axis and the polar axis to be associated (the operation shown in Fig. 13(a)), thereby generating a view based on the data of the dimensions corresponding to the associated axes (the view shown in Fig. 13(b)). In conventional mouse interaction, a user needs multiple mouse clicks to complete the selection of several axes, and the interaction flow is not fluent. In the interaction technique of the invention, the connecting-line trajectory is activated by the double-finger pointing gesture, and the association of the two axes is completed directly; the double fingers carry the semantics of "connecting two axes". First, a "double-finger pointing" event is listened for on the data axis (axis) visual elements, and a "double-finger pointing end" event is listened for on the canvas. After the "double-finger pointing" gesture is activated, the axis association task corresponding to the "double-finger pointing" event is triggered to acquire the information of the axes to be associated; when the "double-finger pointing" gesture is released, the axis association ending task corresponding to the "double-finger pointing end" event is triggered, generating the corresponding view elements (the points of a scatter plot, the broken lines of a parallel coordinate plot, and so on) according to the axis information to be associated.
(4) Switching view types
Directly selecting a switching button on a control is not intuitive, so the invention adopts a fast interaction mode to switch view types, using the idea of strokes: a view type is specified with an "index finger pointing" gesture stroke to switch the view. For example, when the user "strokes" a polyline with the index finger on a scatter plot, the current view switches to a line chart; if the stroke is circular, it switches to a scatter plot, and so on. As shown in Fig. 5, the system supports multiple stroke semantics. After the index finger pointing gesture is activated, the index finger pointing gesture event is triggered, the track is drawn on the canvas, and the coordinate points of the track are saved and updated in the track coordinate point (tracePoints) variable. When the coordinate points of the track lie within a view range (not on the blank canvas), the system recognizes the shape of the track and switches to the corresponding view type.
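A hedged sketch of this switching step follows: a trajectory classified into a shape name (for example by a $1-style recognizer) is mapped to the target view type. recognizeShape() is an assumed wrapper around whatever trajectory recognizer is used, and the shape and type identifiers are illustrative.

```javascript
// Sketch of view-type switching from a recognized stroke: the trajectory drawn
// by the index-finger pointing gesture inside a view is classified into a
// shape name and mapped to the target view type.
const SHAPE_TO_VIEW_TYPE = {
  smallCircle:   'scatter',   // small circular track  -> scatter plot
  polyline:      'line',      // broken-line track     -> line chart
  rectangle:     'bar',       // rectangular track     -> bar chart
  parallelLines: 'parallel',  // parallel-line track   -> parallel coordinates
  circleWithPie: 'pie'        // large circle with pie -> pie chart
};

function switchViewType(view, tracePoints) {
  const shape = recognizeShape(tracePoints);      // assumed: returns a shape name
  const targetType = SHAPE_TO_VIEW_TYPE[shape];
  if (targetType && targetType !== view.type) {
    view.type = targetType;                       // re-render the view with the new type
  }
  return view.type;
}
```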
(5) Modifying visual mappings
The invention designs a context menu supporting visual mapping to help the user quickly realize the visual mapping of data dimensions; here, visual mapping refers to the correspondence between the data of each dimension and its visual presentation, as shown in Figs. 14-15. The trigger gesture of the visual mapping task is the same as that of the dimension menu (a palm-up (Open) gesture). The difference is the trigger object: the visual properties menu must be triggered on a view. Items are likewise selected with the index finger pointing gesture. As an example, the visual mapping may employ a ring menu, but the present invention is not limited thereto. The ring menu is divided into several identical sectors, each corresponding to a menu item, which can be evenly distributed around the view, keeping the interaction distance of the individual options uniform, in accordance with Fitts's law. The specific operation is that the user triggers a "palm up" gesture on a view and then "index finger points" at an item. A palm-up event is listened for on the highlight frame of the view; the palm-up gesture triggers the event and pops up the ring mapping menu, and after a visual attribute is selected, the dimension menu pops up automatically so that the user can continue to select the corresponding data dimension to complete the mapping. Taking the scatter plot as an example, when the user changes the gesture from "release", shown in Fig. 14(a), to "palm up", shown in Fig. 14(b), on the scatter plot, the ring mapping menu opens on the scatter plot, and the mappable visual attributes are shown on the sector items: size, glyph, color, transparency, x-axis, and y-axis. As shown in Fig. 15(a), after a visual attribute (such as "size") is selected by "index finger pointing", the dimension menu pops up, and "index finger pointing" is then used again to select a dimension on the dimension menu (such as the data dimension "Horsepower"); at this point the size visual attribute and the Horsepower data dimension are mapped to each other, and the scatter plot becomes a bubble chart, as shown in Fig. 15(b): each circle element in the view represents a vehicle, a circle of small radius corresponds to a vehicle of low horsepower, and vice versa for a circle of large radius.
In some embodiments of the present invention, after a gesture event is recognized, a visual agent cursor is displayed on a display interface corresponding to the recognized gesture to map the position and interaction state of a hand by the visual agent cursor, as in fig. 16. In one example, the visual agent cursor may be composed of two rings, including one small solid ring and one large hollow ring. The small ring is used for visual feedback of the pointing gesture, the large ring is used for visual feedback of the two-finger gesture (Pinching), and here, the cursor forms of the large ring and the small ring are only examples, and other cursor forms are also possible.
In some embodiments of the present invention, as shown in Fig. 16, when the hand is in the roaming (or released) state, the large and small rings merge into one small disc: the radius of the large ring approaches that of the small ring, and the fill color saturation of the small ring is light. When the index finger pointing gesture is triggered, the fill color saturation of the small ring changes from light to dark, and the outline color saturation of the large ring changes from dark to light; when the two-finger pinch gesture is triggered, the radius of the large ring becomes large and its outline remains highly saturated.
In some embodiments of the present invention, in order to represent the pinching force, the present invention dynamically changes the size of the large circle corresponding to the two-finger pinching gesture, and when the pinching force is maximum, the radius of the large circle changes to the maximum.
In the embodiment of the invention, the index finger middle finger double-finger pointing gesture is a derivative gesture based on the index finger pointing gesture, inherits the visual feedback of the index finger pointing gesture, and is different in prompt characters.
In some embodiments of the invention, the up-flip gesture triggered on the view may be represented by a halo effect of the visual agent.
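The cursor feedback described in this section can be sketched as a simple state-to-style mapping; the numeric radii and saturations below are illustrative values, not taken from the patent, and the pinchStrength input stands for the pinching force mentioned above (LeapJS hands expose a pinchStrength property that could supply it).

```javascript
// Sketch of the visual proxy cursor feedback: a small filled ring and a large
// hollow ring whose radii and color saturation change with the interaction
// state; the pinch strength additionally scales the large ring.
function proxyCursorStyle(state, pinchStrength) {
  switch (state) {
    case 'roaming':  // rings collapse into one small, lightly filled disc
      return { smallR: 4, largeR: 5, smallSaturation: 0.3, largeSaturation: 0.3 };
    case 'pointing': // small ring fills darker, large ring outline fades
      return { smallR: 4, largeR: 8, smallSaturation: 0.9, largeSaturation: 0.3 };
    case 'pinching': // large ring grows with pinch strength, outline stays saturated
      return { smallR: 4, largeR: 8 + 10 * pinchStrength, smallSaturation: 0.9, largeSaturation: 0.9 };
    default:
      return { smallR: 4, largeR: 5, smallSaturation: 0.3, largeSaturation: 0.3 };
  }
}
```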
(6) Adjusting views
As shown in the upper diagram in Fig. 17A, the "index finger pointing" gesture is used to draw a lasso box outside the view area (i.e., on the blank canvas); when not operating on visual elements it means "circle-select", and it does not conflict with the "index finger pointing" gesture drawn within a view to switch view types, because a track drawn on the blank canvas is not recognized as a circle or other view-switching shape. As shown in the lower diagram of Fig. 17A, the view is highlighted when selected, indicating that it is selected and can be manipulated. Following the metaphor of a physical pinching action, the user can move the view by pinching it with a "two-finger pinching" gesture, as shown in Fig. 17B, or pinch a diagonal corner of the highlight frame with a "two-finger pinching" gesture to resize the view, as shown in Fig. 17C: a pinching-in motion zooms the view out, and the opposite loosening motion zooms it in. In addition, the selected object can be deleted with a "palm swipe" (Swipe) gesture: after the "palm swipe" gesture is listened for on the visual view element, activating it triggers the view deletion task corresponding to the gesture, and the displayed objects are deleted from the array of rendered views according to the ids of all highlighted objects, as shown in Fig. 17D.
In the embodiment of the invention, the data dimension menu can be activated and a data dimension selected quickly through freehand interaction, the visual attribute menu can be activated and the visual mapping completed quickly through freehand interaction, and view adjustment and view type switching can likewise be performed quickly through freehand interaction, which greatly improves convenience of operation. The freehand interaction method is systematic, natural and fluent, provides users with an intuitive way of interacting, and can be learned quickly.
In addition, the freehand interaction method is not tied to a particular location in the way touch or keyboard-and-mouse interaction is; interaction can be completed at any position where hand data can be recognized. The method also uses fewer interface controls, avoiding the interaction burden caused by operating such controls and allowing users to concentrate on the visualization construction task rather than on interface operation rules.
In summary, the invention provides a freehand interaction method for constructing information visualizations, designed from the aspects of basic gestures, visual feedback and interaction actions. It allows a user to create, edit and manipulate visual views through freehand interaction and helps the user construct visual views naturally and smoothly. User evaluation results show that the interaction technique provides a natural and effective interaction experience with low learning cost, enables users to construct visualizations with bare hands, and has practical application value and potential.
The invention also provides a visual view construction device corresponding to the above method, comprising a processor and a memory, characterized in that the memory stores computer instructions and the processor is configured to execute the computer instructions stored in the memory; when the computer instructions are executed by the processor, the device implements the steps of the freehand interaction method for visual view construction.
The embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the freehand interaction method for visual view construction described above. The computer readable storage medium may be a tangible storage medium such as an optical disk, a USB flash drive, a floppy disk, or a hard drive.
It should be understood that the invention is not limited to the particular arrangements and instrumentality described above and shown in the drawings. For the sake of brevity, a detailed description of known methods is omitted here. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and shown, and those skilled in the art can make various changes, modifications and additions, or change the order between steps, after appreciating the spirit of the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative components, systems, and methods described in connection with the embodiments disclosed herein can be implemented as hardware, software, or a combination of both. Whether a particular implementation is hardware or software depends on the specific application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. When implemented in hardware, the implementation may be, for example, an electronic circuit, an application-specific integrated circuit (ASIC), suitable firmware, a plug-in, a function card, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The program or code segments may be stored in a machine readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave.
It should also be noted that the exemplary embodiments mentioned in this disclosure describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the above-described steps, that is, the steps may be performed in the order mentioned in the embodiments, or may be performed in a different order from the order in the embodiments, or several steps may be performed simultaneously.
In this disclosure, features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and various modifications and variations can be made to the embodiments of the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A freehand interaction method for visual view construction is characterized by comprising the following steps:
generating a data dimension based on the visual view source data;
receiving hand joint point data acquired by hand tracking equipment in real time, and storing the hand joint point data in a time sequence form of a frame object, wherein the hand joint point data comprises palm position and direction data, position data of each finger joint point and finger bone direction vector data;
identifying a user gesture event based on the received hand joint point data, and identifying a visual view operation task based on the identified user gesture event and a pre-stored correspondence table between gesture events and visual view operation tasks; the visual view operation tasks in the correspondence table comprise the following tasks: a data dimension menu activation task, a data dimension menu selection task, a data axis creation task and a data axis association task, and the gesture events in the correspondence table comprise a palm-turning gesture event, an index finger pointing gesture event, a two-finger pinching gesture event and a two-finger pointing gesture event; the gesture event corresponding to the data dimension menu activation task is a palm-turning gesture event; the gesture event corresponding to the data dimension menu selection task is an index finger pointing gesture event; the gesture event corresponding to the data axis creation task is a two-finger pinching gesture event based on two hands or one hand; the gesture event corresponding to the data axis association task is a two-finger pointing gesture event;
The visual view operation tasks in the correspondence table further comprise part or all of the following tasks: a view switching task, a view circle selection task, a view moving task, a view zooming task, a view deleting task, a visual attribute menu activation task and a visual mapping transformation task; the gesture event corresponding to the view switching task is an index finger pointing gesture event generating a view type indication trajectory; the gesture event corresponding to the view circle selection task is an index finger pointing gesture event generating a trajectory surrounding a specific view area; the gesture event corresponding to the view moving task is a two-finger pinching gesture event on the circled view area; the gesture event corresponding to the view zooming task is a two-finger pinching gesture event on a corner of the circled view area; the gesture event corresponding to the view deleting task is a palm sweeping gesture event on the circled view area; the gesture event corresponding to the visual attribute menu activation task is a palm-turning gesture event on an existing view; the gesture events corresponding to the visual mapping transformation task comprise an index finger pointing gesture event on a visual attribute menu and an index finger pointing gesture event on a data dimension menu generated after a menu item on the visual attribute menu is selected;
Activating and displaying a data dimension menu generated based on the data dimension on a display interface under the condition that the identified visual view operation task is a data dimension menu activation task;
when the data dimension menu is activated and the current visual view operation task is identified as a data dimension menu selection task, selecting a data dimension based on the gesture corresponding to the data dimension menu selection task;
under the condition that the identified visual view operation task is a data axis creation task, creating a data axis based on a gesture trajectory corresponding to the data axis creation task, and establishing a correspondence between each created data axis and each selected data dimension; the creating of the data axis based on the gesture trajectory corresponding to the data axis creation task comprises: triggering a pinching gesture, determining an endpoint of the data axis through the pinching gesture, controlling the length of the data axis, and completing creation of the data axis when the pinching gesture ends;
when the current visual view operation task is identified as a data axis association task after at least two data axes have been created, selecting at least two data axes based on the gesture trajectory corresponding to the data axis association task, associating the selected data axes, and generating a visual view based on the selected data axes and the visual view source data; the association of the data axes is completed by drawing a stroke trajectory with the two-finger pointing gesture, so that the data axes crossed by the stroke are associated.
2. The method according to claim 1, wherein the method further comprises: and controlling the display of the created visual view element according to the identified gesture event aiming at the created view.
3. The method according to claim 1, wherein the method further comprises: after the gesture event is recognized, a visual agent cursor is displayed on the display interface corresponding to the recognized gesture to map the position and interaction state of the hand with the visual agent cursor.
4. The method of claim 1, wherein the view position is bound to the hand position during execution of the view movement task such that the position of the view moves with the hand position;
and binding the view corner positions with the hand positions in the process of executing the view scaling task so as to determine the scaling degree based on the change amplitude of the hand actions.
5. The method of any of claims 1-4, wherein the identified user gesture event further comprises a release gesture event indicating an end of a task corresponding to a previous gesture.
6. The method according to claim 1, wherein the method further comprises: after creating the data axis, binding the created data axis with the selected data dimension;
The generating a visual view based on the selected data axis and the visual view source data includes: a visual view exhibiting the selected data dimension is generated based on the associated data axis and the visual view source data.
7. The method of claim 1, wherein the view type comprises: scatter plot, line plot, bar plot, parallel plot, and pie chart;
the generating a visual view based on the selected data axis and the visual view source data includes: a visual view of a default view type is generated based on the selected data axis and the visual view source data, the default view type being one of a scatter plot, a line plot, a bar plot, a parallel plot, and a pie chart.
8. The method of claim 1, wherein the established coordinate axis is a linear axis of a Cartesian coordinate system or an arc axis based on a polar coordinate system.
9. The method of claim 1, wherein selecting at least two data axes based on the gesture trajectory corresponding to the data axis association task comprises:
at least two data axes are selected by passing a trajectory generated by a gesture event corresponding to the data axis associated task through the at least two data axes.
10. A visual view construction apparatus comprising a processor and a memory, wherein the memory stores computer instructions and the processor is configured to execute the computer instructions stored in the memory; when the computer instructions are executed by the processor, the apparatus implements the steps of the method according to any of claims 1 to 9.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 9.
CN202210315121.XA 2022-03-29 2022-03-29 Freehand interaction method oriented to visual view construction, visual view construction device and storage medium Active CN114625255B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210315121.XA CN114625255B (en) 2022-03-29 2022-03-29 Freehand interaction method oriented to visual view construction, visual view construction device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210315121.XA CN114625255B (en) 2022-03-29 2022-03-29 Freehand interaction method oriented to visual view construction, visual view construction device and storage medium

Publications (2)

Publication Number Publication Date
CN114625255A CN114625255A (en) 2022-06-14
CN114625255B true CN114625255B (en) 2023-08-01

Family

ID=81904394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210315121.XA Active CN114625255B (en) 2022-03-29 2022-03-29 Freehand interaction method oriented to visual view construction, visual view construction device and storage medium

Country Status (1)

Country Link
CN (1) CN114625255B (en)

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7788606B2 (en) * 2004-06-14 2010-08-31 Sas Institute Inc. Computer-implemented system and method for defining graphics primitives
US20110115814A1 (en) * 2009-11-16 2011-05-19 Microsoft Corporation Gesture-controlled data visualization
US20120304059A1 (en) * 2011-05-24 2012-11-29 Microsoft Corporation Interactive Build Instructions
US8904304B2 (en) * 2012-06-25 2014-12-02 Barnesandnoble.Com Llc Creation and exposure of embedded secondary content data relevant to a primary content page of an electronic book
WO2014094895A1 (en) * 2012-12-21 2014-06-26 Böekling Bert Method and system for visualizing and manipulating graphic charts
US9760262B2 (en) * 2013-03-15 2017-09-12 Microsoft Technology Licensing, Llc Gestures involving direct interaction with a data visualization
US9665259B2 (en) * 2013-07-12 2017-05-30 Microsoft Technology Licensing, Llc Interactive digital displays
CN103593141A (en) * 2013-11-29 2014-02-19 河南博仕达通信技术有限公司 Hand gesture recognizing unlocking device and method
US9471152B2 (en) * 2014-09-18 2016-10-18 Oracle International Corporation Chart dual-Y resize and split-unsplit interaction
US10310618B2 (en) * 2015-12-31 2019-06-04 Microsoft Technology Licensing, Llc Gestures visual builder tool
GB2556068A (en) * 2016-11-16 2018-05-23 Chartify It Ltd Data interation device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2503441A1 (en) * 2011-03-22 2012-09-26 Adobe Systems Incorporated Methods and apparatus for providing a local coordinate frame user interface for multitouch-enabled devices
CN107133347A (en) * 2017-05-22 2017-09-05 智器云南京信息科技有限公司 The methods of exhibiting and device of visual analyzing chart, readable storage medium storing program for executing, terminal
CN108182728A (en) * 2018-01-19 2018-06-19 武汉理工大学 A kind of online body-sensing three-dimensional modeling method and system based on Leap Motion
CN108710628A (en) * 2018-03-29 2018-10-26 中国科学院软件研究所 A kind of visual analysis method and system towards multi-modal data based on sketch interaction
CN110597586A (en) * 2019-08-19 2019-12-20 北京邮电大学 Method and device for large screen layout of componentized layout based on dragging
CN113256767A (en) * 2021-07-14 2021-08-13 北京邮电大学 Bare-handed interactive color taking method and color taking device

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on a 3D visualization interaction method based on dual Leap Motion; Sun Guodao; Huang Puyong; Liu Yipeng; Liang Ronghua; Journal of Computer-Aided Design & Computer Graphics (No. 07); full text *
Feedback design method for natural user interfaces based on a realism framework; Lyu Fei et al.; Journal of Beijing University of Posts and Telecommunications (Social Sciences Edition); full text *
Interactive visual analysis system based on a pen interface; Zhao Qian; Ren Lei; Teng Dongxing; Computer Engineering and Applications (No. 06); full text *

Also Published As

Publication number Publication date
CN114625255A (en) 2022-06-14

Similar Documents

Publication Publication Date Title
US9996176B2 (en) Multi-touch uses, gestures, and implementation
US7640518B2 (en) Method and system for switching between absolute and relative pointing with direct input devices
US7451408B2 (en) Selecting moving objects on a system
US5757361A (en) Method and apparatus in computer systems to selectively map tablet input devices using a virtual boundary
US9489040B2 (en) Interactive input system having a 3D input space
KR100832355B1 (en) 3d pointing method, 3d display control method, 3d pointing device, 3d display control device, 3d pointing program, and 3d display control program
US8997025B2 (en) Method, system and computer readable medium for document visualization with interactive folding gesture technique on a multi-touch display
US8860675B2 (en) Drawing aid system for multi-touch devices
US10180714B1 (en) Two-handed multi-stroke marking menus for multi-touch devices
US20140118252A1 (en) Method of displaying cursor and system performing cursor display method
JP2016076275A (en) Man-machine interface based on gesture
JP2010086230A (en) Information processing apparatus, information processing method and program
WO2010032268A2 (en) System and method for controlling graphical objects
KR20160097410A (en) Method of providing touchless input interface based on gesture recognition and the apparatus applied thereto
JPWO2014041732A1 (en) Portable electronic devices
Foucault et al. SPad: a bimanual interaction technique for productivity applications on multi-touch tablets
CN108491141A (en) A kind of generation method, device and the terminal device of electronic whiteboard choice box
CN114625255B (en) Freehand interaction method oriented to visual view construction, visual view construction device and storage medium
CN113256767B (en) Bare-handed interactive color taking method and color taking device
CN109901778A (en) A kind of page object rotation Zoom method, memory and smart machine
CN109858000A (en) Form processing method, device, system, storage medium and interactive intelligent tablet computer
Matulic et al. Terrain modelling with a pen & touch tablet and mid-air gestures in virtual reality
JP5165661B2 (en) Control device, control method, control program, and recording medium
CN112799580A (en) Display control method and electronic device
Declec et al. Tech-note: Scruticam: Camera manipulation technique for 3d objects inspection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant