WO2017049920A1 - Graphics orchestration processing method and apparatus - Google Patents

Graphics orchestration processing method and apparatus

Info

Publication number
WO2017049920A1
Authority
WO
WIPO (PCT)
Prior art keywords
touch
touch screen
action
information
graphic
Prior art date
Application number
PCT/CN2016/082241
Other languages
English (en)
French (fr)
Inventor
王斌
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Publication of WO2017049920A1 publication Critical patent/WO2017049920A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • This application relates to, but is not limited to, the field of communications.
  • WEB-side graphical orchestration is typically used in orchestration scenarios for services, business flows, processes, network resources, and other occasions: different resources are abstracted into various graphic nodes, and a visual designer is used to arrange, connect, merge, split, and otherwise orchestrate these nodes.
  • Most related-art WEB-side graphical orchestration products operate through mouse events such as drag-and-drop and single or double clicks, and are essentially unusable without a mouse. Touch-screen terminal products such as the Microsoft Surface and Apple iPad are the future trend, and the number of people using touch-screen products keeps growing; without a mouse connected, however, graphics cannot be orchestrated by touch on a touch screen in the related art.
  • In view of this, the present invention provides a graphics orchestration processing method and apparatus, to at least solve the related-art problem that graphics cannot be orchestrated by touch on a touch screen.
  • A graphics orchestration processing method includes: receiving touch screen information generated by a graphic processing interface; parsing out a predefined touch screen action corresponding to the touch screen information; determining, according to the parsed touch screen action, a drawing action for drawing graphic nodes; and orchestrating the graphic nodes according to the determined drawing action.
  • Optionally, receiving the touch screen information generated by the graphic processing interface includes: receiving multi-touch information generated by the graphic processing interface, where the multi-touch information includes at least one of the following: touch start, touch move, touch end, and touch cancel.
  • Optionally, parsing out the predefined touch screen action corresponding to the touch screen information includes: combining the touch start position, the touch displacement information, and the touch time to identify the predefined touch screen action corresponding to the touch screen information.
  • Optionally, determining the drawing action for drawing graphic nodes according to the parsed touch screen action includes: listening for the touch screen action, responding to it, and generating context information, where the context information carries the location of the touch screen action, the range of the touch screen action, and the node information of the graphics within that range; and aggregating the context information to determine the drawing action for orchestrating the graphic nodes.
  • Optionally, the touch screen action corresponding to the touch screen information includes one or more of the following actions: short touch, long touch, single touch, double touch, and directional gesture.
  • A graphics orchestration processing apparatus includes: a receiving module, configured to receive touch screen information generated by a graphic processing interface; a parsing module, configured to parse out a predefined touch screen action corresponding to the touch screen information; a determining module, configured to determine, according to the parsed touch screen action, a drawing action for drawing graphic nodes; and an orchestration module, configured to orchestrate the graphic nodes according to the determined drawing action.
  • Optionally, the receiving module includes: a receiving unit, configured to receive multi-touch information generated by the graphic processing interface, where the multi-touch information includes at least one of the following: touch start, touch move, touch end, and touch cancel.
  • Optionally, the parsing module includes: a parsing unit, configured to combine the touch start position, the touch displacement information, and the touch time to identify the predefined touch screen action corresponding to the touch screen information.
  • Optionally, the determining module includes: a response unit, configured to listen for the touch screen action, respond to it, and generate context information, where the context information carries the location of the touch screen action, the range of the touch screen action, and the node information of the graphics within that range; and a summary unit, configured to aggregate the context information and determine the drawing action for orchestrating the graphic nodes.
  • Optionally, the touch screen action corresponding to the touch screen information includes one or more of the following actions: short touch, long touch, single touch, double touch, and directional gesture.
  • A computer-readable storage medium is provided, storing computer-executable instructions for performing the method of any of the above.
  • Through the embodiments above, the touch screen information generated by the graphic processing interface is received; the predefined touch screen action corresponding to the touch screen information is parsed out; a drawing action for drawing graphic nodes is determined according to the parsed touch screen action; and the graphic nodes are orchestrated according to the determined drawing action. This solves the related-art problem that graphics cannot be orchestrated by touch on a touch screen, enables graphic orchestration on touch-screen terminals, and improves the user experience.
  • FIG. 1 is a flowchart of a graphics orchestration processing method according to an embodiment of the present invention.
  • FIG. 2 is a block diagram of a graphics orchestration processing apparatus according to an embodiment of the present invention.
  • FIG. 3 is block diagram 1 of a graphics orchestration processing apparatus according to an alternative embodiment of the present invention.
  • FIG. 4 is block diagram 2 of a graphics orchestration processing apparatus according to an alternative embodiment of the present invention.
  • FIG. 5 is block diagram 3 of a graphics orchestration processing apparatus according to an alternative embodiment of the present invention.
  • FIG. 6 is a block diagram of a graphics orchestration processing apparatus according to an embodiment of the present invention.
  • FIG. 7 is a flowchart of a method for graphic drawing and orchestration on a touch-screen terminal according to an embodiment of the present invention.
  • FIG. 8 is schematic diagram 1 of graphic drawing and orchestration according to an alternative embodiment of the present invention.
  • FIG. 9 is schematic diagram 2 of graphic drawing and orchestration according to an alternative embodiment of the present invention.
  • FIG. 10 is schematic diagram 3 of graphic drawing and orchestration according to an alternative embodiment of the present invention.
  • An embodiment of the present invention provides a graphics orchestration processing method. FIG. 1 is a flowchart of the graphics orchestration processing method according to an embodiment of the present invention; as shown in FIG. 1, the method includes:
  • Step S102: receiving touch screen information generated by a graphic processing interface;
  • Step S104: parsing out the predefined touch screen action corresponding to the touch screen information;
  • Step S106: determining, according to the parsed touch screen action, a drawing action for drawing graphic nodes;
  • Step S108: orchestrating the graphic nodes according to the determined drawing action.
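Steps S102 to S108 can be sketched as a small processing pipeline. This is an illustrative sketch, not the patent's implementation: the `designer` object and its `parseAction`, `determineDrawing`, and `orchestrate` helpers are hypothetical stand-ins for the modules described in later sections.

```javascript
// Sketch of steps S102-S108 as one pipeline over a hypothetical designer object.
function handleTouchInfo(touchInfo, designer) {
  // S102: touch screen information arrives from the graphic processing interface
  // S104: parse it into a predefined touch screen action
  const action = designer.parseAction(touchInfo);
  if (!action) return null;                 // no predefined action matched
  // S106: determine the drawing action for the graphic nodes
  const drawing = designer.determineDrawing(action);
  // S108: orchestrate (redraw) the graphic nodes accordingly
  designer.orchestrate(drawing);
  return drawing;
}
```

In use, `parseAction` would wrap the touch action analysis module, `determineDrawing` the response module, and `orchestrate` the redraw module described below.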
  • Through the steps above, touch screen information generated by the graphic processing interface is received; the predefined touch screen action corresponding to the touch screen information is parsed out; a drawing action for drawing graphic nodes is determined according to the parsed touch screen action; and the graphic nodes are orchestrated according to the determined drawing action. This solves the related-art problem that graphics cannot be orchestrated by touch on a touch screen, enables graphic orchestration on touch-screen terminals, and improves the user experience.
  • Receiving the touch screen information generated by the graphic processing interface may include: receiving multi-touch information generated by the graphic processing interface, where the multi-touch information includes at least one of the following: touch start, touch move, touch end, and touch cancel.
  • Parsing out the predefined touch screen action corresponding to the touch screen information may include: combining the touch start position, the touch displacement information, and the touch time to identify the predefined touch screen action corresponding to the touch screen information.
  • Determining the drawing action for drawing graphic nodes according to the parsed touch screen action may include: listening for the touch screen action, responding to it, and generating context information, where the context information carries the location of the touch screen action, the range of the touch screen action, and the node information of the graphics within that range; and aggregating the context information to determine the drawing action for orchestrating the graphic nodes.
  • The touch screen action corresponding to the touch screen information above may include one or more of the following actions: short touch, long touch, single touch, double touch, directional gesture, and the like.
  • A computer-readable storage medium is provided, storing computer-executable instructions for performing the graphics orchestration processing method described above.
  • An embodiment of the present invention further provides a graphics orchestration processing apparatus. FIG. 2 is a block diagram of the graphics orchestration processing apparatus according to an embodiment of the present invention; as shown in FIG. 2, it includes a receiving module 22, a parsing module 24, a determining module 26, and an orchestration module 28, each of which is further described below.
  • the receiving module 22 is configured to: receive touch screen information generated by the graphic processing interface;
  • the parsing module 24 is configured to: parse out a predefined touch screen action corresponding to the touch screen information;
  • the determining module 26 is configured to: determine a drawing action for drawing the graphic node according to the parsed touch screen action;
  • the orchestration module 28 is configured to: arrange the graphics nodes according to the determined drawing action.
  • FIG. 3 is block diagram 1 of the graphics orchestration processing apparatus according to an alternative embodiment of the present invention; as shown in FIG. 3, the receiving module 22 includes:
  • the receiving unit 32 is configured to: receive the multi-touch information generated by the graphic processing interface, wherein the multi-touch information comprises at least one of the following: a touch start, a touch move, a touch end, and a touch cancel.
  • FIG. 4 is block diagram 2 of the graphics orchestration processing apparatus according to an alternative embodiment of the present invention; as shown in FIG. 4, the parsing module 24 includes:
  • the parsing unit 42, configured to combine the touch start position, the touch displacement information, and the touch time to identify the predefined touch screen action corresponding to the touch screen information.
  • FIG. 5 is a block diagram 3 of a graphics arrangement processing apparatus according to an alternative embodiment of the present invention. As shown in FIG. 5, the determination module 26 includes:
  • the response unit 52 is configured to: listen for the touch screen action, respond to it, and generate context information, where the context information carries the location of the touch screen action, the range of the touch screen action, and the node information of the graphics within that range;
  • the summary unit 54 is configured to: aggregate the context information and determine the drawing action for orchestrating the graphic nodes.
  • The touch screen action corresponding to the touch screen information above includes one or more of the following actions: short touch, long touch, single touch, double touch, and directional gesture.
  • Through parsing of touch actions, gesture recognition, action response, graphics computation, and so on, the embodiments of the present invention mainly solve the problem of using WEB-side graphical orchestration efficiently on touch-screen terminals, and improve the user experience of graphic orchestration on touch-screen terminals.
  • The graphic designer receives the multi-touch information generated when the user touches the interface, covering four basic events (touch start, touch move, touch end, and touch cancel), and parses it into touch actions the designer can recognize, such as short touch, long touch, single touch, double touch, and (directional) gestures.
  • The designer listens for the touch actions, responds to the user's actions, and generates context information (action, position), which is processed uniformly by the response module.
  • The response module aggregates the context information of each contact point and determines the drawing action for the graphic nodes; the designer then redraws and orchestrates the graphic nodes according to the determined drawing action.
  • FIG. 6 is a block diagram of a graphics orchestration processing apparatus according to an embodiment of the present invention; as shown in FIG. 6, it includes a touch action analysis module 62, a touch action monitoring module 64, a touch action response module 66, and a graphic node redraw module 68.
  • The functions of these modules are implemented in part or in whole by the receiving module 22, the parsing module 24, the determining module 26, and the orchestration module 28 described above; each module is further described below.
  • the touch action analysis module 62 is configured to: receive touch information (touch events) and convert it into the corresponding touch actions;
  • the touch action monitoring module 64 is configured to: pre-bind the corresponding response operations and listen for the actions generated by the touch action analysis module 62;
  • the touch action response module 66 is configured to: determine the drawing action to perform by aggregating the location of the action, the range of the action, and the information of the graphic nodes within that range;
  • the graphic node redraw module 68 is configured to: receive the drawing action and redraw the graphic nodes involved.
  • FIG. 7 is a flowchart of a method for graphic drawing and orchestration on a touch-screen terminal according to an embodiment of the present invention; as shown in FIG. 7, the graphic orchestration method supporting touch-screen terminals mainly includes the following steps:
  • Step S702: the user accesses the graphic designer on a touch-screen terminal and performs graphic orchestration in the designer by touches and gestures; here, touches and gestures include multi-touch and the gesture actions performed after a touch.
  • Step S704: the touch action analysis module 62 parses touch events.
  • The touch action analysis module 62 is configured to parse the user's touch and gesture information. In a touch-enabled WEB page, the only events supported in the w3c specification are touchstart, touchmove, touchend, and touchcancel. The module listens to the multiple events generated by multi-touch and converts them into actions the designer can understand, such as sliding, sliding in a given direction, single touch, double touch, long touch, and predetermined gestures. Every touch event contains the position of the touch point relative to the page; by combining the touch start position, the touch displacement information, and the touch time, the module can determine whether a predefined action or gesture has occurred.
  • FIG. 8 is a first schematic diagram of a graphical drawing arrangement according to an alternative embodiment of the present invention.
  • As shown in FIG. 8, for an up-down sliding gesture, the touchstart event first provides the touch start position (position point 1), saved as x1, y1. Next, in the touchmove events, the positions of the moving touch are continuously obtained (position points 2, 3, 4) and saved as variables x2, y2, which are updated as the slide progresses; during the slide, checking that the absolute difference x2 - x1 stays within a small error range (5px) ensures that only a genuine up-down sliding action is triggered. Finally, in the touchend event (position point 5), the gesture is confirmed by checking that the absolute difference between y1 and y2 is greater than 30px, and the sign of the difference determines the up or down direction. For complex gestures, multiple positions need to be recorded during touchmove, and the gesture is determined from these positions in touchend; the principle is essentially the same.
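The up-down swipe parsing above can be sketched as a pure function over the recorded touch positions. This is an illustrative sketch, not the patent's implementation: the 5px drift tolerance and 30px travel threshold are the example values from the text, and the point-array input shape is an assumption.

```javascript
// Classify a recorded touch track as an up/down swipe, or null if it is not
// one. points = [{x, y}, ...] from touchstart through touchend, in page
// coordinates (y grows downward).
function classifyVerticalSwipe(points) {
  if (points.length < 2) return null;
  const start = points[0];                    // touchstart position (x1, y1)
  const end = points[points.length - 1];      // touchend position (x2, y2)
  // every sampled position must stay within the 5px horizontal error band
  const drifted = points.some(p => Math.abs(p.x - start.x) > 5);
  if (drifted) return null;
  const travel = end.y - start.y;             // signed vertical displacement
  if (Math.abs(travel) <= 30) return null;    // too short to count as a swipe
  return travel > 0 ? 'down' : 'up';          // sign gives the direction
}
```

The same shape extends to complex gestures by keeping all touchmove samples and matching them against a gesture template in touchend.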
  • FIG. 9 is a second schematic diagram of a graphical drawing arrangement according to an alternative embodiment of the present invention.
  • As shown in FIG. 9, the positions of multiple sliding track points are sampled and recorded at a time interval (100ms), forming a point array [{x1,y1},{x2,y2},...,{xn,yn}]; if these points are determined not to lie on a single straight line, a circle-selection action is triggered.
  • The response module of the circle-selection action then only needs to loop over every node graphic in the designer and check whether each point of the graphic's frame lies within the region described by the point array (taking the top, bottom, left, and right extreme vertices of the point array and checking containment); if every point of a graphic node is within the range, the node is shown as selected.
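The circle-selection hit test above can be sketched as follows. Following the text, the sampled track is reduced to its top, bottom, left, and right extremes, i.e. a bounding box, and a node is selected when its whole frame falls inside that box. The `{x, y, w, h}` node shape is an assumption for illustration.

```javascript
// trackPoints: the 100ms-sampled slide positions; nodes: designer node frames.
function circleSelect(trackPoints, nodes) {
  const xs = trackPoints.map(p => p.x);
  const ys = trackPoints.map(p => p.y);
  const box = {                                  // extremes of the track
    left: Math.min(...xs), right: Math.max(...xs),
    top: Math.min(...ys), bottom: Math.max(...ys),
  };
  // a node is {x, y, w, h}; select it when its whole frame is inside the box
  return nodes.filter(n =>
    n.x >= box.left && n.x + n.w <= box.right &&
    n.y >= box.top && n.y + n.h <= box.bottom);
}
```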
  • Regarding the use of touch time, taking double touch as an example: the module records the start time of the first touch on touchstart; if, at the second touch, the interval between the two touches is less than a certain value (250ms), it is treated as a double touch; similarly, a touch held longer than a certain value is treated as a long touch.
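The timing-based classification above can be sketched as a small stateful helper. The 250ms double-touch window is the example value from the text; the 500ms long-touch threshold is an assumed value, since the text only says "greater than a certain value".

```javascript
// Classify touches by timing alone: 'double' within the double-touch window,
// 'long' vs 'short' from the hold duration. Timestamps are in milliseconds.
function makeTapClassifier({ doubleMs = 250, longMs = 500 } = {}) {
  let lastStart = -Infinity;
  return {
    // call on touchstart; returns 'double' when within the double-touch window
    onTouchStart(now) {
      const isDouble = now - lastStart < doubleMs;
      lastStart = now;
      return isDouble ? 'double' : null;
    },
    // call on touchend; classifies the just-finished touch by hold duration
    onTouchEnd(now) {
      return now - lastStart > longMs ? 'long' : 'short';
    },
  };
}
```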
  • Step S706: the touch action monitoring module 64 listens for the parsed actions.
  • The touch action monitoring module 64 listens only for the actions and gestures generated by the analysis module and responds to them through pre-bound response methods; for example, the action of tapping a node and sliding is bound to the slide-connection response module.
  • In a javascript implementation, the binding operation can be realized as multiple callback methods.
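The pre-bound callbacks above can be sketched as a small action registry. This is an illustrative sketch: the action name 'node-slide' is a hypothetical label for one of the parsed actions, not a name from the patent.

```javascript
// Registry mapping parsed action names to pre-bound response callbacks.
function makeActionBus() {
  const handlers = {};                      // action name -> list of callbacks
  return {
    bind(action, cb) {                      // pre-bind a response method
      (handlers[action] = handlers[action] || []).push(cb);
    },
    emit(action, context) {                 // fired by the analysis module
      (handlers[action] || []).forEach(cb => cb(context));
    },
  };
}
```

The analysis module would call `emit` with the parsed action and its context (action, position), and each response module registers itself with `bind` at startup.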
  • Step S708: the touch action response module 66 responds to the actions.
  • In a web graphic orchestration application, whether the graphic nodes are rendered with canvas, svg, or similar technologies, the position and area of each graphic node can be obtained.
  • The touch action response module 66 determines the drawing action to perform by aggregating the location of the action, the range of the action, and the information of the graphic nodes within that range.
  • FIG. 10 is a third schematic diagram of a graphical drawing arrangement according to an alternative embodiment of the present invention. As shown in FIG. 10, different response modules perform specific processing on the information including:
  • 1. Slide-to-connect on a single graphic: when the tap-slide response module is invoked, it checks that the start position of the action lies within the range of a graphic node and that the sliding distance is not less than a certain value (e.g. 30px), and then performs a connection operation starting from that node and ending at the end position of the action.
  • 2. Connecting two graphics: when the touch actions occur on two graphic nodes, the sliding directions point toward each other, and the sliding distance is not less than a certain value (30px), the connection action between the two nodes is performed.
  • 3. Copying a single graphic: when the first contact action is a long touch whose contact range is on a graphic, the second contact action starts within the range of that graphic, and its sliding distance is greater than a certain value, a copy action is performed; the copied graphic is placed at the end position of the second contact.
  • 4. Simultaneous multi-point connection: when multiple contacts lie within the ranges of different graphic nodes and slide toward another graphic, a multi-point simultaneous connection operation is performed.
  • The scenarios above describe only some of the responses; in actual orchestration, the response modules can be further extended for complex gestures, for example a circle-selection gesture that selects all graphics within the circled range, or tap-based screen brushing for batch node creation, and so on.
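Response rule 1 above (slide-to-connect from a single node) can be sketched as follows. The 30px threshold is the example value from the text; the `action` and node shapes are assumptions for illustration.

```javascript
// action: { start: {x, y}, end: {x, y} }; nodes: [{ id, x, y, w, h }].
// Returns a drawing action, or null when the gesture does not qualify.
function slideConnectResponse(action, nodes) {
  const inside = (p, n) =>
    p.x >= n.x && p.x <= n.x + n.w && p.y >= n.y && p.y <= n.y + n.h;
  const source = nodes.find(n => inside(action.start, n));
  if (!source) return null;                       // slide must start on a node
  const dist = Math.hypot(action.end.x - action.start.x,
                          action.end.y - action.start.y);
  if (dist < 30) return null;                     // too short to be a connect
  // drawing action: connect from the source node to the slide's end position
  return { type: 'connect', from: source.id, to: action.end };
}
```

Rules 2 to 4 follow the same pattern, with extra checks over multiple contacts (opposing directions, long-touch anchor, or several start nodes).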
  • Step S710: the graphic drawing module 68 redraws according to the drawing action.
  • The graphic drawing module 68 draws graphic connections and graphic nodes according to the drawing action; the module is implemented with graphic drawing technologies supported by html5, such as canvas and svg.
  • Compared with previous WEB graphic orchestration products that supported only mouse click events, the method above has the following characteristics: 1) without changing the original web technology architecture, the response to touch events can be used to extend the original graphic orchestration functions, which suits functional optimization for touch-screen terminals; 2) because multi-touch responses are supported, more different touch actions and gestures can be customized on touch-screen terminals, and graphics can be drawn in response to different actions, improving the efficiency of touch design.
  • Through this method, a user on a touch-screen terminal needs no mouse: with only multi-finger touches, the graphic orchestration tasks that previously could be completed only on a PC can be completed easily and efficiently, greatly improving the user experience of graphical orchestration products on touch-screen terminals.
  • The modules or steps of the embodiments of the present invention above can be implemented by a general-purpose computing device; they can be centralized on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from the one here, or they can be made into individual integrated-circuit modules, or multiple modules or steps among them can be fabricated into a single integrated-circuit module.
  • Through the embodiments of the present invention, without changing the original web technology architecture, the response to touch events can be used to extend the original graphic orchestration functions, which suits functional optimization for touch-screen terminals.
  • In addition, because multi-touch responses are supported, more different touch actions and gestures can be customized on touch-screen terminals, and graphics can be drawn in response to different actions, improving the efficiency of touch design.
  • A user on a touch-screen terminal needs no mouse: with only multi-finger touches, the graphic orchestration tasks that previously could be completed only on a PC can be completed easily and efficiently, greatly improving the user experience of graphical orchestration products on touch-screen terminals.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Disclosed herein are a graphics orchestration processing method and apparatus. The method includes: receiving touch screen information generated by a graphic processing interface; parsing out a predefined touch screen action corresponding to the touch screen information; determining, according to the parsed touch screen action, a drawing action for drawing graphic nodes; and orchestrating the graphic nodes according to the determined drawing action.

Description

Graphics orchestration processing method and apparatus
Technical Field
This application relates to, but is not limited to, the field of communications.
Background Art
WEB-side graphical orchestration is typically used in orchestration scenarios for services, business flows, processes, network resources, and other occasions: different resources are abstracted into various graphic nodes, and a visual designer is used to arrange, connect, merge, split, and otherwise orchestrate these nodes. Most related-art WEB-side graphical orchestration products operate through mouse events such as drag-and-drop and single or double clicks, and are essentially unusable without a mouse; meanwhile, touch-screen terminal products such as the Microsoft Surface and Apple iPad are the future trend, and the number of people using touch-screen products keeps growing. Without a mouse connected, however, graphics cannot be orchestrated by touch on a touch screen.
No effective solution has yet been proposed for the related-art problem that graphics cannot be orchestrated by touch on a touch screen.
Summary of the Invention
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the protection scope of the claims.
Provided herein are a graphics orchestration processing method and apparatus, to at least solve the related-art problem that graphics cannot be orchestrated by touch on a touch screen.
A graphics orchestration processing method includes: receiving touch screen information generated by a graphic processing interface; parsing out a predefined touch screen action corresponding to the touch screen information; determining, according to the parsed touch screen action, a drawing action for drawing graphic nodes; and orchestrating the graphic nodes according to the determined drawing action.
Optionally, receiving the touch screen information generated by the graphic processing interface includes: receiving multi-touch information generated by the graphic processing interface, where the multi-touch information includes at least one of the following: touch start, touch move, touch end, and touch cancel.
Optionally, parsing out the predefined touch screen action corresponding to the touch screen information includes: combining the touch start position, the touch displacement information, and the touch time to identify the predefined touch screen action corresponding to the touch screen information.
Optionally, determining the drawing action for drawing graphic nodes according to the parsed touch screen action includes: listening for the touch screen action, responding to it, and generating context information, where the context information carries the location of the touch screen action, the range of the touch screen action, and the node information of the graphics within that range; and aggregating the context information to determine the drawing action for orchestrating the graphic nodes.
Optionally, the touch screen action corresponding to the touch screen information includes one or more of the following actions:
short touch, long touch, single touch, double touch, and directional gesture.
A graphics orchestration processing apparatus includes: a receiving module, configured to receive touch screen information generated by a graphic processing interface; a parsing module, configured to parse out a predefined touch screen action corresponding to the touch screen information; a determining module, configured to determine, according to the parsed touch screen action, a drawing action for drawing graphic nodes; and an orchestration module, configured to orchestrate the graphic nodes according to the determined drawing action.
Optionally, the receiving module includes: a receiving unit, configured to receive multi-touch information generated by the graphic processing interface, where the multi-touch information includes at least one of the following: touch start, touch move, touch end, and touch cancel.
Optionally, the parsing module includes: a parsing unit, configured to combine the touch start position, the touch displacement information, and the touch time to identify the predefined touch screen action corresponding to the touch screen information.
Optionally, the determining module includes: a response unit, configured to listen for the touch screen action, respond to it, and generate context information, where the context information carries the location of the touch screen action, the range of the touch screen action, and the node information of the graphics within that range; and a summary unit, configured to aggregate the context information and determine the drawing action for orchestrating the graphic nodes.
Optionally, the touch screen action corresponding to the touch screen information includes one or more of the following actions:
short touch, long touch, single touch, double touch, and directional gesture.
A computer-readable storage medium stores computer-executable instructions for performing the method of any of the above.
Through the embodiments of the present invention, touch screen information generated by a graphic processing interface is received; the predefined touch screen action corresponding to the touch screen information is parsed out; a drawing action for drawing graphic nodes is determined according to the parsed touch screen action; and the graphic nodes are orchestrated according to the determined drawing action. This solves the related-art problem that graphics cannot be orchestrated by touch on a touch screen, enables graphic orchestration on touch-screen terminals, and improves the user experience.
Other aspects will become apparent upon reading and understanding the drawings and the detailed description.
Brief Description of the Drawings
FIG. 1 is a flowchart of a graphics orchestration processing method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a graphics orchestration processing apparatus according to an embodiment of the present invention;
FIG. 3 is block diagram 1 of a graphics orchestration processing apparatus according to an alternative embodiment of the present invention;
FIG. 4 is block diagram 2 of a graphics orchestration processing apparatus according to an alternative embodiment of the present invention;
FIG. 5 is block diagram 3 of a graphics orchestration processing apparatus according to an alternative embodiment of the present invention;
FIG. 6 is a block diagram of a graphics orchestration processing apparatus according to an embodiment of the present invention;
FIG. 7 is a flowchart of a method for graphic drawing and orchestration on a touch-screen terminal according to an embodiment of the present invention;
FIG. 8 is schematic diagram 1 of graphic drawing and orchestration according to an alternative embodiment of the present invention;
FIG. 9 is schematic diagram 2 of graphic drawing and orchestration according to an alternative embodiment of the present invention;
FIG. 10 is schematic diagram 3 of graphic drawing and orchestration according to an alternative embodiment of the present invention.
Embodiments of the Present Invention
The embodiments of the present invention are described in detail below with reference to the drawings and in conjunction with the embodiments. It should be noted that, provided there is no conflict, the embodiments in this application and the features in the embodiments may be combined with each other.
An embodiment of the present invention provides a graphics orchestration processing method. FIG. 1 is a flowchart of the graphics orchestration processing method according to an embodiment of the present invention; as shown in FIG. 1, the method includes:
Step S102: receiving touch screen information generated by a graphic processing interface;
Step S104: parsing out the predefined touch screen action corresponding to the touch screen information;
Step S106: determining, according to the parsed touch screen action, a drawing action for drawing graphic nodes;
Step S108: orchestrating the graphic nodes according to the determined drawing action.
Through the steps above, touch screen information generated by the graphic processing interface is received; the predefined touch screen action corresponding to the touch screen information is parsed out; a drawing action for drawing graphic nodes is determined according to the parsed touch screen action; and the graphic nodes are orchestrated according to the determined drawing action. This solves the related-art problem that graphics cannot be orchestrated by touch on a touch screen, enables graphic orchestration on touch-screen terminals, and improves the user experience.
Receiving the touch screen information generated by the graphic processing interface may include: receiving multi-touch information generated by the graphic processing interface, where the multi-touch information includes at least one of the following: touch start, touch move, touch end, and touch cancel.
Parsing out the predefined touch screen action corresponding to the touch screen information may include: combining the touch start position, the touch displacement information, and the touch time to identify the predefined touch screen action corresponding to the touch screen information.
Determining the drawing action for drawing graphic nodes according to the parsed touch screen action may include: listening for the touch screen action, responding to it, and generating context information, where the context information carries the location of the touch screen action, the range of the touch screen action, and the node information of the graphics within that range; and aggregating the context information to determine the drawing action for orchestrating the graphic nodes.
The touch screen action corresponding to the touch screen information above may include one or more of the following actions: short touch, long touch, single touch, double touch, directional gesture, and the like.
A computer-readable storage medium stores computer-executable instructions for performing the graphics orchestration processing method above.
An embodiment of the present invention further provides a graphics orchestration processing apparatus. FIG. 2 is a block diagram of the graphics orchestration processing apparatus according to an embodiment of the present invention; as shown in FIG. 2, it includes a receiving module 22, a parsing module 24, a determining module 26, and an orchestration module 28, each of which is further described below.
The receiving module 22 is configured to receive touch screen information generated by the graphic processing interface;
the parsing module 24 is configured to parse out the predefined touch screen action corresponding to the touch screen information;
the determining module 26 is configured to determine, according to the parsed touch screen action, the drawing action for drawing graphic nodes;
the orchestration module 28 is configured to orchestrate the graphic nodes according to the determined drawing action.
FIG. 3 is block diagram 1 of the graphics orchestration processing apparatus according to an alternative embodiment of the present invention; as shown in FIG. 3, the receiving module 22 includes:
a receiving unit 32, configured to receive multi-touch information generated by the graphic processing interface, where the multi-touch information includes at least one of the following: touch start, touch move, touch end, and touch cancel.
FIG. 4 is block diagram 2 of the graphics orchestration processing apparatus according to an alternative embodiment of the present invention; as shown in FIG. 4, the parsing module 24 includes:
a parsing unit 42, configured to combine the touch start position, the touch displacement information, and the touch time to identify the predefined touch screen action corresponding to the touch screen information.
FIG. 5 is block diagram 3 of the graphics orchestration processing apparatus according to an alternative embodiment of the present invention; as shown in FIG. 5, the determining module 26 includes:
a response unit 52, configured to listen for the touch screen action, respond to it, and generate context information, where the context information carries the location of the touch screen action, the range of the touch screen action, and the node information of the graphics within that range;
a summary unit 54, configured to aggregate the context information and determine the drawing action for orchestrating the graphic nodes.
The touch screen action corresponding to the touch screen information above includes one or more of the following actions: short touch, long touch, single touch, double touch, and directional gesture.
The embodiments of the present invention are further described below in conjunction with alternative embodiments.
Through parsing of touch actions, gesture recognition, action response, graphics computation, and so on, the embodiments of the present invention mainly solve the problem of using WEB-side graphical orchestration efficiently on touch-screen terminals, and improve the user experience of graphic orchestration on touch-screen terminals. This includes the following: the graphic designer receives the multi-touch information generated when the user touches the interface, covering four basic events (touch start, touch move, touch end, and touch cancel), and parses it into touch actions the designer can recognize, such as short touch, long touch, single touch, double touch, and (directional) gestures. The designer listens for the touch actions, responds to the user's actions, and generates context information (action, position), which is processed uniformly by the response module. The response module aggregates the context information of each contact point and determines the drawing action for the graphic nodes; the designer then redraws and orchestrates the graphic nodes according to the determined drawing action.
FIG. 6 is a block diagram of a graphics orchestration processing apparatus according to an embodiment of the present invention; as shown in FIG. 6, it includes a touch action analysis module 62, a touch action monitoring module 64, a touch action response module 66, and a graphic node redraw module 68. The functions of these modules are implemented in part or in whole by the receiving module 22, the parsing module 24, the determining module 26, and the orchestration module 28 described above; each module is further described below.
The touch action analysis module 62 is configured to receive touch information (touch events) and convert it into the corresponding touch actions;
the touch action monitoring module 64 is configured to pre-bind the corresponding response operations and listen for the actions generated by the touch action analysis module 62;
the touch action response module 66 is configured to determine the drawing action to perform by aggregating the location of the action, the range of the action, and the information of the graphic nodes within that range;
the graphic node redraw module 68 is configured to receive the drawing action and redraw the graphic nodes involved.
FIG. 7 is a flowchart of a method for graphic drawing and orchestration on a touch-screen terminal according to an embodiment of the present invention; as shown in FIG. 7, the graphic orchestration method supporting touch-screen terminals mainly includes the following steps:
Step S702: the user accesses the graphic designer on a touch-screen terminal and performs graphic orchestration in the designer by touches and gestures; here, touches and gestures include multi-touch and the gesture actions performed after a touch.
Step S704: the touch action analysis module 62 parses touch events.
The touch action analysis module 62 is configured to parse the user's touch and gesture information. In a touch-enabled WEB page, the only events supported in the w3c specification are touchstart, touchmove, touchend, and touchcancel. The module listens to the multiple events generated by multi-touch and converts them into actions the designer can understand, such as sliding, sliding in a given direction, single touch, double touch, long touch, and predetermined gestures. Every touch event contains the position of the touch point relative to the page; by combining the touch start position, the touch displacement information, and the touch time, the module can determine whether a predefined action or gesture has occurred.
FIG. 8 is schematic diagram 1 of graphic drawing and orchestration according to an alternative embodiment of the present invention. As shown in FIG. 8, for an up-down sliding gesture, the touchstart event first provides the touch start position (position point 1), saved as x1, y1. Next, in the touchmove events, the positions of the moving touch are continuously obtained (position points 2, 3, 4) and saved as variables x2, y2, which are updated as the slide progresses; during the slide, checking that the absolute difference x2 - x1 stays within a small error range (5px) ensures that only a genuine up-down sliding action is triggered. Finally, in the touchend event (position point 5), the gesture is confirmed by checking that the absolute difference between y1 and y2 is greater than 30px, and the sign of the difference determines the up or down direction. For complex gestures, multiple positions need to be recorded during touchmove, and the gesture is determined from these positions in touchend; the principle is essentially the same.
FIG. 9 is schematic diagram 2 of graphic drawing and orchestration according to an alternative embodiment of the present invention. As shown in FIG. 9, the positions of multiple sliding track points are sampled and recorded at a time interval (100ms), forming a point array [{x1,y1},{x2,y2},...,{xn,yn}]; if these points are determined not to lie on a single straight line, a circle-selection action is triggered. The response module of the circle-selection action then only needs to loop over every node graphic in the designer and check whether each point of the graphic's frame lies within the region described by the point array (taking the top, bottom, left, and right extreme vertices of the point array and checking containment); if every point of a graphic node is within the range, the node is shown as selected. Regarding the use of touch time, taking double touch as an example: the module records the start time of the first touch on touchstart; if, at the second touch, the interval between the two touches is less than a certain value (250ms), it is treated as a double touch; similarly, a touch held longer than a certain value is treated as a long touch.
Step S706: the touch action monitoring module 64 listens for the parsed actions.
The touch action monitoring module 64 listens only for the actions and gestures generated by the analysis module, and responds to them through pre-bound response methods; for example, the action of tapping a node and sliding is bound to the slide-connection response module. In a javascript implementation, the binding operation can be realized as multiple callback methods.
Step S708: the touch action response module 66 responds to the actions.
In a web graphic orchestration application, whether the graphic nodes are rendered with canvas, svg, or similar technologies, the position and area of each graphic node can be obtained. The touch action response module 66 determines the drawing action to perform by aggregating the location of the action, the range of the action, and the information of the graphic nodes within that range.
FIG. 10 is schematic diagram 3 of graphic drawing and orchestration according to an alternative embodiment of the present invention. As shown in FIG. 10, the specific processing that different response modules perform on this information includes:
1. Slide-to-connect on a single graphic: when the tap-slide response module is invoked, it checks that the start position of the action lies within the range of a graphic node and that the sliding distance is not less than a certain value (e.g. 30px), and then performs a connection operation starting from that node and ending at the end position of the action.
2. Connecting two graphics: when the touch actions occur on two graphic nodes, the sliding directions point toward each other, and the sliding distance is not less than a certain value (30px), the connection action between the two nodes is performed.
3. Copying a single graphic: when the first contact action is a long touch whose contact range is on a graphic, the second contact action starts within the range of that graphic, and its sliding distance is greater than a certain value, a copy action is performed; the copied graphic is placed at the end position of the second contact.
4. Simultaneous multi-point connection: when multiple contacts lie within the ranges of different graphic nodes and slide toward another graphic, a multi-point simultaneous connection operation is performed.
The scenarios above describe only some of the responses; in actual orchestration, the response modules can be further extended for complex gestures, for example a circle-selection gesture that selects all graphics within the circled range, or tap-based screen brushing for batch node creation, and so on.
Step S710: the graphic drawing module 68 redraws according to the drawing action.
The graphic drawing module 68 draws graphic connections and graphic nodes according to the drawing action; the module is implemented with graphic drawing technologies supported by html5, such as canvas and svg.
Compared with previous WEB graphic orchestration products that supported only mouse click events, the method above has the following characteristics: 1) without changing the original web technology architecture, the response to touch events can be used to extend the original graphic orchestration functions, which suits functional optimization for touch-screen terminals; 2) because multi-touch responses are supported, more different touch actions and gestures can be customized on touch-screen terminals, and graphics can be drawn in response to different actions, improving the efficiency of touch design. Through this method, a user on a touch-screen terminal needs no mouse: with only multi-finger touches, the graphic orchestration tasks that previously could be completed only on a PC can be completed easily and efficiently, greatly improving the user experience of graphical orchestration products on touch-screen terminals.
Obviously, those skilled in the art should understand that the modules or steps of the embodiments of the present invention above can be implemented by a general-purpose computing device; they can be centralized on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from the one here, or they can be made into individual integrated-circuit modules, or multiple modules or steps among them can be fabricated into a single integrated-circuit module.
The above are only optional embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within its protection scope.
Industrial Applicability
Through the embodiments of the present invention, without changing the original web technology architecture, the response to touch events can be used to extend the original graphic orchestration functions, which suits functional optimization for touch-screen terminals. In addition, because multi-touch responses are supported, more different touch actions and gestures can be customized on touch-screen terminals, and graphics can be drawn in response to different actions, improving the efficiency of touch design. Through the embodiments of the present invention, a user on a touch-screen terminal needs no mouse: with only multi-finger touches, the graphic orchestration tasks that previously could be completed only on a PC can be completed easily and efficiently, greatly improving the user experience of graphical orchestration products on touch-screen terminals.

Claims (10)

  1. A graphics orchestration processing method, comprising:
    receiving touch screen information generated by a graphic processing interface;
    parsing out a predefined touch screen action corresponding to the touch screen information;
    determining, according to the parsed touch screen action, a drawing action for drawing graphic nodes; and
    orchestrating the graphic nodes according to the determined drawing action.
  2. The method according to claim 1, wherein receiving the touch screen information generated by the graphic processing interface comprises:
    receiving multi-touch information generated by the graphic processing interface, wherein the multi-touch information comprises at least one of the following: touch start, touch move, touch end, and touch cancel.
  3. The method according to claim 1, wherein parsing out the predefined touch screen action corresponding to the touch screen information comprises:
    combining the touch start position, the touch displacement information, and the touch time to identify the predefined touch screen action corresponding to the touch screen information.
  4. The method according to claim 1, wherein determining, according to the parsed touch screen action, the drawing action for drawing graphic nodes comprises:
    listening for the touch screen action, responding to it, and generating context information, wherein the context information carries the location of the touch screen action, the range of the touch screen action, and the node information of the graphics within that range; and
    aggregating the context information to determine the drawing action for orchestrating the graphic nodes.
  5. The method according to any one of claims 1 to 4, wherein the touch screen action corresponding to the touch screen information comprises one or more of the following actions:
    short touch, long touch, single touch, double touch, and directional gesture.
  6. 一种图形编排处理装置,包括:
    接收模块,设置为:接收图形处理界面产生的触屏信息;
    解析模块,设置为:解析出所述触屏信息对应的预先定义的触屏动作;
    确定模块,设置为:根据解析出的触屏动作确定对图形节点进行绘制的绘制动作;
    编排模块,设置为:根据确定的所述绘制动作对图形节点进行编排处理。
  7. 根据权利要求6所述的装置,其中,所述接收模块包括:
    接收单元,设置为:接收图形处理界面产生的多点触碰信息,其中,所述多点触碰信息包括以下至少之一:触碰开始、触碰移动、触碰结束、触碰取消。
  8. 根据权利要求6所述的装置,其中,所述解析模块包括:
    解析单元,设置为:结合触碰起始位置、触碰位移信息以及触碰时间解析出所述触屏信息对应的符合预先定义的触屏动作。
  9. 根据权利要求6所述的装置,其中,所述确定模块包括:
    响应单元,设置为:通过监听所述触屏动作,对所述触屏动作进行响应,并产生上下文信息,其中,所述上下文信息中携带有触屏动作所在位置、触屏动作的范围以及所述范围内图形的节点信息;
    汇总单元,设置为:汇总所述上下文信息,确定对图形节点进行编排的绘制动作。
  10. 根据权利要求6至9中任一项所述的装置,其中,所述触屏信息对应的所述触屏动作包括如下动作中的一种或多种:
    短触、长触、单触、双触、方向手势。
PCT/CN2016/082241 2015-09-23 2016-05-16 Graphic orchestration processing method and device WO2017049920A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510610938.XA CN106547460A (zh) 2015-09-23 2015-09-23 Graphic orchestration processing method and device
CN201510610938.X 2015-09-23

Publications (1)

Publication Number Publication Date
WO2017049920A1 true WO2017049920A1 (zh) 2017-03-30

Family

ID=58365612

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/082241 WO2017049920A1 (zh) 2015-09-23 2016-05-16 图形编排处理方法及装置

Country Status (2)

Country Link
CN (1) CN106547460A (zh)
WO (1) WO2017049920A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7021146B2 (ja) * 2019-04-01 2022-02-16 FANUC Corporation Ladder display device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103946787A (zh) * 2012-09-20 2014-07-23 Casio Computer Co., Ltd. Graphics drawing device, graphics drawing method, and recording medium with graphics drawing program recorded thereon
CN104850408A (zh) * 2015-05-28 2015-08-19 Shenzhen Meteorite Communication Equipment Co., Ltd. Method and device for drawing pictures on a smart watch

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8234309B2 (en) * 2005-01-31 2012-07-31 International Business Machines Corporation Method for automatically modifying a tree structure
CN103793178B (zh) * 2014-03-05 2017-02-01 Chengdu Lechuang Information Technology Co., Ltd. Vector graphics editing method for a mobile device touch screen
CN104574467B (zh) * 2014-12-04 2017-09-29 Central China Normal University Concept map automatic generation method and system

Also Published As

Publication number Publication date
CN106547460A (zh) 2017-03-29


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 16847802; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 16847802; Country of ref document: EP; Kind code of ref document: A1)