CN109324864B - Method and device for acquiring man-machine interaction operation information - Google Patents


Info

Publication number
CN109324864B
Authority
CN
China
Prior art keywords
interaction
operation interface
interface frame
information
event
Prior art date
Legal status
Active
Application number
CN201811244256.1A
Other languages
Chinese (zh)
Other versions
CN109324864A (en
Inventor
李东播
Current Assignee
Beijing Winchannel Software Technology Co ltd
Original Assignee
Beijing Winchannel Software Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Winchannel Software Technology Co ltd filed Critical Beijing Winchannel Software Technology Co ltd
Priority to CN201811244256.1A
Publication of CN109324864A
Application granted
Publication of CN109324864B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on GUIs, based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a method and a device for acquiring human-computer interaction operation information. When a human-computer interaction event occurs, the method acquires the operation interface frame associated with the event, determines a unique identification image of that frame, determines operation positioning information from the unique identification image and the interaction position information of the event within the frame, generates interaction action information corresponding to the frame from the operation positioning information and the action type of the event, and then generates a sub-script corresponding to the interaction event based on the frame and the interaction action information, thereby obtaining the human-computer interaction operation information. By running the acquired information with a corresponding functional entity, human-computer interaction behavior can be simulated automatically, realizing data interfacing between systems with different data structures.

Description

Method and device for acquiring man-machine interaction operation information
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for acquiring human-computer interaction operation information.
Background
Practical Internet applications often involve acquiring data from one system and entering it into another. For example, in electronic commerce, an enterprise or a dealer opens a shop on an e-commerce platform, and registered users purchase goods through the platform, generating order data there. That order data then needs to be acquired and entered into the enterprise's or dealer's ERP (Enterprise Resource Planning) system or purchase-sale-inventory system, so that the enterprise or dealer can manage inventory, product, financial and other data.
In theory, data transmission between the ERP or purchase-sale-inventory system and the e-commerce platform can be realized with EDI (Electronic Data Interchange) integration: the two systems transmit and exchange structured data over a computer network, in a standardized format, through a specific data interface. In developing such an interface, however, the interfacing code or business rules must be negotiated according to the development characteristics of both parties' systems, so developing the interface takes a long time and corresponding technical support, and development efficiency is low. When the systems involved come in different kinds or versions, a specific interface must be developed for each system, which takes still more time and further reduces efficiency; for example, ERP systems alone come in as many as 1500 different types or versions. In addition, for many ERP or purchase-sale-inventory systems used by small enterprises, the developer no longer exists or the system version is so old that a data interface simply cannot be supported, making interface development for such systems difficult.
It is therefore unrealistic to rely on EDI integration to solve the data interfacing problem between arbitrary pairs of systems. In fact, data interfacing among many systems is still completed by manual entry; for example, a dealer manually enters order data generated by the e-commerce platform into its own ERP system. How to realize data interfacing between systems with different data structures, and so replace manual entry, thus remains a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application provides a method and a device for acquiring man-machine interaction operation information, which are used to realize data interfacing between systems with different data structures and thereby replace manual entry.
In a first aspect, the present application provides a method for acquiring human-computer interaction operation information, where the method includes:
when a human-computer interaction event occurs, acquiring interaction event information; the interactive event information comprises a current operation interface frame associated with the interactive event and interactive position information of the interactive event in the operation interface frame;
determining a unique identification image of the operation interface frame;
determining operation positioning information according to the unique identification image and the interactive position information;
generating interactive action information corresponding to the operation interface frame according to the operation positioning information and the action type of the interactive event;
establishing a sub-script corresponding to the interaction event based on the operation interface frame and the interaction action information;
and generating human-computer interaction operation information, comprising the sub-scripts and their execution sequence, according to the logical relationships among the sub-scripts corresponding to the plurality of interaction events.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the unique identification image includes images corresponding to one or more rectangular areas in a preset area of the interaction location.
With reference to the first aspect or the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the determining a unique identification image of an operation interface frame includes:
selecting one or more rectangular areas related through position layout in a preset area of the interaction position;
judging whether the image corresponding to the selected rectangular area or the plurality of rectangular areas related through the position layout has uniqueness relative to the operation interface frame through pixel comparison;
if the image corresponding to the selected one or the plurality of rectangular areas associated through the position layout has uniqueness relative to the operation interface frame, determining the image corresponding to the one or the plurality of rectangular areas associated through the position layout as a unique identification image of the operation interface frame;
if the image corresponding to the selected rectangular area or the rectangular areas related through the position layout does not have uniqueness relative to the operation interface frame, the step of selecting another rectangular area in the preset area of the interaction position is executed, and the other rectangular area is related to the selected rectangular area or rectangular areas through the position layout.
With reference to the first aspect or the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the determining, by pixel comparison, whether the image corresponding to the selected one or the plurality of rectangular areas associated by the position layout has uniqueness with respect to the operation interface frame includes:
performing pixel comparison on the selected image corresponding to one or more rectangular areas associated through position layout and the images corresponding to the rest areas of the operation interface frame;
and if no region whose pixels are identical to the image of the selected rectangular area exists in the remaining regions of the operation interface frame, and the remaining regions contain no set of regions whose pixels are identical to the images of the plurality of rectangular areas associated by the position layout and whose mutual position layout is the same as that of the plurality of rectangular areas, determining that the image corresponding to the selected one or plurality of rectangular areas associated by the position layout has uniqueness relative to the operation interface frame.
With reference to the first aspect or the first three possible implementation manners of the first aspect, in a fourth possible implementation manner of the first aspect, the determining operation positioning information according to the unique identification image and the interaction position information includes:
acquiring the current position coordinates of the unique identification image in the operation interface frame;
analyzing the interactive position information to obtain an interactive position coordinate;
and determining the relative position relationship between the interaction position and the position of the unique identification image according to the current position coordinate and the interaction position coordinate of the unique identification image in the operation interface frame, and taking the unique identification image and the relative position relationship as operation positioning information.
With reference to the first aspect or the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, the determining, according to the current position coordinate and the interaction position coordinate of the unique identification image in the operation interface frame, a relative position relationship between the interaction position and a position where the unique identification image is located includes:
if the unique identification image comprises an image corresponding to a rectangular area, determining the relative position relationship between the interaction position and the image corresponding to the rectangular area according to the current position coordinate and the interaction position coordinate of the rectangular area in the operation interface frame;
and if the identification image comprises a plurality of rectangular areas, determining the relative position relation between the interaction position and the corresponding image of each rectangular area according to the current position coordinate and the interaction position coordinate of each rectangular area in the operation interface frame.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, after the creating a sub-script corresponding to the interaction event based on the operation interface frame and the interaction action information, the method further includes:
checking whether there is an erroneous or redundant sub-script;
if so, deleting the erroneous or redundant sub-script.
With reference to the first aspect or the sixth possible implementation manner of the first aspect, in a seventh possible implementation manner of the first aspect, after the deleting an erroneous or redundant sub-script, the method further includes:
judging whether the occurred interactive events form a preset event complete set or not;
and if the occurred interaction events do not form the preset event complete set, triggering those events in the event complete set which have not occurred, and executing the step of acquiring the interaction event information.
With reference to the first aspect, in an eighth possible implementation manner of the first aspect, after the obtaining interaction event information when the human-computer interaction event occurs, the method further includes:
and adding a sequence code to the operation interface frame associated with the interactive event according to the occurrence time sequence of the interactive event.
In a second aspect, an embodiment of the present application provides an apparatus for acquiring human-computer interaction operation information, where the apparatus includes:
an acquisition unit, used for acquiring interaction event information when a human-computer interaction event occurs; the interaction event information comprises a current operation interface frame associated with the interaction event and interaction position information of the interaction event in the operation interface frame;
the first determining unit is used for determining the unique identification image of the operation interface frame;
the second determining unit is used for determining operation positioning information according to the unique identification image and the interactive position information;
the first generating unit is used for generating interactive action information corresponding to the operation interface frame according to the operation positioning information and the action type of the interactive event;
the establishing unit is used for establishing a sub-script corresponding to the interaction event based on the operation interface frame and the interaction action information;
and the second generating unit is used for generating the human-computer interaction operation information, comprising the sub-scripts and their execution sequence, according to the logical relationships among the sub-scripts corresponding to the plurality of interaction events.
The application provides a method and a device for acquiring human-computer interaction operation information. When a human-computer interaction event occurs, the operation interface frame associated with the event is acquired, a unique identification image of the frame is determined, operation positioning information is determined from the unique identification image and the interaction position information of the event within the frame, interaction action information corresponding to the frame is generated from the operation positioning information and the action type of the event, and a sub-script corresponding to the interaction event is then generated based on the frame and the interaction action information, yielding the human-computer interaction operation information. Running this information in a corresponding functional entity automatically simulates human-computer interaction behavior, realizing data interfacing between systems with different data structures and replacing manual operation.
Drawings
To explain the technical solution of the present application more clearly, the drawings needed in the embodiments are briefly described below. Those skilled in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a flowchart illustrating a method for acquiring information of human-computer interaction operation according to an exemplary embodiment of the present application;
FIG. 2 is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 3(a) is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 3(b) is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 4(a) is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 4(b) is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for determining a unique identification image of an operation interface frame according to an exemplary embodiment;
FIG. 6 is a schematic diagram of an application scenario according to an embodiment of the present application;
FIG. 7 is a flow chart illustrating a method of determining operational positioning information according to one exemplary embodiment of the present application;
FIG. 8 is a flowchart illustrating another method for obtaining information related to human-computer interaction operations according to an exemplary embodiment of the present application;
FIG. 9 is a flowchart illustrating a further method for obtaining information about human-computer interaction operations according to an exemplary embodiment of the present application;
FIG. 10 is a block diagram of an apparatus for obtaining information of human-computer interaction operation according to an exemplary embodiment of the present application;
fig. 11 is a block diagram of another apparatus for acquiring human-computer interaction operation information according to an exemplary embodiment of the present application.
Detailed Description
To help those skilled in the art better understand the technical solution of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in those embodiments. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The method and device for acquiring human-computer interaction operation information provided by the application are not limited to scenarios in which data is acquired from one system and entered into another; they can also be applied to other scenarios of regular, repeated human-computer interaction, to solve the technical problems of frequent operation and low user efficiency.
The embodiments of the present application provide a method and a device for acquiring man-machine interaction operation information. By running this information with a corresponding functional entity, large amounts of data can be entered automatically into a specific system, such as the ERP system, without developing a dedicated interfacing interface. This saves development resources, replaces manual operation, and solves the data interfacing problem between systems with different data structures.
Example one
Referring to fig. 1, the method for acquiring human-computer interaction operation information provided by the present application includes the following steps:
step S110, when a human-computer interaction event occurs, acquiring interaction event information; the interactive event information comprises a current operation interface frame associated with the interactive event and interactive position information of the interactive event in the operation interface frame;
the man-machine interaction means that information interaction between a user and a computer is realized through computer input equipment and computer output equipment. The user inputs information on the machine through corresponding interactive actions (response actions) according to a large amount of related information and prompt requests provided by the output or display equipment of the machine. In the present application, an interactive event may be understood as an event in which a user inputs information on a machine through a corresponding interactive action. The type of interaction that triggers an interaction event varies depending on the input mode, e.g., mouse click type (mouse input), keyboard input type, etc.
In this embodiment, interaction event information is acquired when a human-computer interaction event occurs. It includes the operation interface frame at the time of the event, that is, the current operation interface frame when the event occurs; for example, when a human-computer interaction event is detected, a picture frame of the current operation interface is captured. Generally, the operation interface frame is the full-screen picture of an operation window, such as the Auto-CAD operation window shown in fig. 2. By binding the acquired operation interface frame to the corresponding interaction event, the application associates each interaction event with a specific frame. A frame can carry a timestamp, or a sequence code derived from it, to distinguish the frames of different interaction events and to determine their order.
The interaction event information also includes the interaction position information of the event within the operation interface frame. In this embodiment, the interaction position can be understood as the location or area of the frame where the user's triggering response action occurred, such as a mouse-click position or a keyboard input area. "Interaction position information" is the quantitative description of that location or area and therefore contains position coordinates: if a position coordinate is one datum, the interaction position information is a set of such data, e.g., the coordinates of a mouse-click position in the frame, or the set of boundary point coordinates of a keyboard input area.
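As a concrete illustration, here is a minimal Python sketch of the interaction event record described above; the type and field names (InteractionEvent, capture_frame, and so on) are illustrative assumptions, not identifiers from the patent:

```python
import time
from dataclasses import dataclass
from typing import Callable, Tuple

import numpy as np

@dataclass
class InteractionEvent:
    action_type: str            # e.g. "mouse_click" or "keyboard_input"
    position: Tuple[int, int]   # interaction position coordinates within the frame
    frame: np.ndarray           # operation interface frame bound to this event
    timestamp: float            # basis for the sequence code mentioned above

def on_interaction(action_type: str, x: int, y: int,
                   capture_frame: Callable[[], np.ndarray]) -> InteractionEvent:
    """Bind the current operation interface frame to the interaction event."""
    return InteractionEvent(action_type, (x, y), capture_frame(), time.time())
```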
Step S120, determining a unique identification image of the operation interface frame;
in the present embodiment, the unique identification image is a part of the image of the operation interface frame. In a more specific implementation manner, the unique identification image may include an image corresponding to a rectangular area in a preset area of the interaction location, for example, an image corresponding to an area B shown in fig. 3 (a); it is also possible to include images corresponding to a plurality of rectangular regions within a preset region of the interaction location, in which case the unique identification image can be understood as a collection of several small image regions, such as the images corresponding to regions B1, B2, and B3 shown in fig. 3 (B).
It should be noted that the interaction position corresponds to a preset area, which can be understood as the vicinity of the interaction position. The preset area may be defined dynamically from the position coordinates of the interaction position in the operation interface frame and/or from the size (width, length, area, radius, etc.) of the preset area. For example, in the operation interface frame shown in fig. 4(a), the interaction position has coordinates L1(l11, l12), and the preset area corresponding to L1 is A1; in the operation interface frame of fig. 4(b), the interaction position has coordinates L2(l21, l22), and the preset area corresponding to L2 is A2.
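A minimal sketch of how such a preset area might be computed, assuming a numpy frame; the default half-sizes are illustrative, since the patent leaves the size preset or dynamic:

```python
from typing import Tuple

import numpy as np

def preset_region(frame: np.ndarray, pos: Tuple[int, int],
                  half_w: int = 80, half_h: int = 60) -> Tuple[int, int, int, int]:
    """Rectangle (x0, y0, x1, y1) of the preset area around the interaction
    position, clipped to the frame bounds."""
    h, w = frame.shape[:2]
    x, y = pos
    return (max(0, x - half_w), max(0, y - half_h),
            min(w, x + half_w), min(h, y + half_h))
```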
In the technical scheme of the application, the unique identification image plays two roles. First, it represents the complete operation interface frame to which it belongs: storing the unique identification image in the processor cache instead of the full frame reduces the amount of data processed and stored. Second, it is used to locate the interaction position when the finally obtained human-computer interaction operation information is run, so that the corresponding action can be executed at that position.
Based on the above two aspects, in this embodiment, the step shown in fig. 5 is adopted to determine the unique identification image of the operation interface frame:
in step S121, selecting one or more rectangular regions associated by position layout within a preset region of the interaction position;
In a specific implementation, the image corresponding to the preset area of the interaction position is scanned to obtain pixel distribution data for that area, and one or more rectangular areas with relatively distinctive features are selected by analyzing and processing that data.
When a plurality of rectangular regions are selected, the positional layout or relative positional relationship of the rectangular regions is deterministic, that is, the plurality of rectangular regions are associated with each other by the deterministic positional layout. When the position of any one rectangular area is determined, the positions of the rest rectangular areas are determined accordingly.
It should be noted that, the size of the rectangular area is not limited in the present application, and in a specific implementation, the size data may be preset, or may be dynamically adjusted according to the pixel distribution data.
In step S122, whether the image corresponding to the selected one or multiple rectangular areas associated with each other through position layout has uniqueness with respect to the operation interface frame is determined through pixel comparison;
The pixel sequence of the image corresponding to the selected rectangular area or areas is compared with the remaining pixel regions of the operation interface frame (excluding the parts occupied by the rectangular areas themselves) to judge whether an identical pixel distribution appears elsewhere in the frame.
For a single rectangular area: if a region whose pixels match the rectangular area's image exists among the remaining pixel regions of the operation interface frame, the image does not have uniqueness relative to the frame; otherwise, it does.
For a plurality of rectangular areas associated by position layout: if the remaining pixel regions of the operation interface frame contain regions with the same position layout as those rectangular areas and matching pixel distributions at the corresponding positions, the images do not have uniqueness relative to the frame; otherwise, they do.
Based on this, the present embodiment determines whether the image corresponding to the selected one or the plurality of rectangular regions associated by the position layout has uniqueness with respect to the operation interface frame by:
performing pixel comparison on the selected image corresponding to one or more rectangular areas associated through position layout and the images corresponding to the rest areas of the operation interface frame;
and if no region whose pixels are identical to the image of the selected rectangular area exists in the remaining regions of the operation interface frame, and the remaining regions contain no set of regions whose pixels are identical to the images of the plurality of rectangular areas associated by the position layout and whose mutual position layout is the same as that of the plurality of rectangular areas, determining that the image corresponding to the selected one or plurality of rectangular areas associated by the position layout has uniqueness relative to the operation interface frame.
For example, referring to fig. 6, the selected rectangular regions C1, C2, and C3 are associated by a position layout; elsewhere in the operation interface shown in fig. 6, regions C11, C22, and C33 match the pixels of C1, C2, and C3 and have the same position layout, so C1, C2, and C3 do not have uniqueness relative to the operation interface frame. Absent such matches, they would.
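A brute-force sketch of this uniqueness judgment, assuming preprocessed numpy frames and rectangles given as (x0, y0, x1, y1); the function names are illustrative, and a real implementation would use a faster matcher than a full pixel scan:

```python
import numpy as np

def patch(frame, rect):
    x0, y0, x1, y1 = rect
    return frame[y0:y1, x0:x1]

def anchor_occurrences(frame, rect):
    """Top-left corners where the anchor rectangle's pixels repeat verbatim
    elsewhere in the frame (its own location is excluded)."""
    p = patch(frame, rect)
    ph, pw = p.shape[:2]
    fh, fw = frame.shape[:2]
    return [(x, y)
            for y in range(fh - ph + 1)
            for x in range(fw - pw + 1)
            if (x, y) != (rect[0], rect[1])
            and np.array_equal(frame[y:y + ph, x:x + pw], p)]

def is_unique(frame, rects):
    """True if the selected rectangle(s), kept in their fixed position layout,
    match no other location in the operation interface frame."""
    fh, fw = frame.shape[:2]
    ax, ay = rects[0][0], rects[0][1]
    for hx, hy in anchor_occurrences(frame, rects[0]):
        dx, dy = hx - ax, hy - ay            # candidate translation of the layout
        matched = True
        for x0, y0, x1, y1 in rects:
            s = (x0 + dx, y0 + dy, x1 + dx, y1 + dy)
            if (s[0] < 0 or s[1] < 0 or s[2] > fw or s[3] > fh
                    or not np.array_equal(patch(frame, s),
                                          patch(frame, (x0, y0, x1, y1)))):
                matched = False
                break
        if matched:                          # same pixels and same layout elsewhere
            return False
    return True
```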
In step S123, if the selected one or the plurality of rectangular areas associated by the position layout correspond to an image having uniqueness with respect to the operation interface frame, determining that the one or the plurality of rectangular areas associated by the position layout correspond to an image as a unique identification image of the operation interface frame; otherwise, step S121 is executed, namely the step of selecting another rectangular area within the preset area of the interaction position, where the another rectangular area has a position layout association with the selected one or more rectangular areas.
In some preferred embodiments, the method further includes, before step S121, a preprocessing step: preprocessing the operation interface frame.
In this embodiment, the preprocessing may include grayscale conversion and binarization of the operation interface frame, which reduces the computation of the subsequent pixel comparison and reduces or eliminates errors in the comparison result.
Preprocessing the frame image in advance thus serves two purposes: on the one hand, it reduces the data processing and storage load; on the other hand, when the human-computer interaction operation information is executed, the interaction position on the operation interface can be located from the stored relationship between the identification image and the interaction position, so that the corresponding action is executed at that position.
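A minimal preprocessing sketch under those assumptions (an RGB uint8 frame and a fixed threshold; the patent does not prescribe a particular binarization method):

```python
import numpy as np

def preprocess(frame: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Grayscale conversion followed by binarization."""
    gray = frame.astype(np.float32) @ np.array([0.299, 0.587, 0.114], np.float32)
    return (gray >= threshold).astype(np.uint8)   # 1 = light pixels, 0 = dark
```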
Step S130, determining operation positioning information according to the unique identification image and the interactive position information;
When the human-computer interaction operation information is run by the functional entity, the operation positioning information is used to locate the interaction position while simulating the human-computer interaction behavior, so that the corresponding action is executed at that position in place of manual operation.
The present embodiment determines the operational positioning information by the steps shown in fig. 7:
step S131, acquiring the current position coordinates of the unique identification image in the operation interface frame;
It should be noted that the two-dimensional coordinate system of the operation interface frame is established in advance and serves as the reference for every position or positional relationship described in the present application. Since the unique identification image is one or several regions of the frame, i.e., a set of points, the coordinates of a chosen boundary point or of its center point can be taken as its current position coordinates. Because the actual sizes of the frame and the identification image change under different scaling ratios, the position coordinates of the interaction position or the identification image are stored as relative values adapted to the scaling ratio, not absolute values.
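One way to realize the "relative values" noted above is to store coordinates as fractions of the current frame size; a sketch with illustrative names:

```python
def to_relative(pos, frame_size):
    """Express a pixel position as fractions of the frame size so it
    survives scaling or resolution changes."""
    (x, y), (w, h) = pos, frame_size
    return x / w, y / h

def to_absolute(rel_pos, frame_size):
    """Map a stored relative position back to pixels on the current frame."""
    (rx, ry), (w, h) = rel_pos, frame_size
    return round(rx * w), round(ry * h)
```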
Step S132, analyzing the interactive position information to obtain an interactive position coordinate;
step S133, determining a relative position relationship between the interaction position and the position of the unique identification image according to the current position coordinate and the interaction position coordinate of the unique identification image in the operation interface frame, and taking the unique identification image and the relative position relationship as operation positioning information.
If the unique identification image comprises an image corresponding to a rectangular area, determining the relative position relationship between the interaction position and the image corresponding to the rectangular area according to the current position coordinate and the interaction position coordinate of the rectangular area in the operation interface frame;
and if the identification image comprises a plurality of rectangular areas, determining the relative position relation between the interaction position and the corresponding image of each rectangular area according to the current position coordinate and the interaction position coordinate of each rectangular area in the operation interface frame.
It should be noted that the application uses the relative positional relationship between the interaction position and the unique identification image as the operation positioning information. When the human-computer interaction operation information is run by the functional entity, the position of the unique identification image is first searched for on the execution interface, and the interaction position is then located from that relative relationship, so that the corresponding action of the human-computer interaction behavior is simulated at the interaction position in place of manual operation.
Because the relative positional relationship between the unique identification image and the interaction position does not change, the interaction position can still be located from the actual position of the unique identification image even if that image's position (position coordinates) on the execution interface has changed.
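A sketch of this two-step positioning; it assumes the identification rectangles have already been found at new corner coordinates at replay time, and all names are illustrative:

```python
def positioning_info(rects, interaction_pos):
    """Offsets from each identification rectangle's top-left corner to the
    interaction position; together with the image, this is the operation
    positioning information."""
    ix, iy = interaction_pos
    return [(ix - x0, iy - y0) for (x0, y0, _x1, _y1) in rects]

def locate_interaction(found_corners, offsets):
    """Replay time: once the identification image is found at new corners,
    any stored offset recovers the interaction position."""
    (fx, fy), (dx, dy) = found_corners[0], offsets[0]
    return fx + dx, fy + dy
```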
In step S140, interaction action information corresponding to the operation interface frame is generated according to the operation positioning information and the action type of the interaction event;
In this embodiment, the interaction action information corresponding to the operation interface frame can be understood as information comprising the action type of the interaction event together with the operation positioning information. When the human-computer interaction operation information is run by the functional entity, the interaction action information is used to simulate the human-computer interaction behavior: the interaction position is located, and an action of the corresponding type is executed there in place of manual operation.
In step S150, a sub-script corresponding to the interaction event is established based on the operation interface frame and the interaction action information; and in step S160, the human-computer interaction operation information, including the sub-scripts and their execution sequence, is generated according to the logical relationships among the sub-scripts corresponding to the plurality of interaction events.
If a script is a combination of commands controlling a functional entity to perform actions, a sub-script as described herein is one command in that combination. In the embodiments of the present application, each operation interface frame corresponds to a specific interaction event, and so does the sub-script generated from that frame and the corresponding interaction action information. The human-computer interaction operation information can therefore be understood as a complete execution script formed from multiple sub-scripts after logical processing. When the script is run by the functional entity, each sub-script executes according to the determined logical relationships, automatically reproducing the interaction event it corresponds to.
In this embodiment, the logical relationships among the sub-scripts include judgments, jumps, branches, and the like. For example, when a popup contains the two prompts "yes" and "no", the screens reached by clicking "yes" or "no" belong to two branches. When a login page pops up because the current session has timed out and the user must log in again, the logical relationship between the login screen's sub-script and that of the screen before the jump is a jump.
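A sketch of how sub-scripts and these logical relationships (sequence, branch, jump) might be represented and executed; the structure and field names are assumptions, not the patent's internal format:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

@dataclass
class SubScript:
    action_type: str                     # e.g. "click" or "type_text"
    positioning: dict                    # identification image + relative offsets
    next_default: Optional[str] = None   # plain sequential successor
    branches: Dict[str, str] = field(default_factory=dict)  # e.g. {"yes": "s7", "no": "s9"}

def run(scripts: Dict[str, SubScript], start: str,
        execute: Callable[[SubScript], None],
        judge: Callable[[SubScript], Optional[str]]) -> None:
    """Execute each sub-script, then follow the branch chosen by inspecting
    the screen (judge) or fall through to the default successor; a timeout
    that forces a re-login is modeled as a branch back to the login sub-script."""
    sid: Optional[str] = start
    while sid is not None:
        sub = scripts[sid]
        execute(sub)                     # simulate the action at the located position
        sid = sub.branches.get(judge(sub), sub.next_default)
```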
As this embodiment shows, when a human-computer interaction event occurs, the method acquires the operation interface frame associated with the event, determines the unique identification image of the frame, determines operation positioning information from the unique identification image and the interaction position information of the event within the frame, generates interaction action information corresponding to the frame from the operation positioning information and the action type of the event, and then generates the sub-script corresponding to the event based on the frame and the interaction action information, thereby acquiring the human-computer interaction operation information. Running this information in a corresponding functional entity automatically simulates human-computer interaction behavior, realizing data interfacing between systems with different data structures and replacing manual operation.
Example two
Referring to fig. 8, the present embodiment provides another method for acquiring human-computer interaction operation information, including the following steps:
step S210, when a human-computer interaction event occurs, acquiring interaction event information; the interactive event information comprises a current operation interface frame associated with the interactive event and interactive position information of the interactive event in the operation interface frame;
step S220, determining a unique identification image of the operation interface frame;
step S230, determining operation positioning information according to the unique identification image and the interactive position information;
step S240, generating interactive action information corresponding to the operation interface frame according to the operation positioning information and the action type of the interactive event;
step S250, establishing a sub-script corresponding to the interaction event based on the operation interface frame and the interaction action information;
step S260, checking whether an error or redundant sub-script exists; if yes, jumping to step S270, otherwise, jumping to step S280;
step S270, deleting the wrong or redundant sub-scripts;
step S280, generating man-machine interaction operation information including the sub-scripts and the execution sequence of the sub-scripts according to the logic relation among the sub-scripts corresponding to the plurality of interaction events.
The second embodiment differs from the first only in that, after step S150 of the first embodiment, it further includes a step of checking for and deleting erroneous or redundant sub-scripts.
In some practical scenarios, the acquired operation interface frames may contain errors or redundancies. Deleting the sub-scripts built from such frames makes the finally generated human-computer interaction operation information more accurate and more reliably executable.
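A sketch of this check-and-delete step, assuming each sub-script still carries its bound frame (a numpy array) and that "redundant" means a verbatim repeat of the same action on the same frame; the error test is application-specific and left as a caller-supplied predicate:

```python
import hashlib

def frame_digest(frame) -> str:
    """Hash the frame pixels so verbatim duplicate frames can be detected."""
    return hashlib.sha256(frame.tobytes()).hexdigest()

def prune(subscripts, is_error=lambda s: False):
    """Drop sub-scripts flagged as erroneous, then drop redundant repeats."""
    seen, kept = set(), []
    for s in subscripts:
        if is_error(s):
            continue
        key = (s.action_type, frame_digest(s.frame))
        if key in seen:
            continue                     # redundant: same action on the same frame
        seen.add(key)
        kept.append(s)
    return kept
```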
EXAMPLE III
Referring to fig. 9, the present embodiment provides another method for acquiring human-computer interaction operation information, including the following steps:
step S310, when a human-computer interaction event occurs, acquiring interaction event information; the interactive event information comprises a current operation interface frame associated with the interactive event and interactive position information of the interactive event in the operation interface frame;
step S320, determining a unique identification image of the operation interface frame;
step S330, determining operation positioning information according to the unique identification image and the interactive position information;
step S340, generating interactive action information corresponding to the operation interface frame according to the operation positioning information and the action type of the interactive event;
step S350, establishing a sub-script corresponding to the interaction event based on the operation interface frame and the interaction action information;
step S360, checking whether an error or redundant sub script exists; if yes, jumping to step S370, otherwise, jumping to step S380;
step S370, deleting the wrong or redundant sub-script;
step S380, judging whether the occurred interactive events form a preset event complete set; if the occurred interactive events do not form a preset event complete set, jumping to step S390; if the occurred interactive events form a preset event complete set, jumping to step S400;
step S390, triggering those events in the event complete set which have not occurred, and jumping to step S310;
and step S400, generating man-machine interaction operation information comprising the sub-scripts and the execution sequence of the sub-scripts according to the logic relation among the sub-scripts corresponding to the plurality of interaction events.
The third embodiment differs from the second in that, after step S270 of the second embodiment, it further includes a step of supplementing operation interface frames by triggering interaction events that have not occurred.
In this embodiment, all interaction events that may occur while completing a specific task with a specific computer application or system are known in advance, for example the operations for entering order data into the ERP system. The application presets an event complete set from all the known interaction events, and each event in the set can be stored in the form of its corresponding operation interface frame.
In this embodiment, after the erroneous or redundant sub-scripts are deleted, the acquired operation interface frames (the events that have occurred) are compared with the frames in the event complete set to judge whether the occurred events constitute the preset set. If they do, the currently acquired frames are complete and need no supplement; if they do not, the missing frames are supplemented by triggering the events in the set that have not occurred.
In some practical scenarios logical branches arise: multiple branch events cannot be triggered simultaneously, and only one branch can be taken, so the acquired operation interface frames are incomplete. In that case the missing frames can be supplemented in a targeted way by the method of this embodiment, making the finally generated human-computer interaction operation information more complete.
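A sketch of the completeness check, reusing frame_digest from the previous sketch; the corpus is assumed to map event identifiers to reference frames, and the trigger/record hooks are hypothetical:

```python
def missing_events(occurred_frames, corpus):
    """Events from the preset complete set whose operation interface frame
    has not been recorded yet."""
    seen = {frame_digest(f) for f in occurred_frames}
    return [eid for eid, ref_frame in corpus.items()
            if frame_digest(ref_frame) not in seen]

def supplement(occurred_frames, corpus, trigger, record_interaction):
    """Drive the UI down each untaken branch and re-enter acquisition (S310)."""
    for eid in missing_events(occurred_frames, corpus):
        trigger(eid)             # hypothetical hook: raise the missing event
        record_interaction()     # capture its frame and build its sub-script
```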
According to the method for acquiring man-machine interaction operation information provided by the application, when a human-computer interaction event occurs, the operation interface frame associated with the event is acquired, the unique identification image of the frame is determined, operation positioning information is determined from the unique identification image and the interaction position information of the event within the frame, interaction action information corresponding to the frame is generated from the operation positioning information and the action type of the event, and the sub-script corresponding to the event is then generated based on the frame and the interaction action information, yielding the human-computer interaction operation information. Running this information in a corresponding functional entity automatically simulates human-computer interaction behavior, realizing data interfacing between systems with different data structures and replacing manual operation.
Example four
According to the method for acquiring human-computer interaction operation information provided in the foregoing embodiment, a fourth embodiment provides an apparatus for acquiring human-computer interaction operation information, with reference to fig. 10, the apparatus includes:
the acquisition unit U100 is used for acquiring interaction event information when a human-computer interaction event occurs; the interactive event information comprises a current operation interface frame associated with the interactive event and interactive position information of the interactive event in the operation interface frame;
the first determining unit U200 is used for determining a unique identification image of the operation interface frame;
the second determining unit U300 is used for determining operation positioning information according to the unique identification image and the interactive position information;
the first generating unit U400 is configured to generate interaction action information corresponding to the operation interface frame according to the operation positioning information and the action type of the interaction event;
the establishing unit U500 is used for establishing a sub-script corresponding to the interaction event based on the operation interface frame and the interaction action information;
and a second generating unit U600, configured to generate human-computer interaction operation information including the sub-scripts and the execution sequences of the sub-scripts according to the logical relationship between the sub-scripts corresponding to the multiple interaction events.
Preferably, the unique identification image includes images corresponding to one or more rectangular regions within a preset region of the interaction position.
Preferably, the first determining unit U200 determines the unique identification image of the operation interface frame according to the following steps:
selecting one or more rectangular areas related through position layout in a preset area of the interaction position;
judging whether the image corresponding to the selected rectangular area or the plurality of rectangular areas related through the position layout has uniqueness relative to the operation interface frame through pixel comparison;
if the image corresponding to the selected one or the plurality of rectangular areas associated through the position layout has uniqueness relative to the operation interface frame, determining the image corresponding to the one or the plurality of rectangular areas associated through the position layout as a unique identification image of the operation interface frame;
and if the image corresponding to the selected rectangular area, or to the plurality of rectangular areas associated by the position layout, does not have uniqueness relative to the operation interface frame, executing the step of selecting another rectangular area within the preset area of the interaction position, the another rectangular area being associated with the already selected rectangular area or areas through the position layout.
Preferably, the first determining unit U200 determines whether the image corresponding to the selected one or the plurality of rectangular regions associated with each other by the position layout has uniqueness with respect to the operation interface frame by pixel comparison according to the following steps:
performing pixel comparison on the selected image corresponding to one or more rectangular areas associated through position layout and the images corresponding to the rest areas of the operation interface frame;
and if no region whose pixels are identical to the image of the selected rectangular area exists in the remaining regions of the operation interface frame, and the remaining regions contain no set of regions whose pixels are identical to the images of the plurality of rectangular areas associated by the position layout and whose mutual position layout is the same as that of the plurality of rectangular areas, determining that the image corresponding to the selected one or plurality of rectangular areas associated by the position layout has uniqueness relative to the operation interface frame.
Preferably, the second determining unit U300 determines the operation positioning information according to the following steps:
acquiring the current position coordinates of the unique identification image in the operation interface frame;
analyzing the interactive position information to obtain an interactive position coordinate;
and determining the relative position relationship between the interaction position and the position of the unique identification image according to the current position coordinate and the interaction position coordinate of the unique identification image in the operation interface frame, and taking the unique identification image and the relative position relationship as operation positioning information.
Preferably, the second determining unit U300 determines the relative position relationship between the interaction position and the position of the unique identification image according to the following steps:
if the unique identification image comprises an image corresponding to a rectangular area, determining the relative position relationship between the interaction position and the image corresponding to the rectangular area according to the current position coordinate and the interaction position coordinate of the rectangular area in the operation interface frame;
and if the identification image comprises a plurality of rectangular areas, determining the relative position relation between the interaction position and the corresponding image of each rectangular area according to the current position coordinate and the interaction position coordinate of each rectangular area in the operation interface frame.
Preferably, referring to fig. 11, the apparatus further comprises:
a deletion unit U700 for checking whether there is an erroneous or redundant sub-script;
if so, deleting the erroneous or redundant sub-script.
Preferably, the apparatus further comprises:
the triggering unit U800 is used for judging whether the occurred interactive events form a preset event complete set or not;
and if the occurred interaction events do not form the preset event complete set, triggering those events in the event complete set which have not occurred, and executing the step of acquiring the interaction event information.
Preferably, the obtaining unit U100 is further configured to: and adding a sequence code for the acquired operation interface frame associated with the interactive event according to the sequence of the occurrence time of the interactive event.
The application provides a method and a device for acquiring human-computer interaction operation information. When a human-computer interaction event occurs, the operation interface frame associated with the event is acquired, the unique identification image of the frame is determined, operation positioning information is determined from the unique identification image and the interaction position information of the event within the frame, interaction action information corresponding to the frame is generated from the operation positioning information and the action type of the event, and the sub-script corresponding to the interaction event is then generated based on the frame and the interaction action information, yielding the human-computer interaction operation information. Running the acquired information in a corresponding functional entity automatically simulates human-computer interaction behavior, realizing data interfacing between systems with different data structures and replacing manual operation.
In a specific implementation, the present invention further provides a computer storage medium. The storage medium may store a program which, when executed, may perform some or all of the steps of each embodiment of the acquisition method provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be embodied, in essence or in the parts contributing to the prior art, in the form of a software product. The software product may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present invention.
Identical and similar parts among the various embodiments in this specification may be referred to one another. In particular, since the apparatus embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, reference may be made to the description of the method embodiment.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.

Claims (8)

1. A method for acquiring human-computer interaction operation information, characterized by comprising the following steps:
when a human-computer interaction event occurs, acquiring interaction event information; the interactive event information comprises a current operation interface frame associated with the interactive event and interactive position information of the interactive event in the operation interface frame;
determining a unique identification image of the operation interface frame;
determining operation positioning information according to the unique identification image and the interactive position information;
generating interactive action information corresponding to the operation interface frame according to the operation positioning information and the action type of the interactive event;
establishing a sub-script corresponding to the interaction event based on the operation interface frame and the interaction action information;
generating human-computer interaction operation information comprising the sub-scripts and the execution sequence of the sub-scripts according to the logical relationship among the sub-scripts corresponding to the plurality of interaction events;
the unique identification image comprises images corresponding to one or more rectangular areas in a preset area of the interaction position;
the determining of the unique identification image of the operation interface frame comprises the following steps:
selecting, in a preset area of the interaction position, one rectangular area or a plurality of rectangular areas associated through a position layout;
judging, through pixel comparison, whether the image corresponding to the selected rectangular area or to the plurality of rectangular areas associated through the position layout has uniqueness relative to the operation interface frame;
if the image corresponding to the selected rectangular area or plurality of rectangular areas associated through the position layout has uniqueness relative to the operation interface frame, determining that image as the unique identification image of the operation interface frame;
and if the image corresponding to the selected rectangular area or plurality of rectangular areas does not have uniqueness relative to the operation interface frame, executing the step of selecting another rectangular area in the preset area of the interaction position, the other rectangular area being associated with the previously selected rectangular area or areas through the position layout.
2. The method according to claim 1, wherein the judging, through pixel comparison, whether the image corresponding to the selected rectangular area or to the plurality of rectangular areas associated through the position layout has uniqueness relative to the operation interface frame comprises:
performing pixel comparison between the image corresponding to the selected rectangular area or plurality of rectangular areas associated through the position layout and the images corresponding to the remaining areas of the operation interface frame;
and if, in the remaining areas of the operation interface frame, there is no area whose pixels are identical to the image corresponding to the selected rectangular area, and no group of areas whose pixels are identical to the images corresponding to the selected plurality of rectangular areas while sharing the same position layout as those rectangular areas, determining that the image corresponding to the selected rectangular area or plurality of rectangular areas associated through the position layout has uniqueness relative to the operation interface frame.
3. The method according to any one of claims 1 to 2, wherein the determining operation positioning information according to the unique identification image and the interactive position information comprises:
acquiring the current position coordinates of the unique identification image in the operation interface frame;
analyzing the interactive position information to obtain the interaction position coordinates;
and determining the relative positional relationship between the interaction position and the position of the unique identification image according to the current position coordinates of the unique identification image in the operation interface frame and the interaction position coordinates, and taking the unique identification image together with the relative positional relationship as the operation positioning information.
4. The method according to claim 3, wherein the determining the relative positional relationship between the interaction position and the position of the unique identification image according to the current position coordinates of the unique identification image in the operation interface frame and the interaction position coordinates comprises:
if the unique identification image comprises an image corresponding to a single rectangular area, determining the relative positional relationship between the interaction position and that image according to the current position coordinates of the rectangular area in the operation interface frame and the interaction position coordinates;
and if the unique identification image comprises images corresponding to a plurality of rectangular areas, determining the relative positional relationship between the interaction position and the image corresponding to each rectangular area according to the current position coordinates of each rectangular area in the operation interface frame and the interaction position coordinates.
5. The method of claim 1, wherein after establishing a sub-script corresponding to the interaction event based on the operator interface frame and the interaction action information, the method further comprises:
checking whether there is an erroneous or redundant sub-script;
if so, deleting the erroneous or redundant sub-script.
6. The method of claim 5, wherein after the deleting of the erroneous or redundant sub-script, the method further comprises:
judging whether the interactive events that have occurred form a preset complete event set;
and if they do not, triggering the events in the complete event set that have not yet occurred, and executing the step of acquiring the interactive event information.
7. The method according to claim 1, wherein after the acquiring interaction event information when the human-computer interaction event occurs, the method further comprises:
adding a sequence code to the operation interface frame associated with each interactive event according to the order in which the interactive events occurred.
8. An apparatus for acquiring human-computer interaction operation information, the apparatus comprising:
an acquisition unit, which is used for acquiring interaction event information when a human-computer interaction event occurs; the interactive event information comprises a current operation interface frame associated with the interactive event and interactive position information of the interactive event in the operation interface frame;
the first determining unit is used for determining the unique identification image of the operation interface frame;
the second determining unit is used for determining operation positioning information according to the unique identification image and the interactive position information;
the first generating unit is used for generating interactive action information corresponding to the operation interface frame according to the operation positioning information and the action type of the interactive event;
the establishing unit is used for establishing a sub-script corresponding to the interaction event based on the operation interface frame and the interaction action information;
the second generation unit is used for generating human-computer interaction operation information comprising the sub-scripts and the execution sequence of the sub-scripts according to the logical relationship among the sub-scripts corresponding to the plurality of interaction events;
the unique identification image comprises images corresponding to one or more rectangular areas in a preset area of the interaction position;
the first determining unit determines the unique identification image of the operation interface frame according to the following steps:
selecting, in a preset area of the interaction position, one rectangular area or a plurality of rectangular areas associated through a position layout;
judging, through pixel comparison, whether the image corresponding to the selected rectangular area or to the plurality of rectangular areas associated through the position layout has uniqueness relative to the operation interface frame;
if the image corresponding to the selected rectangular area or plurality of rectangular areas associated through the position layout has uniqueness relative to the operation interface frame, determining that image as the unique identification image of the operation interface frame;
and if the image corresponding to the selected rectangular area or plurality of rectangular areas does not have uniqueness relative to the operation interface frame, executing the step of selecting another rectangular area in the preset area of the interaction position, the other rectangular area being associated with the previously selected rectangular area or areas through the position layout.
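As an illustrative aside to the uniqueness test recited above, a brute-force pixel comparison of a candidate rectangular patch against every other same-sized window of the frame could look like the following sketch (NumPy-based; all names are assumptions, and a practical implementation would need a much faster search):

import numpy as np

def patch_is_unique(frame: np.ndarray, x: int, y: int, w: int, h: int) -> bool:
    """True if the w*h patch at (x, y) matches no other same-sized window."""
    patch = frame[y:y + h, x:x + w]
    height, width = frame.shape[:2]
    for yy in range(height - h + 1):
        for xx in range(width - w + 1):
            if (xx, yy) == (x, y):
                continue
            if np.array_equal(frame[yy:yy + h, xx:xx + w], patch):
                return False  # an identical region exists elsewhere in the frame
    return True

frame = np.random.randint(0, 256, size=(60, 80, 3), dtype=np.uint8)
print(patch_is_unique(frame, 10, 10, 8, 8))  # almost surely True for random pixels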
CN201811244256.1A 2018-10-24 2018-10-24 Method and device for acquiring man-machine interaction operation information Active CN109324864B (en)

Priority Applications (1)

Application Number Title
CN201811244256.1A CN109324864B (en) 2018-10-24 2018-10-24 Method and device for acquiring man-machine interaction operation information

Publications (2)

Publication Number Publication Date
CN109324864A CN109324864A (en) 2019-02-12
CN109324864B true CN109324864B (en) 2021-09-21

Family

ID=65263269

Country Status (1)

Country Link
CN (1) CN109324864B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114430823A (en) * 2019-11-06 2022-05-03 西门子股份公司 Software knowledge capturing method, device and system
CN111459837B (en) * 2020-04-16 2021-03-16 大连即时智能科技有限公司 Conversation strategy configuration method and conversation system
CN112347176A (en) * 2020-11-11 2021-02-09 天津汇商共达科技有限责任公司 Data docking method and device based on human-computer interaction behavior
CN112347177A (en) * 2020-11-11 2021-02-09 天津汇商共达科技有限责任公司 Data docking equipment based on human-computer interaction behavior
CN112347178A (en) * 2020-11-11 2021-02-09 天津汇商共达科技有限责任公司 Data docking method and device based on human-computer interaction behavior, terminal and server


Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9946638B1 (en) * 2016-03-30 2018-04-17 Open Text Corporation System and method for end to end performance response time measurement based on graphic recognition

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN104951393A (en) * 2014-03-25 2015-09-30 中国电信股份有限公司 Testing method and device
CN108459848A (en) * 2017-02-20 2018-08-28 深圳市北斗智研科技有限公司 A kind of script acquisition methods and system applied to Excel softwares
CN108579094A (en) * 2018-05-11 2018-09-28 深圳市腾讯网络信息技术有限公司 A kind of user interface detection method and relevant apparatus, system and storage medium

Similar Documents

Publication Title
CN109324864B (en) Method and device for acquiring man-machine interaction operation information
US9841956B2 (en) User interface style guide compliance reporting
US8015239B2 (en) Method and system to reduce false positives within an automated software-testing environment
EP3476092B1 (en) Automation of image validation
JP6653929B1 (en) Automatic determination processing device, automatic determination processing method, inspection system, program, and recording medium
CN109815119B (en) APP link channel testing method and device
US20090234749A1 (en) Order Processing Analysis Tool
US20120198367A1 (en) User interface style guide compliance forecasting
CN109445893A (en) A kind of method and device of the unique identification of determining operation interface frame
CN111435367A (en) Knowledge graph construction method, system, equipment and storage medium
CN107533544B (en) Element identifier generation
CN112817817B (en) Buried point information query method, buried point information query device, computer equipment and storage medium
US20240119481A1 (en) Collaboration management system and method
CN109343852B (en) Method and device for displaying frame pictures of operation interface
CN113687884B (en) File delivery method, device, system, equipment and storage medium
CN113296912A (en) Task processing method, device and system, storage medium and electronic equipment
CN112347177A (en) Data docking equipment based on human-computer interaction behavior
CN112347178A (en) Data docking method and device based on human-computer interaction behavior, terminal and server
CN112347176A (en) Data docking method and device based on human-computer interaction behavior
US11711276B2 (en) Providing fast trigger matching to support business rules that modify customer-support tickets
CN113344550B (en) Flow processing method, device, equipment and storage medium
US20060074936A1 (en) Method and system for generating a report using an object-oriented approach
CN111338941B (en) Information processing method and device, electronic equipment and storage medium
CN112306333A (en) Data filling method and device
CN117093185A (en) Electric control software development method and electric control software development system

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant