CN113168283A - Knowledge acquisition method, device and system for modeling - Google Patents

Info

Publication number: CN113168283A
Application number: CN201880099861.6A
Authority: CN (China)
Prior art keywords: event, modeling, mouse, action, knowledge acquisition
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 陈雪, 曹佃松
Current Assignee: Siemens AG
Original Assignee: Siemens AG
Application filed by Siemens AG
Publication of CN113168283A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00: Computing arrangements using knowledge-based models
    • G06N5/02: Knowledge representation; Symbolic representation
    • G06N5/022: Knowledge engineering; Knowledge acquisition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00: Handling natural language data
    • G06F40/30: Semantic analysis

Abstract

A modeling knowledge acquisition method, device and system comprise the following steps: S1, capturing modeling information based on modeling software on a modeling device, wherein the modeling information comprises an event and a screenshot taken when the event occurs, and the event is a command for operating the modeling software issued to the modeling device in response to a mouse action, keyboard action or touch screen action; S2, analyzing the modeling intent based on the event and the screenshot taken when the event occurs, and extracting and generating a semantic model of the event; S3, storing the semantic model of the event in a knowledge base. The method is suitable for various modeling software, can be executed independently, can reproduce the modeling process, and is efficient and reliable.

Description

Knowledge acquisition method, device and system for modeling

Technical Field
The invention relates to the field of modeling, in particular to a method, a device and a system for acquiring modeling knowledge.
Background
Engineering modeling processes embody a large amount of knowledge, especially expert knowledge, that is of great benefit. Based on this knowledge, junior engineers can gain insights into their modeling work, which may help them improve its quality and efficiency. The above process is called knowledge capturing and reusing.
The prior art generally offers three ways to capture knowledge. The first is to input and compose knowledge manually. This is time-consuming and requires the cooperation of specialists and experts: one must communicate with them to obtain the necessary information and then construct the knowledge from that information by hand. The second is to obtain information from log files. Software event log files are considered the most important resource for providing first-hand process information; however, most commercial modeling software does not expose these log files to users, so it is difficult for users to capture knowledge this way. The last approach is based on Application Programming Interfaces (APIs), which some industrial software provides for developing custom functions.
However, some industrial software neither provides log files containing the user's modeling process information nor offers an API for developing tools that capture such information. Moreover, a user interface is more intuitive than a script, so knowledge acquisition performed at the user-interface level is easier to apply to the modeling process.
Disclosure of Invention
A first aspect of the invention provides a modeling knowledge acquisition method, comprising the following steps: S1, capturing modeling information based on modeling software on a modeling device, wherein the modeling information comprises an event and a screenshot taken when the event occurs, and the event is a command for operating the modeling software issued to the modeling device in response to a mouse action, keyboard action or touch screen action; S2, analyzing the modeling intent based on the event and the screenshot taken when the event occurs, and extracting and generating a semantic model of the event; S3, storing the semantic model of the event in a knowledge base.
Further, the mouse actions include at least one of:
-clicking the left mouse button;
-clicking the right mouse button;
-mouse movement.
Further, the keyboard action includes pressing any key of the keyboard.
Further, the touch screen action includes at least one of:
-clicking on the screen;
-sliding the screen;
-shaking the screen.
Further, the step S1 further includes: capturing a mouse action, keyboard action or touch screen action based on the modeling software, and generating a first event record table based on the action, the timestamp when the action occurs, the related child panel name, the parent panel name, and the key value of the action on the modeling software.
Further, the key value comprises coordinates, input text or an input numerical value.
Further, the step S2 includes: locating the action in the first event record table in the screenshot, and determining the object selected by the action based on object features using image recognition; generating a region of interest based on said object type and object position, and storing said region of interest as an enlarged sub-image; executing an optical character recognition function to obtain the text on the sub-image, associating the information in the first event record table with the text of the object, and generating a second event record table; and analyzing the operation of the modeling software according to the second event record table, extracting and generating a semantic model of the event.
Further, the second event record table includes: object, object type, object characteristics, object location.
Further, when the object in the second event record table is a numerical box, its feature is "a rectangle with a large blank area" and its position is "on the left of the mouse cursor"; when the object is a tag box, its feature is "a narrow rectangle-like shape without 4 edge contours" and its position is "at the mouse cursor position"; when the object is a check box, its feature is "a square blank area" and its position is "on the right of the mouse cursor"; when the object is a menu item, its feature is "a Y coordinate very close to the origin of the window (the upper left corner of the window), with no contour near the mouse position" and its position is "at the mouse cursor position".
Further, semantic models of a plurality of events in the knowledge base are stored as event streams based on the sequence of the events; when the user performs one of these events while modeling again, the knowledge base recommends the next event of the event stream to which the event belongs, or an event executed at the same time, and automatically matches the next event or the simultaneously executed event according to the user's selection.
A second aspect of the invention provides a modeling knowledge acquisition apparatus, comprising: a capturing device, which captures modeling information based on modeling software on a modeling device, wherein the modeling information comprises an event and a screenshot taken when the event occurs, and the event is a command for operating the modeling software issued to the modeling device in response to a mouse action, keyboard action or touch screen action; and an analysis device, which analyzes the modeling intent based on the event and the screenshot taken when the event occurs, extracts and generates a semantic model of the event, and stores the semantic model of the event in a knowledge base.
Further, the mouse actions include at least one of:
-clicking the left mouse button;
-clicking the right mouse button;
-mouse movement.
Further, the keyboard action includes pressing any key of the keyboard.
Further, the touch screen action includes at least one of:
-clicking on the screen;
-sliding the screen;
-shaking the screen.
Further, the capturing device further: captures a mouse action, keyboard action or touch screen action based on the modeling software, and generates a first event record table based on the action, the timestamp when the action occurs, the related child panel name, the parent panel name, and the key value of the action on the modeling software.
Further, the key value comprises coordinates, input text or an input numerical value.
Further, the analysis device locates the action in the first event record table in the screenshot, and determines the object selected by the action based on object features using image recognition; generates a region of interest based on said object type and object position, and stores said region of interest as an enlarged sub-image; executes an optical character recognition function to obtain the text on the sub-image, associates the information in the first event record table with the text of the object, and generates a second event record table; and analyzes the operation of the modeling software according to the second event record table, extracting and generating a semantic model of the event.
Further, the second event record table includes: object, object type, object characteristics, object location.
Further, when the object in the second event record table is a numerical box, its feature is "a rectangle with a large blank area" and its position is "on the left of the mouse cursor"; when the object is a tag box, its feature is "a narrow rectangle-like shape without 4 edge contours" and its position is "at the mouse cursor position"; when the object is a check box, its feature is "a square blank area" and its position is "on the right of the mouse cursor"; when the object is a menu item, its feature is "a Y coordinate very close to the origin of the window (the upper left corner of the window), with no contour near the mouse position" and its position is "at the mouse cursor position".
Further, semantic models of a plurality of events in the knowledge base are stored as event streams based on the sequence of the events; when the user performs one of these events while modeling again, the knowledge base recommends the next event of the event stream to which the event belongs, or an event executed at the same time, and automatically matches the next event or the simultaneously executed event according to the user's selection.
A third aspect of the invention provides a modeling knowledge acquisition system, comprising: a processor; and a memory coupled to the processor, the memory having instructions stored therein that, when executed by the processor, cause the system to perform actions comprising: S1, capturing modeling information based on modeling software on a modeling device, wherein the modeling information comprises an event and a screenshot taken when the event occurs, and the event is a command for operating the modeling software issued to the modeling device in response to a mouse action, keyboard action or touch screen action; S2, analyzing the modeling intent based on the event and the screenshot taken when the event occurs, and extracting and generating a semantic model of the event; S3, storing the semantic model of the event in a knowledge base.
Further, the mouse actions include at least one of:
-clicking the left mouse button;
-clicking the right mouse button;
-mouse movement.
Further, the keyboard action includes pressing any key of the keyboard.
Further, the touch screen action includes at least one of:
-clicking on the screen;
-sliding the screen;
-shaking the screen.
Further, the action S1 further includes: capturing a mouse action, keyboard action or touch screen action based on the modeling software, and generating a first event record table based on the action, the timestamp when the action occurs, the related child panel name, the parent panel name, and the key value of the action on the modeling software.
Further, the key value comprises coordinates, input text or an input numerical value.
Further, the action S2 includes: locating the action in the first event record table in the screenshot, and determining the object selected by the action based on object features using image recognition; generating a region of interest based on said object type and object position, and storing said region of interest as an enlarged sub-image; executing an optical character recognition function to obtain the text on the sub-image, associating the information in the first event record table with the text of the object, and generating a second event record table; and analyzing the operation of the modeling software according to the second event record table, extracting and generating a semantic model of the event.
Further, the second event record table includes: object, object type, object characteristics, object location.
Further, when the object in the second event record table is a numerical box, its feature is "a rectangle with a large blank area" and its position is "on the left of the mouse cursor"; when the object is a tag box, its feature is "a narrow rectangle-like shape without 4 edge contours" and its position is "at the mouse cursor position"; when the object is a check box, its feature is "a square blank area" and its position is "on the right of the mouse cursor"; when the object is a menu item, its feature is "a Y coordinate very close to the origin of the window (the upper left corner of the window), with no contour near the mouse position" and its position is "at the mouse cursor position".
Further, semantic models of a plurality of events in the knowledge base are stored as event streams based on the sequence of the events; when the user performs one of these events while modeling again, the knowledge base recommends the next event of the event stream to which the event belongs, or an event executed at the same time, and automatically matches the next event or the simultaneously executed event according to the user's selection.
A fourth aspect of the invention provides a computer program product, tangibly stored on a computer-readable medium and comprising computer-executable instructions that, when executed, cause at least one processor to perform the method of the first aspect of the invention.
A fifth aspect of the invention provides a computer-readable medium having stored thereon computer-executable instructions that, when executed, cause at least one processor to perform the method of the first aspect of the invention.
The modeling knowledge acquisition mechanism provided by the invention can reproduce a modeling process based on events and their screenshots, providing a good resource for knowledge base extraction. The invention can be used with various modeling software: it does not depend on specific modeling software, does not interfere with the functions of any specific modeling software, and can be executed independently. It is applicable not only to one modeling software tool but to multiple tools, and remains applicable when a user needs to switch between multiple modeling software tools (e.g., multi-software co-simulation). For modeling software tools that provide an Application Programming Interface (API), the invention can still be used to perform modeling knowledge acquisition without additional software development effort. The modeling knowledge acquisition mechanism runs in the background and does not disturb designers' modeling work or their way of working.
In addition, the invention is applicable not only to Windows systems but also to Mac or Linux systems, etc. It is based on OS (operating system) functionality, such as the Windows hook or the screen capture function. It can also be used with many other types of software, without changing their original functionality or UI, as long as the software does not block the Windows hook or the screen capture function. The invention is independent of the UI or functions of the modeling software itself, so the user does not need to develop or revise it for different modeling software. Since hook, screen capture and Optical Character Recognition (OCR) functions are available on Windows, Mac, Linux and similar systems, the modeling knowledge acquisition function provided by the invention can be developed for different OS platforms.
In a word, the modeling knowledge acquisition mechanism provided by the invention has a wide application range, meets the reuse requirements of modeling knowledge acquisition, and extends the functionality of modeling software.
Drawings
FIG. 1 is an architectural diagram of a modeling knowledge acquisition mechanism in accordance with a specific embodiment of the present invention;
FIG. 2 is a first screen shot of a modeling knowledge acquisition mechanism, according to a specific embodiment of the present invention;
FIG. 3 is an object mapping table of a modeling knowledge acquisition mechanism, according to a specific embodiment of the present invention;
FIG. 4 is a diagram of a semantic model structure of events of a modeling knowledge acquisition mechanism in accordance with a specific embodiment of the present invention;
FIGS. 5a-5c are screen shots of a modeling knowledge acquisition mechanism according to another embodiment of the invention;
FIG. 6 is a diagram of a semantic model structure of events of a modeling knowledge acquisition mechanism in accordance with yet another specific embodiment of the present invention.
Detailed Description
The following describes a specific embodiment of the present invention with reference to the drawings.
The invention provides a knowledge acquisition mechanism for modeling. Engineers typically perform modeling processes with specific software, and similar or identical modeling processes can be reused to save time in subsequent modeling, especially by referring to the prior modeling experience of senior experts. The invention can capture all of a practitioner's historical modeling operations (such as mouse, keyboard or touch screen operations), determine the modeling intent through analysis, interpret it into a meaningful semantic model of a series of events, and store the semantic model in a knowledge base as a basis for subsequent modeling.
FIG. 1 is an architectural diagram of a modeling knowledge acquisition mechanism in accordance with a specific embodiment of the present invention. An engineer typically performs a modeling process on the tool interface 100 of modeling software installed on a modeling device, on which the modeling knowledge acquisition apparatus 200 provided by the invention is embedded to perform knowledge acquisition while the engineer models. Specifically, the modeling knowledge acquisition apparatus 200 includes a capturing apparatus 210 and an analyzing apparatus 220. The capturing apparatus 210 captures the engineer's operations on the modeling software as a series of events and simultaneously takes screenshots, while the analyzing apparatus 220 analyzes the modeling intent and generates a semantic model based on the captured events and screenshots, and then stores the semantic model in the knowledge base 230.
In the present embodiment, the modeling apparatus is a personal computer, wherein the operating system of the personal computer is Windows.
First, step S1 is executed: the capturing apparatus 210 captures modeling information based on the modeling software on a personal computer, wherein the modeling information includes an event and a screenshot of the personal computer taken when the event occurs, and the event is a command for operating the modeling software issued to the personal computer in response to a mouse action, keyboard action or touch screen action. Specifically, the capturing apparatus 210 captures mouse actions, keyboard actions or touch screen actions based on the modeling software, and generates a first event record table based on the action, the timestamp when the action occurs, the related child panel name, the parent panel name, and the key value of the action on the modeling software. The key value comprises coordinates, input text or an input numerical value.
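As a hedged illustration (not the patent's actual implementation), the following Python sketch uses the pynput library in place of a native Windows hook to log mouse and keyboard events to a csv file; the file name is illustrative, and the child/parent panel names of Table 1, which would come from the OS window API, are omitted:

```python
# Minimal sketch of the capture step: global mouse/keyboard listeners
# standing in for a native Windows hook. Assumes the pynput package;
# "first_event_record.csv" is an illustrative file name.
import csv
import time

from pynput import keyboard, mouse

LOG_PATH = "first_event_record.csv"

def log_event(action, key_value):
    """Append one row of the first event record table."""
    with open(LOG_PATH, "a", newline="") as f:
        # time.ctime() matches Table 1's timestamp format
        csv.writer(f).writerow([time.ctime(), action, key_value])

def on_click(x, y, button, pressed):
    state = "down" if pressed else "up"
    log_event(f"Mouse {button.name} {state}", (x, y))

def on_press(key):
    log_event("Key down", str(key))

mouse.Listener(on_click=on_click).start()
keyboard.Listener(on_press=on_press).join()  # blocks; stop with Ctrl+C
```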
The capturing apparatus 210 includes a capture module 211 and a screen capture module 212. The capture module 211 is mainly used for capturing mouse actions, keyboard actions or touch screen actions.
Wherein the mouse action comprises at least one of: clicking the left mouse button; clicking the right mouse button; moving the mouse. Optionally, the above items may be permuted and combined, so that the mouse action also includes: double-clicking the left or right mouse button, holding the left or right mouse button for a period of time, clicking the left and right mouse buttons simultaneously, and clicking the right and/or left mouse button while moving for a period of time. In short, all mouse operations are covered by the protection scope of the present invention; for brevity, detailed descriptions are omitted.
Wherein the keyboard action comprises pressing any key of the keyboard.
Wherein the touch screen action comprises at least one of: clicking a screen; sliding the screen; shaking the screen, etc.
Illustratively, as shown in FIG. 2, if the engineer clicks on the position P1 on the modeling software, the knowledge acquisition device 200 captures modeling information based on the modeling software on a personal computer as shown in the following table.
TABLE 1 First event record table

Timestamp | Action | Child panel name | Parent panel name | Key value
Thu Apr 26 14:59:30 2018 | Mouse left down | Models.Frame.TestStation1 | Technomatix Observer_example1_rev3_0.spp | (1340,630)
Thu Apr 26 14:59:31 2018 | Mouse left up | Models.Frame.TestStation1 | Technomatix Observer_example1_rev3_0.spp | (1340,630)
Optionally, Table 1 holds the event records in csv format. As shown in Table 1, a timestamp indicates the execution time of the action. Specifically, "Thu Apr 26 14:59:30 2018" indicates that the action "Mouse left down", i.e., pressing the left mouse button, was made at 14:59:30 on April 26, 2018, and "Thu Apr 26 14:59:31 2018" indicates that the action "Mouse left up", i.e., releasing the left mouse button, was made at 14:59:31 the same day. This indicates that the engineer completed a left mouse click on the user interface of the modeling software. The child panel name and the parent panel name both indicate panels on the user interface of the modeling software, and the key value in this embodiment represents coordinates on the user interface, so the position of the left click is the coordinates (1340,630) on the child panel "Models.Frame.TestStation1", which belongs to the parent panel "Technomatix Observer_example1_rev3_0.spp". According to the first event record table, between 14:59:30 and 14:59:31 on April 26, 2018, the user completed a left mouse click at coordinates (1340,630) on the child panel "Models.Frame.TestStation1" of the parent panel "Technomatix Observer_example1_rev3_0.spp" on the modeling software interface. However, this information is not sufficient to recognize the object name: the Windows hook cannot obtain information meaningful enough about the modeling software, such as a text box name or a numerical box name, which is why a screenshot of the modeling software is required.
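As an illustration of how such rows can be consumed downstream, the following minimal sketch (assuming Table 1's column order and the hypothetical csv file name used above) pairs consecutive "Mouse left down"/"Mouse left up" rows at the same coordinates into a single left-click action:

```python
# Sketch: read the first event record table (csv, columns as in Table 1)
# and pair down/up rows into left-click actions.
import csv

def left_clicks(path="first_event_record.csv"):
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    for down, up in zip(rows, rows[1:]):
        if (down[1], up[1]) == ("Mouse left down", "Mouse left up") \
                and down[4] == up[4]:  # same coordinates
            yield {"time": up[0], "child_panel": down[2],
                   "parent_panel": down[3], "coords": down[4]}
```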
When the engineer operates the modeling software through a mouse, keyboard or touch screen action, the screen capture module 212 is triggered to execute the screen capture function, obtaining a screenshot of the entire user interface of the modeling software. In order to make the text or numbers on the screen larger, the edge portions are adjusted.
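A minimal sketch of this trigger, assuming Python with the Pillow library (the file-naming scheme is an assumption):

```python
# Sketch of the screen capture step: grab the whole screen when an event
# fires and save it so it can be paired with the event record row.
from PIL import ImageGrab

def capture_screen(timestamp: str) -> str:
    shot = ImageGrab.grab()  # full-screen screenshot
    path = f"shot_{timestamp.replace(':', '-')}.png"
    shot.save(path)
    return path
```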
Then, step S2 is executed: the modeling intent is analyzed based on the event and the screenshot taken when the event occurs, and a semantic model of the event is extracted and generated. Specifically, the event is mapped onto the screenshot taken when the event occurs, the operation of the modeling software is analyzed based on this mapping, and a semantic model of the event is extracted and generated. As shown in FIG. 1, the analyzing apparatus 220 includes a mapping module 221, a recognition module 222 and an analysis module 223. Step S2 includes sub-steps S21, S22, S23 and S24.
Specifically, sub-step S21 is performed first: the action in the first event record table is located in the screenshot, and the object selected by the action is determined based on object features using image recognition. The mapping module 221 can locate the action in the first screenshot based on the action in Table 1, its position of occurrence, and the first screenshot as shown in FIG. 2. The mapping module 221 is further configured to determine, based on an image recognition algorithm, the object selected by the action, which in this embodiment is the object selected by the mouse.
As shown in the table of FIG. 3, the objects of mouse actions, and their types, features and positions, are listed for the sub-panel "Models.Frame.TestStation1". The feature of a numerical box is "a rectangle with a large blank area", and its position is "on the left of the mouse cursor". The feature of a tag item is "a narrow rectangle-like shape, but without 4 edge contours", and its position is "at the mouse cursor position". The feature of a check box is "a square blank area", and its position is "on the right of the mouse cursor". The feature of a menu item is "a Y coordinate very close to the origin of the window (the upper left corner of the window), with no contour near the mouse position", and its position is "at the mouse cursor position".
Sub-step S22 is then performed: a region of interest is generated based on the object type and object position, and stored as an enlarged sub-image. The mapping module 221 can determine the object type based on the above object features and generate a region of interest (ROI) as shown in FIG. 2 based on the object type and the object position. The mapping module 221 crops the original screenshot shown in FIG. 2 into a sub-image ROI' based on the region of interest ROI. To improve the text extraction quality, the mapping module 221 enlarges the sub-image ROI'.
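A hedged sketch of this cropping and enlarging step, assuming Pillow and a numerical-box-style ROI on the left of the click point (box size and scale factor are assumptions):

```python
# Sketch of sub-step S22: derive a region of interest (ROI) from the
# object position, crop it out of the screenshot, and enlarge the
# sub-image ROI' to improve OCR quality.
from PIL import Image

def roi_subimage(shot_path: str, x: int, y: int,
                 w: int = 240, h: int = 40, scale: int = 3) -> Image.Image:
    img = Image.open(shot_path)
    # numerical box: the ROI lies on the left of the mouse cursor (FIG. 3)
    box = (max(0, x - w), max(0, y - h // 2), x, y + h // 2)
    roi = img.crop(box)
    return roi.resize((roi.width * scale, roi.height * scale), Image.LANCZOS)
```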
Then, sub-step S23 is executed: the optical character recognition function is executed to obtain the text on the sub-image, and the information in the first event record table is associated with the text of the object to generate a second event record table. The recognition module 222 is used to perform Optical Character Recognition (OCR). Once the region of interest ROI of the first screenshot has been cropped and enlarged into a sub-image, the recognition module 222 extracts the text on the sub-image, so we can get the name of the numerical box, namely the text "Processing time: Const" on the sub-image. Thus, the timestamp, action, child panel name, parent panel name and key value in the event record table of Table 1 can now be associated with the text of the specific object, as shown in the following table:
TABLE 2 Second event record table

Timestamp | Action | Child panel name | Parent panel name | Key value | Object text
Thu Apr 26 14:59:30 2018 | Mouse left down | Models.Frame.TestStation1 | Technomatix Observer_example1_rev3_0.spp | (1340,630) | Processing Time
Thu Apr 26 14:59:31 2018 | Mouse left up | Models.Frame.TestStation1 | Technomatix Observer_example1_rev3_0.spp | (1340,630) | Processing Time
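A minimal sketch of the OCR call in sub-step S23, assuming the pytesseract wrapper around the Tesseract engine (any OCR library would serve); the returned text fills the "Object text" column above:

```python
# Sketch of sub-step S23: run OCR on the enlarged sub-image to recover
# the object text, e.g. "Processing time: Const".
import pytesseract

def object_text(roi_image) -> str:
    return pytesseract.image_to_string(roi_image).strip()
```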
Finally, sub-step S24 is performed: the operation of the modeling software is analyzed according to the second event record table, and a semantic model of the event is extracted and generated. The analysis module 223 is used to analyze the engineer's modeling intent based on all of the information described above. In the present embodiment, the analysis module 223 interprets the object-related information shown in Table 3 as the meaningful process-related information shown in Table 4:
TABLE 3 Second event record table

Timestamp | Action | Child panel name | Parent panel name | Key value | Object text
Thu Apr 26 14:59:30 2018 | Mouse left down | Models.Frame.TestStation1 | Technomatix Observer_example1_rev3_0.spp | (1340,630) | Processing Time
Thu Apr 26 14:59:31 2018 | Mouse left up | Models.Frame.TestStation1 | Technomatix Observer_example1_rev3_0.spp | (1340,630) | Processing Time
Thu Apr 26 14:59:32 2018 | Key down | Models.Frame.TestStation1 | .Models.Frame | Numpad2 | Processing Time
Thu Apr 26 14:59:33 2018 | Key down | Models.Frame.TestStation1 | .Models.Frame | Numpad0 | Processing Time

TABLE 4 Third event record table

Timestamp | Position | Key value | Type | Event
Thu Apr 26 14:59:31 2018 | Technomatix Observer_example1_rev3_0.spp-[.Models.Frame.TestStation1] | (1340,630) | Mouse | Set_item[Processing time]on_sub_panel
Thu Apr 26 14:59:33 2018 | Technomatix Observer_example1_rev3_0.spp-[.Models.Frame.TestStation1] | 20 | Keyboard | Keyboard input[20]for[Processing time]on_sub_panel
Specifically, as shown in Table 3, between 14:59:30 and 14:59:31 on April 26, 2018, the user completed a left mouse click at coordinates (1340,630) on the child panel "Models.Frame.TestStation1" of the parent panel "Technomatix Observer_example1_rev3_0.spp" on the modeling software interface. Then, between 14:59:32 and 14:59:33, the user completed two keyboard input actions on the child panel "Models.Frame.TestStation1" of the parent panel ".Models.Frame", pressing the numeric key 2 (Numpad2) and the numeric key 0 (Numpad0) in sequence. Therefore, as shown in the third event record table of Table 4, the analysis module 223 analyzes the engineer's modeling intent: the user's modeling intents on the modeling software are the events "Set_item[Processing time]on_sub_panel" and "Keyboard input[20]for[Processing time]on_sub_panel". As shown in FIG. 2, the event "Set_item[Processing time]on_sub_panel" indicates that the modeling intent is a command, issued to the modeling device in response to the above mouse action, to operate the modeling software to set "Processing time" (execution time), and the event "Keyboard input[20]for[Processing time]on_sub_panel" indicates that the modeling intent is a command, issued to the modeling device in response to the above keyboard action, to operate the modeling software to input the execution time "20". It should be noted that in FIG. 2 the modeling software automatically displays the "20" input by the user as "20:00".
Then, step S3 is executed to store the semantic model of the event in the knowledge base 230. In particular, the semantic model of the event includes the relationships between events and their data items in a graph. Illustratively, the semantic model of the events stored in the knowledge base 230 is shown in FIG. 4. Event 1 "Set_item[Processing time]on_sub_panel" is of the mouse action type; it has the timestamp "Thu Apr 26 14:59:31 2018", the position "Technomatix Observer_example1_rev3_0.spp-[.Models.Frame.TestStation1]", and the key value coordinates "(1340,630)". Event 2 "Keyboard input[20]for[Processing time]on_sub_panel" is of the keyboard action type; it has the timestamp "Thu Apr 26 14:59:33 2018", the position "Technomatix Observer_example1_rev3_0.spp-[.Models.Frame.TestStation1]", and the key value of the keyboard input "20". The relationship of event 1 and event 2 is: event 2 has the previous event "event 1", and event 1 has the next event "event 2".
Preferably, the semantic models of the events in the knowledge base are stored as an event stream based on the sequence of the events; when a user performs one of these events while modeling again, the knowledge base recommends the next event of the event stream to which the event belongs, or an event executed at the same time, and automatically matches it according to the user's selection. Semantic models of multiple events in the knowledge base are correlated into event streams. Therefore, when any user executes the modeling software again on the modeling device and performs event 1 as shown in FIG. 4, the knowledge base 230 recognizes event 1 and thus recommends event 2, which is correlated with event 1, and the user can choose whether to continue with event 2. Once the knowledge base identifies a different event stream, it is updated and stored.
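A hedged sketch of how such an event stream could be represented (all class and field names are assumptions; the stored data mirrors FIG. 4):

```python
# Sketch: events as linked nodes with previous/next relations, plus a
# lookup that recommends the next event of a recognized event stream.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    name: str          # e.g. "Set_item[Processing time]on_sub_panel"
    etype: str         # "mouse" or "keyboard"
    timestamp: str
    position: str      # panel path
    key_value: object  # coordinates or input value
    prev: Optional["Event"] = None
    next: Optional["Event"] = None

def link(a: Event, b: Event) -> None:
    a.next, b.prev = b, a

def recommend_next(current: Event) -> Optional[Event]:
    return current.next  # suggest the following event of the stream

e1 = Event("Set_item[Processing time]on_sub_panel", "mouse",
           "Thu Apr 26 14:59:31 2018",
           "Technomatix Observer_example1_rev3_0.spp-[.Models.Frame.TestStation1]",
           (1340, 630))
e2 = Event("Keyboard input[20]for[Processing time]on_sub_panel", "keyboard",
           "Thu Apr 26 14:59:33 2018",
           "Technomatix Observer_example1_rev3_0.spp-[.Models.Frame.TestStation1]",
           "20")
link(e1, e2)
assert recommend_next(e1) is e2
```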
FIGS. 5a-5c are screenshots of a modeling knowledge acquisition mechanism according to another embodiment of the invention. FIG. 5a shows a modeling interface of the modeling software, on which the user places four elements as shown in FIG. 5b: a first element Source, a second element SingleProc, a third element ParallelProc and a fourth element Drain. As shown in FIG. 5c, a mouse click opens the second element SingleProc to set a parameter for this element, setting its Processing time to 2:00.
First, step S1 is executed: the capturing apparatus 210 captures modeling information based on the modeling software on a personal computer, wherein the modeling information includes an event and a screenshot of the personal computer taken when the event occurs, and the event is a command for operating the modeling software issued to the personal computer in response to a mouse action, keyboard action or touch screen action.
The knowledge acquisition apparatus 200 thus captures modeling information based on the modeling software on a personal computer as shown in the following table. Because "Mouse left down" already indicates a single click of the left mouse button, the record table omits the "Mouse left up" events.
TABLE 5 First event record table

Timestamp | Action | Child panel name | Parent panel name | Key value
5/22/2018 10:53:19 | Mouse left down | .Models.Frame | example.spp | (534,219)
5/22/2018 10:53:20 | Mouse left down | .Models.Frame | example.spp | (515,551)
5/22/2018 10:53:21 | Mouse left down | .Models.Frame | example.spp | (621,211)
5/22/2018 10:53:23 | Mouse left down | .Models.Frame | example.spp | (587,546)
5/22/2018 10:53:24 | Mouse left down | .Models.Frame | example.spp | (662,218)
5/22/2018 10:53:25 | Mouse left down | .Models.Frame | example.spp | (673,552)
5/22/2018 10:53:26 | Mouse left down | .Models.Frame | example.spp | (586,218)
5/22/2018 10:53:28 | Mouse left down | .Models.Frame | example.spp | (757,550)
5/22/2018 10:53:30 | Mouse left down | .Models.Frame | example.spp | (590,550)
5/22/2018 10:53:34 | Mouse left down | .Models.Frame.SingleProc | .Models.Frame | (21,1187)
5/22/2018 10:53:37 | Key down | .Models.Frame.SingleProc | .Models.Frame | Numpad2
As shown in Table 5, the user first completed a mouse click at timestamp "5/22/2018 10:53:19" (i.e., 10:53:19 am on May 22, 2018) at a position where the parent panel is "example.spp" and the child panel is ".Models.Frame"; the coordinates of this mouse click are (534,219). Then, at timestamp "5/22/2018 10:53:20", another mouse click was completed at a position with the same parent and child panels; the coordinates of this click are (515,551). The user's operation on the modeling software corresponding to these clicks is to click the icon of the first element Source and drag it to the position shown in FIG. 5b.
Similarly, the user completed a mouse click at timestamp "5/22/2018 10:53:21" at coordinates (621,211), followed by another mouse click at timestamp "5/22/2018 10:53:23" at coordinates (587,546), both on the child panel ".Models.Frame" of the parent panel "example.spp". The corresponding operation on the modeling software is to click the icon of the second element SingleProc and drag it to the position shown in FIG. 5b.
Similarly, the user completed a mouse click at timestamp "5/22/2018 10:53:24" at coordinates (662,218), followed by another mouse click at timestamp "5/22/2018 10:53:25" at coordinates (673,552), both on the child panel ".Models.Frame" of the parent panel "example.spp". The corresponding operation is to click the icon of the third element ParallelProc and drag it to the position shown in FIG. 5b.
Similarly, the user completed a mouse click at timestamp "5/22/2018 10:53:26" at coordinates (586,218), followed by another mouse click at timestamp "5/22/2018 10:53:28" at coordinates (757,550), both on the child panel ".Models.Frame" of the parent panel "example.spp". The corresponding operation is to click the icon of the fourth element Drain and drag it to the position shown in FIG. 5b.
Next, the user completed a mouse click at timestamp "5/22/2018 10:53:30" at coordinates (590,550) on the child panel ".Models.Frame" of the parent panel "example.spp". This mouse click opens the second element SingleProc, for which the user wants to set parameters.
Finally, the user completed a mouse click at timestamp "5/22/2018 10:53:34" at coordinates (21,1187) on the child panel ".Models.Frame.SingleProc" of the parent panel ".Models.Frame", and then made a keypad entry at timestamp "5/22/2018 10:53:37" on the same panels, pressing the numeric key 2 (Numpad2). These two operations set the parameter Processing time of the second element SingleProc to 2, as shown in FIG. 5c, where the keypad entry 2 is recognized as "2:00".
Then, step S2 is executed: the modeling intent is analyzed based on the event and the screenshot taken when the event occurs, and a semantic model of the event is extracted and generated. Specifically, the event is mapped onto the screenshot taken when the event occurs, the operation of the modeling software is analyzed based on this mapping, and a semantic model of the event is extracted and generated.
Specifically, sub-step S21 is performed first: the action in the first event record table is located in the screenshot, and the object selected by the action is determined based on object features using image recognition.
Sub-step S22 is then performed: a region of interest is generated based on the object type and object position, and stored as an enlarged sub-image.
Sub-step S23 is then performed: the optical character recognition function is executed to obtain the text on the sub-image, and the information in the first event record table is associated with the text of the object to generate a second event record table. The recognition module 222 performs the Optical Character Recognition (OCR) function. Thus, the timestamp, action, child panel name, parent panel name and key value in the event record table of Table 5 can now be associated with the text of the specific object, as shown in the following table:
TABLE 6 Second event record table

Timestamp | Action | Child panel name | Parent panel name | Key value | Object
5/22/2018 10:53:19 | Mouse left down | .Models.Frame | example.spp | (534,219) | Source
5/22/2018 10:53:20 | Mouse left down | .Models.Frame | example.spp | (515,551) | Blank Page
5/22/2018 10:53:21 | Mouse left down | .Models.Frame | example.spp | (621,211) | SingleProc
5/22/2018 10:53:23 | Mouse left down | .Models.Frame | example.spp | (587,546) | Blank Page
5/22/2018 10:53:24 | Mouse left down | .Models.Frame | example.spp | (662,218) | ParallelProc
5/22/2018 10:53:25 | Mouse left down | .Models.Frame | example.spp | (673,552) | Blank Page
5/22/2018 10:53:26 | Mouse left down | .Models.Frame | example.spp | (586,218) | Drain
5/22/2018 10:53:28 | Mouse left down | .Models.Frame | example.spp | (757,550) | Blank Page
5/22/2018 10:53:30 | Mouse left down | .Models.Frame | example.spp | (590,550) | SingleProc
5/22/2018 10:53:34 | Mouse left down | .Models.Frame.SingleProc | .Models.Frame | (21,1187) | Processing time
5/22/2018 10:53:37 | Key down | .Models.Frame.SingleProc | .Models.Frame | Numpad2 | Processing time
In this step, it is recognized that the object of the user's first mouse operation is the first element Source icon, which is dragged to the blank modeling interface (Blank Page). In the same way, the object of the second mouse operation is the second element SingleProc icon, dragged to the blank modeling interface; the object of the third mouse operation is the third element ParallelProc icon, dragged to the blank modeling interface; and the object of the fourth mouse operation is the fourth element Drain icon, dragged to the blank modeling interface. Next, this step identifies that the user's next operation object is the second element SingleProc on the modeling software interface, and that the object whose parameter is set is "Processing time".
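A hedged sketch of this interpretation rule, assuming rows shaped like Table 6 (dictionaries with illustrative key names): an element-icon click followed by a Blank Page click is folded into one Create_object event, as in Table 7 below:

```python
# Sketch of sub-step S24 for this embodiment: fold an "icon click" row
# followed by a "Blank Page" click row into one semantic Create_object
# event. Row/field names mirror Table 6 and are illustrative.
def fold_create_events(rows):
    events = []
    for pick, drop in zip(rows, rows[1:]):
        if drop["object"] == "Blank Page" and pick["object"] != "Blank Page":
            events.append({
                "timestamp": drop["timestamp"],
                "position": f'{drop["parent_panel"]}-[{drop["child_panel"]}]',
                "key_value": drop["key_value"],
                "type": "mouse",
                "event": f'Create_object[{pick["object"]}]',
            })
    return events
```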
Finally, sub-step S24 is performed: the operation of the modeling software is analyzed according to the second event record table, and a semantic model of the event is extracted and generated. The analysis module 223 analyzes the engineer's modeling intent based on all of the information described above. In the present embodiment, the analysis module 223 interprets the object-related information shown in Table 6 as the meaningful process-related information shown in Table 7:
TABLE 7 Third event record table

Timestamp | Position | Key value | Type | Event
5/22/2018 10:53:20 | example.spp-[.Model.Frame] | (515,551) | Mouse | Create_object[Source]
5/22/2018 10:53:23 | example.spp-[.Model.Frame] | (587,546) | Mouse | Create_object[SingleProc]
5/22/2018 10:53:25 | example.spp-[.Model.Frame] | (673,552) | Mouse | Create_object[ParallelProc]
5/22/2018 10:53:28 | example.spp-[.Model.Frame] | (757,550) | Mouse | Create_object[Drain]
5/22/2018 10:53:34 | example.spp-[.Model.Frame.SingleProc] | (21,1187) | Mouse | Set_item[Processing Time]on_sub_panel
5/22/2018 10:53:37 | example.spp-[.Model.Frame.SingleProc] | 2 | Keyboard | keyboard input[2]for[Processing Time]on_sub_panel
Specifically, as shown in Table 7, at 10:53:20 am on May 22, 2018, the user completed a left mouse click at coordinates (515,551) on the child panel ".Model.Frame" of the parent panel "example.spp" on the modeling software interface. The event "Create_object[Source]" indicates that the modeling intent is a command, issued to the modeling device in response to this mouse action, to operate the modeling software to create the first element Source on the modeling interface. At 10:53:23, the user completed a left mouse click at coordinates (587,546) on the same panels; the event "Create_object[SingleProc]" indicates the intent to create the second element SingleProc on the modeling interface. At 10:53:25, the user completed a left mouse click at coordinates (673,552); the event "Create_object[ParallelProc]" indicates the intent to create the third element ParallelProc. At 10:53:28, the user completed a left mouse click at coordinates (757,550); the event "Create_object[Drain]" indicates the intent to create the fourth element Drain on the modeling interface.
At 10:53:34 am on May 22, 2018, the user completed a left mouse click at coordinates (21,1187) on the child panel ".Model.Frame.SingleProc" of the parent panel "example.spp" on the modeling software interface. The event "Set_item[Processing Time]on_sub_panel" indicates that the modeling intent is a command, issued to the modeling device in response to this mouse action, to operate the modeling software to set "Processing Time" (execution time). At 10:53:37, the user completed a keyboard action on the same panels. The event "keyboard input[2]for[Processing Time]on_sub_panel" indicates that the modeling intent is a command, issued to the modeling device in response to this keyboard action, to operate the modeling software to input the execution time "2". It should be noted that in FIG. 5c the modeling software automatically displays the "2" input by the user as "2:00".
Then, step S3 is executed to store the semantic model of the event in the knowledge base 230. In particular, the semantic model of the event includes the relationships between events and their data items in a graph. Illustratively, in this embodiment, the semantic model of the events stored in the knowledge base 230 is shown in FIG. 6, where event 3, event 4, event 5, event 6, event 7 and event 8 form an event stream according to their execution sequence. Event 3 "Create_object[Source]" is of the mouse action type, with timestamp "5/22/2018 10:53:20", position "example.spp-[.Model.Frame]" and key value coordinates "(515,551)". Event 4 "Create_object[SingleProc]" is of the mouse action type, with timestamp "5/22/2018 10:53:23", position "example.spp-[.Model.Frame]" and key value coordinates "(587,546)". Event 5 "Create_object[ParallelProc]" is of the mouse action type, with timestamp "5/22/2018 10:53:25", position "example.spp-[.Model.Frame]" and key value coordinates "(673,552)". Event 6 "Create_object[Drain]" is of the mouse action type, with timestamp "5/22/2018 10:53:28", position "example.spp-[.Model.Frame]" and key value coordinates "(757,550)". Event 7 "Set_item[Processing Time]on_sub_panel" is of the mouse action type, with timestamp "5/22/2018 10:53:34", position "example.spp-[.Model.Frame.SingleProc]" and key value coordinates "(21,1187)". Event 8 "keyboard input[2]for[Processing Time]on_sub_panel" is of the keyboard action type, with timestamp "5/22/2018 10:53:37", position "example.spp-[.Model.Frame.SingleProc]" and the key value of the keypad input "2".
The relationship between event 3 and event 4 is: event 4 has the previous event "event 3", and event 3 has the next event "event 4". The relationship between event 4 and event 5 is: event 5 has the previous event "event 4", and event 4 has the next event "event 5". The relationship between event 5 and event 6 is: event 6 has the previous event "event 5", and event 5 has the next event "event 6". The relationship between event 6 and event 7 is: event 7 has the previous event "event 6", and event 6 has the next event "event 7". The relationship between event 7 and event 8 is: event 8 has the previous event "event 7", and event 7 has the next event "event 8".
The second aspect of the present invention provides a modeling knowledge acquisition apparatus, including: the device comprises a capturing device, a judging device and a control device, wherein the capturing device captures modeling information based on modeling software on a modeling device, the modeling information comprises an event and a screen shot when the event occurs, and the event is a command for operating the modeling software sent by the modeling device in response to mouse actions, keyboard actions or touch screen actions; and the analysis device analyzes the modeling intention based on the event and the screen capture when the event occurs, extracts and generates a semantic model of the event, and stores the semantic model of the event in a knowledge base.
Further, the mouse actions include at least one of:
-clicking the left mouse button;
-clicking the right mouse button;
-mouse movement.
Further, the keyboard action includes pressing any key of the keyboard.
Further, the touch screen action includes at least one of:
-clicking on the screen;
-sliding the screen;
-shaking the screen.
Further, the step S1 further includes: capturing mouse action, keyboard action or touch screen action based on modeling software, and generating a first event record table based on the action and a timestamp of the action when the action occurs, the name of a related child panel, the name of a mother panel and a key value of the action on the modeling software.
Further, the key value comprises a coordinate, an input text and an input numerical value.
Further, the step S2 includes positioning the motion in the first event record table in the screen capture, and determining the object selected by the motion based on the object characteristics according to image recognition; generating a region of interest based on said object type and object location and storing said region of interest as a magnified sub-image; executing an optical character recognition function to obtain characters on the sub-image, and associating information in the schedule with the characters of the object and generating a second event record table; and analyzing the operation of the modeling software according to the second time record table, and extracting and generating a semantic model of the event.
Further, the second event record table includes: object, object type, object characteristics, object location.
Further, when the object in the second event record table is a numerical box, the object is characterized by "rectangle with a large blank area", and the position thereof is "left of mouse cursor"; when the object in the second event record table is a tag box, the object is characterized by a narrow rectangular-like shape without 4 edge outlines, and the position of the object is the position of a mouse cursor; when the object in the second event record table is a check box, the object is characterized as a square blank area and is positioned on the right side of a mouse cursor; when the object in the second event record table is a menu item, the object is characterized in that "the position on the Y coordinate is very close to the origin of the window (the upper left corner of the window), there is no outline near the mouse position, and the position is" at the mouse cursor position ".
Further, semantic models of a plurality of events in the knowledge base are stored as event streams based on the sequence of the events, the knowledge base recommends the next event of the event stream to which the event belongs or the event executed at the same time when the user performs the event operation when modeling again, and the next event or the event executed at the same time is automatically matched according to the selection of the user.
The third aspect of the present invention provides a modeling knowledge acquisition system, including: a processor; and a memory coupled with the processor, the memory having instructions stored therein that, when executed by the processor, cause the electronic device to perform acts comprising: s1, capturing modeling information based on modeling software on a modeling device, wherein the modeling information comprises an event and a screen shot when the event occurs, and the event is a command for operating the modeling software sent to the modeling device by responding to mouse actions, keyboard actions or touch screen actions; s2, extracting and generating a semantic model of the event based on the event and the screen capture analysis modeling intention when the event occurs; s3, storing the semantic model of the event in a knowledge base.
Further, the mouse actions include at least one of:
-clicking the left mouse button;
-clicking the right mouse button;
-mouse movement.
Further, the keyboard action includes pressing any key of the keyboard.
Further, the touch screen action includes at least one of:
-clicking on the screen;
-sliding the screen;
-shaking the screen.
Further, the act S1 further includes: capturing a mouse action, a keyboard action or a touch screen action based on the modeling software, and generating a first event record table from the action together with its timestamp, the name of the related child panel, the name of the parent panel, and the key value of the action in the modeling software.
Further, the key value comprises coordinates, input text and input numerical values.
Further, the act S2 includes: locating the action recorded in the first event record table within the screen shot, and determining the object selected by the action from its object characteristics by means of image recognition; generating a region of interest based on the object type and object location, and storing the region of interest as a magnified sub-image; executing an optical character recognition function to obtain the characters on the sub-image, associating the information in the first event record table with the characters of the object, and generating a second event record table; and analyzing the operation of the modeling software according to the second event record table, and extracting and generating a semantic model of the event.
Further, the second event record table includes: object, object type, object characteristics, object location.
Further, when the object in the second event record table is a numerical box, the object is characterized as "a rectangle with a large blank area" and is positioned "to the left of the mouse cursor"; when the object is a tag box, it is characterized as "a narrow rectangle-like shape without four edge outlines" and is positioned "at the mouse cursor"; when the object is a check box, it is characterized as "a square blank area" and is positioned "to the right of the mouse cursor"; when the object is a menu item, it is characterized in that "the position on the Y coordinate is very close to the origin of the window (the upper left corner of the window) and there is no outline near the mouse position", and it is positioned "at the mouse cursor".
Further, the semantic models of a plurality of events in the knowledge base are stored as an event stream based on the sequence of those events. When the user performs one of the event operations while modeling again, the knowledge base recommends the next event in the event stream to which that event belongs, or an event to be executed at the same time, and automatically matches the recommended event according to the user's selection.
A fourth aspect of the invention provides a computer program product, tangibly stored on a computer-readable medium and comprising computer-executable instructions that, when executed, cause at least one processor to perform the method of the first aspect of the invention.
A fifth aspect of the invention provides a computer-readable medium having stored thereon computer-executable instructions that, when executed, cause at least one processor to perform the method of the first aspect of the invention.
The modeling knowledge acquisition mechanism provided by the invention can reproduce a modeling process based on the events or data and the corresponding screen shots, and provides a good resource for knowledge base extraction. The invention can be used with various kinds of modeling software: it does not depend on any specific modeling software, does not interfere with the functions of any specific modeling software, and can be executed independently. It is applicable not merely to a single modeling software tool but to multiple software tools, and it remains applicable when a user needs to switch between several modeling software tools (for example, in multi-software co-simulation). For modeling software tools that provide an Application Programming Interface (API), the invention can still perform the modeling knowledge acquisition function without requiring additional software development effort. The modeling knowledge acquisition mechanism runs in the background and does not disturb the designers' modeling work or their way of working.
In addition, the invention is suitable not only for Windows systems but also for macOS or Linux systems and the like. The invention builds on OS (Operating System) functions, such as the Windows hook or the screen capture function. It can also be used with many other kinds of software without changing their original functions or UI, as long as the software does not block the Windows hook or the screen capture function. The invention is independent of the UI style and functions of the modeling software itself, so the user does not need to develop or revise the invention's functionality for different modeling software. Windows, macOS and Linux systems all support hook mechanisms, screen capture functions and optical character recognition (OCR) functions, so the modeling knowledge acquisition function provided by the invention can be developed for different OS platforms.
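For example, OS-level full-screen capture is available to a background process on all three platforms; the sketch below uses the third-party Python package mss purely as an illustration, since the invention does not prescribe any particular capture library:

    # Minimal sketch: take a screen shot at the moment an event occurs,
    # independent of whichever modeling software is in the foreground.
    import time
    import mss  # cross-platform (Windows, macOS, Linux) screen capture

    def capture_screen(output_dir="."):
        with mss.mss() as sct:
            path = "{}/event_{}.png".format(output_dir, int(time.time() * 1000))
            sct.shot(mon=1, output=path)  # mon=1: the primary monitor
            return path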
In summary, the modeling knowledge acquisition mechanism provided by the invention has a wide range of application, can meet the reuse requirements of modeling knowledge acquisition, and facilitates the functional extension of modeling software.
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims. Furthermore, any reference signs in the claims shall not be construed as limiting the claim concerned; the word "comprising" does not exclude the presence of other devices or steps than those listed in a claim or the specification; the terms "first," "second," and the like are used merely to denote names, and do not denote any particular order.

Claims (23)

  1. A modeling knowledge acquisition method, comprising the following steps:
    S1, capturing modeling information based on modeling software on a modeling device, wherein the modeling information comprises an event and a screen shot taken when the event occurs, and the event is a command for operating the modeling software that is sent to the modeling device in response to a mouse action, a keyboard action or a touch screen action;
    S2, analyzing the modeling intention based on the event and the screen shot taken when the event occurs, and extracting and generating a semantic model of the event;
    S3, storing the semantic model of the event in a knowledge base.
  2. The modeling knowledge acquisition method according to claim 1, wherein the mouse action includes at least one of:
    -clicking the left mouse button;
    -clicking the right mouse button;
    -mouse movement.
  3. The modeling knowledge acquisition method according to claim 1, wherein the keyboard action includes pressing any key of the keyboard.
  4. The modeling knowledge acquisition method of claim 1, wherein the touch screen action comprises at least one of:
    -clicking on the screen;
    -sliding the screen;
    -shaking the screen.
  5. The modeling knowledge acquisition method according to claim 1, wherein said step S1 further includes:
    capturing a mouse action, a keyboard action or a touch screen action based on the modeling software, and generating a first event record table from the action together with its timestamp, the name of the related child panel, the name of the parent panel, and the key value of the action in the modeling software.
  6. The modeling knowledge acquisition method according to claim 5, wherein the key values include coordinates, input text, and input numerical values.
  7. The modeling knowledge acquisition method according to claim 5, wherein said step S2 further includes:
    locating the action recorded in the first event record table within the screen shot, and determining the object selected by the action from its object characteristics by means of image recognition;
    generating a region of interest based on said object type and object location, and storing said region of interest as a magnified sub-image;
    executing an optical character recognition function to obtain the characters on the sub-image, associating the information in the first event record table with the characters of the object, and generating a second event record table;
    and analyzing the operation of the modeling software according to the second event record table, and extracting and generating a semantic model of the event.
  8. The modeling knowledge acquisition method according to claim 7, wherein the second event record table includes: object, object type, object characteristics, object location.
  9. The modeling knowledge acquisition method according to claim 7, wherein, when the object in the second event record table is a numerical box, the object is characterized as "a rectangle with a large blank area" and is positioned "to the left of the mouse cursor"; when the object is a tag box, it is characterized as "a narrow rectangle-like shape without four edge outlines" and is positioned "at the mouse cursor"; when the object is a check box, it is characterized as "a square blank area" and is positioned "to the right of the mouse cursor"; when the object is a menu item, it is characterized in that "the position on the Y coordinate is very close to the origin of the window (the upper left corner of the window) and there is no outline near the mouse position", and it is positioned "at the mouse cursor".
  10. The modeling knowledge acquisition method according to claim 7, wherein semantic models of a plurality of events in the knowledge base are stored as an event stream based on the sequence of those events; when the user performs one of the event operations while modeling again, the knowledge base recommends the next event in the event stream to which that event belongs, or an event to be executed at the same time, and automatically matches the recommended event according to the user's selection.
  11. A modeling knowledge acquisition device, comprising:
    a capturing device, which captures modeling information based on modeling software on a modeling device, wherein the modeling information comprises an event and a screen shot taken when the event occurs, and the event is a command for operating the modeling software sent to the modeling device in response to a mouse action, a keyboard action or a touch screen action;
    and an analysis device, which analyzes the modeling intention based on the event and the screen shot taken when the event occurs, extracts and generates a semantic model of the event, and stores the semantic model of the event in a knowledge base.
  12. A modeling knowledge acquisition system, comprising:
    a processor; and
    a memory coupled with the processor, the memory having instructions stored therein that, when executed by the processor, cause the system to perform acts comprising:
    S1, capturing modeling information based on modeling software on a modeling device, wherein the modeling information comprises an event and a screen shot taken when the event occurs, and the event is a command for operating the modeling software that is sent to the modeling device in response to a mouse action, a keyboard action or a touch screen action;
    S2, analyzing the modeling intention based on the event and the screen shot taken when the event occurs, and extracting and generating a semantic model of the event;
    S3, storing the semantic model of the event in a knowledge base.
  13. The modeling knowledge acquisition system of claim 12 wherein the mouse actions include at least one of:
    -clicking the left mouse button;
    -clicking the right mouse button;
    -mouse movement.
  14. The modeling knowledge acquisition system of claim 12 wherein the keyboard action comprises pressing any key of the keyboard.
  15. The modeling knowledge acquisition system of claim 12, wherein the touch screen action comprises at least one of:
    -clicking on the screen;
    -sliding the screen;
    -shaking the screen.
  16. The modeling knowledge acquisition system of claim 12, wherein the act S1 further comprises:
    capturing a mouse action, a keyboard action or a touch screen action based on the modeling software, and generating a first event record table from the action together with its timestamp, the name of the related child panel, the name of the parent panel, and the key value of the action in the modeling software.
  17. The modeling knowledge acquisition system of claim 16 wherein the key values include coordinates, input text, input numerical values.
  18. The modeling knowledge acquisition system of claim 16, wherein the act S2 further comprises:
    locating the action recorded in the first event record table within the screen shot, and determining the object selected by the action from its object characteristics by means of image recognition;
    generating a region of interest based on said object type and object location, and storing said region of interest as a magnified sub-image;
    executing an optical character recognition function to obtain the characters on the sub-image, associating the information in the first event record table with the characters of the object, and generating a second event record table;
    and analyzing the operation of the modeling software according to the second event record table, and extracting and generating a semantic model of the event.
  19. The modeling knowledge acquisition system of claim 18 wherein the second event record table comprises: object, object type, object characteristics, object location.
  20. The modeling knowledge acquisition system according to claim 18, wherein, when the object in the second event record table is a numerical box, the object is characterized as "a rectangle with a large blank area" and is positioned "to the left of the mouse cursor"; when the object is a tag box, it is characterized as "a narrow rectangle-like shape without four edge outlines" and is positioned "at the mouse cursor"; when the object is a check box, it is characterized as "a square blank area" and is positioned "to the right of the mouse cursor"; when the object is a menu item, it is characterized in that "the position on the Y coordinate is very close to the origin of the window (the upper left corner of the window) and there is no outline near the mouse position", and it is positioned "at the mouse cursor".
  21. The modeling knowledge acquisition system according to claim 18, wherein semantic models of a plurality of events in the knowledge base are stored as an event stream based on a sequence of the plurality of events, and when a user performs an event operation again, the knowledge base recommends a next event of the event stream to which the event belongs or an event to be performed simultaneously, and automatically matches the next event or the event to be performed simultaneously in response to a selection of the user.
  22. A computer program product tangibly stored on a computer-readable medium and comprising computer-executable instructions that, when executed, cause at least one processor to perform the method of any one of claims 1 to 10.
  23. A computer-readable medium having stored thereon computer-executable instructions that, when executed, cause at least one processor to perform the method of any one of claims 1 to 10.
CN201880099861.6A 2018-12-18 2018-12-18 Knowledge acquisition method, device and system for modeling Pending CN113168283A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/121814 WO2020124377A1 (en) 2018-12-18 2018-12-18 Modeling knowledge acquisition method and apparatus, and system

Publications (1)

Publication Number Publication Date
CN113168283A (en) 2021-07-23

Family

ID=71102421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880099861.6A Pending CN113168283A (en) 2018-12-18 2018-12-18 Knowledge acquisition method, device and system for modeling

Country Status (2)

Country Link
CN (1) CN113168283A (en)
WO (1) WO2020124377A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001082124A1 (en) * 2000-04-25 2001-11-01 Invention Machine Corporation, Inc. Modeling of graphic images from text
CN108139849A (en) * 2015-10-01 2018-06-08 谷歌有限责任公司 For the action suggestion of user in selecting content
CN108268505A (en) * 2016-12-30 2018-07-10 西门子公司 Modeling method and device based on semantic knowledge
CN108345622A (en) * 2017-01-25 2018-07-31 西门子公司 Model retrieval method based on semantic model frame and device
CN108509519A (en) * 2018-03-09 2018-09-07 北京邮电大学 World knowledge collection of illustrative plates enhancing question and answer interactive system based on deep learning and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120023054A1 (en) * 2009-03-30 2012-01-26 Siemens Aktiengesellschaft Device and Method for Creating a Process Model
US9870623B2 (en) * 2016-05-14 2018-01-16 Google Llc Segmenting content displayed on a computing device into regions based on pixels of a screenshot image that captures the content


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116400907A (en) * 2023-06-08 2023-07-07 四川云申至诚科技有限公司 Knowledge learning-based automatic programming method, storage medium and apparatus
CN116400907B (en) * 2023-06-08 2024-02-02 四川云申至诚科技有限公司 Knowledge learning-based automatic programming method, storage medium and apparatus

Also Published As

Publication number Publication date
WO2020124377A1 (en) 2020-06-25

Similar Documents

Publication Publication Date Title
US10354225B2 (en) Method and system for process automation in computing
JP2018535459A (en) Robotic process automation
US20060271322A1 (en) Systems and Methods Providing A Normalized Graphical User Interface For Testing Disparate Devices
CN109189519B (en) Universal user desktop behavior simulation system and method
Linn et al. Desktop activity mining-a new level of detail in mining business processes
CN101930399A (en) Method for recording software test
CN111857470B (en) Unattended control method and device for production equipment and controller
CN110941467A (en) Data processing method, device and system
CN113168283A (en) Knowledge acquisition method, device and system for modeling
CN114430823A (en) Software knowledge capturing method, device and system
KR102373942B1 (en) Text detection, caret tracking, and active element detection
WO2021176523A1 (en) Screen recognition device, screen recognition method, and program
KR102176458B1 (en) Method and apparatus for Performing Box Drawing for Data Labeling
JP7380714B2 (en) Operation log acquisition device and operation log acquisition method
CN116126697A (en) Test case generation method, device, equipment and computer readable storage medium
Bashatah et al. Method for Formal Analysis of the Type and Content of Airline Standard Operating Procedures
CN115917446A (en) System and method for robotic process automation
CN101183400A (en) Debugging and checking method and system in graph hardware design
CN116700583A (en) Implementation method and device for process automation and storage medium
CN113807698A (en) Work order generation method and device, electronic equipment and readable storage medium
Wu et al. A model based testing approach for mobile device
CN107679264B (en) Method for assisting in checking dislocation of bit numbers in PCB design
JP2000163602A (en) Input history storage device
WO2023195139A1 (en) Display data creation device, operation system, display data creation method, and display data creation program
RU2786951C1 (en) Detection of repeated patterns of actions in user interface

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination