CN117539367A - Image recognition tracking method based on interactive intelligent experiment teaching system - Google Patents

Image recognition tracking method based on interactive intelligent experiment teaching system

Info

Publication number
CN117539367A
CN117539367A (application CN202311546162.0A)
Authority
CN
China
Prior art keywords
experiment
feature
item
operation behavior
gui
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311546162.0A
Other languages
Chinese (zh)
Other versions
CN117539367B (en)
Inventor
陈春亮
黎海情
冯卓文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Ocean University
Original Assignee
Guangdong Ocean University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Ocean University filed Critical Guangdong Ocean University
Priority to CN202311546162.0A
Publication of CN117539367A
Application granted
Publication of CN117539367B
Active legal status: Current
Anticipated expiration legal status

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10: Complex mathematical operations
    • G06F 17/15: Correlation function computation including computation of convolution operations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10: Services
    • G06Q 50/20: Education
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Educational Administration (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Algebra (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Educational Technology (AREA)
  • Computing Systems (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to the technical field of artificial intelligence, and in particular to an image recognition tracking method based on an interactive intelligent experiment teaching system. Feature induction is performed on the graphic operation behavior description features of GUI experiment teaching operation behaviors, which improves the feature induction precision at the graphic layer and the feature expression quality of the dynamic simulation experiment item convolution features. Experimental effect conclusion discrimination processing is then performed on the dynamic simulation experiment item convolution features to obtain a conclusion viewpoint hit score matching the target dynamic simulation experiment item against each initial experimental effect conclusion viewpoint. When the hit score for any initial experimental effect conclusion viewpoint is greater than the hit score threshold, the target dynamic simulation experiment item is determined to belong to that conclusion viewpoint. The experimental effect conclusion viewpoint corresponding to the target dynamic simulation experiment item can thus be determined accurately and efficiently, improving the degree of intelligence of the interactive intelligent experiment teaching system and the interpretability of the dynamic simulation experiment item.

Description

Image recognition tracking method based on interactive intelligent experiment teaching system
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an image recognition tracking method based on an interactive intelligent experiment teaching system.
Background
In an interactive intelligent experiment teaching system (Interactive Intelligent Experimental Teaching System), image recognition and tracking techniques can provide a more vivid and realistic simulated experiment environment, for example by creating virtual objects, simulating dynamic processes, supporting user interactions, and providing real-time feedback. Through these means, image recognition and tracking technology enables the system to simulate the physical environment of the real world while also supporting the user in performing various experiment operations in the virtual environment, providing a deeper and more intuitive learning experience. In practical applications, how to further improve the degree of intelligence of the interactive intelligent experiment teaching system and the interpretability of the related simulation experiments is a technical problem that currently needs to be addressed.
Disclosure of Invention
In order to address the technical problems in the related art, the invention provides an image recognition tracking method based on an interactive intelligent experiment teaching system.
In a first aspect, an embodiment of the present invention provides an image recognition tracking method based on an interactive intelligent experiment teaching system, which is applied to an image recognition tracking system, and the method includes:
Acquiring graphic operation behavior description characteristics of a plurality of GUI experiment teaching operation behaviors of a target dynamic simulation experiment item;
performing feature induction and arrangement on each graphic operation behavior description feature to obtain an image description feature cluster corresponding to each graphic operation behavior description feature;
performing convolution operation on image description feature clusters corresponding to the graphic operation behavior description features to obtain dynamic simulation experiment item convolution features of the target dynamic simulation experiment item;
performing experimental effect conclusion discrimination processing on the dynamic simulation experiment item convolution features to obtain a conclusion viewpoint hit score matching the target dynamic simulation experiment item against each initial experimental effect conclusion viewpoint;
and when the conclusion viewpoint hit score corresponding to any initial experimental effect conclusion viewpoint is greater than the conclusion viewpoint hit score threshold, determining that initial experimental effect conclusion viewpoint as the current experimental effect conclusion viewpoint of the target dynamic simulation experiment item.
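The five steps above can be sketched end to end as follows. This is a hedged illustration only: every function name, the anchor-based stand-in for clustering, the dot-product scoring, and all data are invented assumptions, not the patent's actual implementation.

```python
import numpy as np

def describe_behaviors(behaviors):
    """Step 1: one graphic operation behavior description feature per behavior."""
    return [np.asarray(b, dtype=float) for b in behaviors]

def induce_clusters(features, n_clusters=3):
    """Step 2: feature induction and arrangement -- assign each description
    feature to the nearest of n_clusters fixed anchors (a stand-in for any
    real clustering scheme)."""
    anchors = np.linspace(0.0, 1.0, n_clusters)
    return [int(np.argmin(np.abs(anchors - f.mean()))) for f in features]

def convolve_item(clusters, n_clusters=3):
    """Step 3: aggregate the cluster assignments into one item-level
    convolution feature (here: per-cluster behavior counts)."""
    item_vec = np.zeros(n_clusters)
    for c in clusters:
        item_vec[c] += 1.0
    return item_vec

def score_viewpoints(item_vec, viewpoint_protos):
    """Step 4: conclusion viewpoint hit score against each initial viewpoint."""
    return {name: float(item_vec @ p) for name, p in viewpoint_protos.items()}

def decide(scores, threshold):
    """Step 5: the item belongs to every viewpoint whose score exceeds the threshold."""
    return [name for name, s in scores.items() if s > threshold]

feats = describe_behaviors([[0.1, 0.2], [0.8, 0.9], [0.85, 0.7], [0.15, 0.1]])
clusters = induce_clusters(feats)
item_vec = convolve_item(clusters)
scores = score_viewpoints(item_vec, {
    "reaction_complete": np.array([0.0, 0.0, 1.0]),
    "reaction_incomplete": np.array([0.5, 0.0, 0.0]),
})
conclusions = decide(scores, threshold=1.5)
```

Note that the threshold test admits more than one conclusion viewpoint in principle; here only one viewpoint's score exceeds the threshold.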
In some possible embodiments, the obtaining graphical operational behavior descriptive features of the plurality of GUI experiment teaching operational behaviors of the target dynamic simulation experiment item includes:
The following steps are implemented for each GUI experiment teaching operation behavior:
performing feature extraction operation on the gesture input command in the GUI experiment teaching operation behavior to obtain gesture input command image features;
performing interval numerical mapping on the number of tracked behavior nodes in the GUI experiment teaching operation behavior to obtain behavior node quantization characteristics;
and carrying out feature integration on the gesture input command image features and the behavior node quantization features to obtain graphic operation behavior description features of the GUI experiment teaching operation behaviors.
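A minimal sketch of building one behavior's description feature as just described: extract image features from the gesture input command, map the tracked-node count into an interval index, and concatenate the two. The feature extractor and the interval edges are illustrative assumptions.

```python
import numpy as np

def extract_gesture_features(gesture_frame):
    """Stand-in for any image feature extractor over the gesture input
    command (e.g. a small CNN); here just an L2-normalized flatten."""
    arr = np.asarray(gesture_frame, dtype=float)
    return arr.ravel() / (np.linalg.norm(arr) + 1e-9)

def quantize_node_count(n_nodes, edges=(0, 5, 20, 100)):
    """Interval numerical mapping: map the tracked behavior-node count
    to the index of the interval it falls in, as one quantized feature."""
    return float(np.searchsorted(edges, n_nodes, side="right") - 1)

def describe_behavior(gesture_frame, n_nodes):
    """Feature integration: concatenate the gesture input command image
    features with the behavior node quantization feature."""
    img = extract_gesture_features(gesture_frame)
    return np.concatenate([img, [quantize_node_count(n_nodes)]])

feat = describe_behavior([[3.0, 4.0]], n_nodes=12)  # 12 nodes -> interval 1
```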
In some possible embodiments, the performing convolution operation on the image description feature clusters corresponding to the graphic operation behavior description features to obtain a dynamic simulation experiment item convolution feature of the target dynamic simulation experiment item includes:
acquiring, from the plurality of GUI experiment teaching operation behaviors, the first GUI experiment teaching operation behaviors for which the experiment item of the target dynamic simulation experiment item is an input-type experiment item;
acquiring, from the plurality of GUI experiment teaching operation behaviors, the second GUI experiment teaching operation behaviors for which the experiment item of the target dynamic simulation experiment item is a feedback-type experiment item;
performing convolution operation based on an image description feature cluster corresponding to the graphic operation behavior description feature of each first GUI experiment teaching operation behavior to obtain input experiment item convolution features of the target dynamic simulation experiment item;
Performing convolution operation based on an image description feature cluster corresponding to the graphic operation behavior description feature of each second GUI experiment teaching operation behavior to obtain feedback type experiment item convolution features of the target dynamic simulation experiment item;
and carrying out feature integration on the time sequence features of the plurality of GUI experiment teaching operation behaviors, the input type experiment item convolution features of the target dynamic simulation experiment item and the feedback type experiment item convolution features of the target dynamic simulation experiment item to obtain the dynamic simulation experiment item convolution features of the target dynamic simulation experiment item.
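This grouping-and-integration step can be sketched as follows, with invented behavior records, mean pooling standing in for the convolution operation, and a simple timing feature; none of these choices are prescribed by the text above.

```python
import numpy as np

def convolve(features):
    """Stand-in for the convolution operation over one group's features."""
    return np.mean(features, axis=0) if features else np.zeros(2)

# Invented behavior records: item type, timestamp, description feature.
behaviors = [
    {"item_type": "input",    "t": 0.0, "feat": np.array([1.0, 0.0])},
    {"item_type": "feedback", "t": 1.0, "feat": np.array([0.0, 1.0])},
    {"item_type": "input",    "t": 2.0, "feat": np.array([1.0, 1.0])},
]

first  = [b["feat"] for b in behaviors if b["item_type"] == "input"]
second = [b["feat"] for b in behaviors if b["item_type"] == "feedback"]

input_conv    = convolve(first)    # input-type experiment item convolution feature
feedback_conv = convolve(second)   # feedback-type experiment item convolution feature
timing = np.array([len(behaviors), behaviors[-1]["t"] - behaviors[0]["t"]])

# Feature integration: concatenate timing, input-type, and feedback-type parts.
item_conv = np.concatenate([timing, input_conv, feedback_conv])
```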
In some possible embodiments, the performing convolution operation based on the image description feature cluster corresponding to the graphic operation behavior description feature of each first GUI experiment teaching operation behavior to obtain the input experiment item convolution feature of the target dynamic simulation experiment item includes:
acquiring at least one third GUI experiment teaching operation behavior corresponding to each feedback experiment item in the first GUI experiment teaching operation behaviors;
performing convolution operation on each feedback type experiment item based on an image description feature cluster corresponding to a graphic operation behavior description feature of a third GUI experiment teaching operation behavior corresponding to the feedback type experiment item to obtain an input type experiment item convolution feature of the target dynamic simulation experiment item for the feedback type experiment item;
And performing splicing operation on the target dynamic simulation experiment item aiming at the input experiment item convolution characteristics of each feedback experiment item to obtain the input experiment item convolution characteristics of the target dynamic simulation experiment item.
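The splicing operation above amounts to concatenating the per-feedback-item features into one item-level feature; a small sketch with invented feature contents and item names:

```python
import numpy as np

# Input-type convolution features computed per feedback-type experiment
# item (values and keys are illustrative only).
per_feedback_features = {
    "feedback_item_A": np.array([2.0, 0.0, 1.0]),
    "feedback_item_B": np.array([0.0, 3.0, 1.0]),
}

# Splicing: concatenate in a fixed (here, sorted-key) order so the
# resulting item-level feature layout is deterministic.
input_conv = np.concatenate(
    [per_feedback_features[k] for k in sorted(per_feedback_features)]
)
```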
In some possible embodiments, the performing convolution operation based on the image description feature cluster corresponding to the graphic operation behavior description feature of the third GUI experiment teaching operation behavior corresponding to the feedback experiment item to obtain the input experiment item convolution feature of the target dynamic simulation experiment item for the feedback experiment item includes:
for each image description feature cluster obtained by the feature induction and arrangement:
acquiring a first statistical value of the third GUI experiment teaching operation behaviors whose graphic operation behavior description features belong to that image description feature cluster, and taking the first statistical value of each image description feature cluster as the description variable corresponding to that cluster;
and based on the distribution labels corresponding to each image description feature cluster, carrying out feature integration on the description variables of a plurality of image description feature clusters to obtain the input type experiment item convolution characteristics of the target dynamic simulation experiment item aiming at the feedback type experiment item.
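The counting statistic described above reads as a histogram over clusters: count how many behaviors fall into each image description feature cluster, then order the counts by the clusters' distribution labels. A sketch with hypothetical cluster ids:

```python
from collections import Counter

def cluster_count_feature(behavior_clusters, distribution_labels):
    """behavior_clusters: the cluster id of each third-type behavior.
    distribution_labels: cluster ids in their fixed distribution order.
    Returns the ordered count vector (the per-item convolution feature)."""
    counts = Counter(behavior_clusters)
    return [counts.get(c, 0) for c in distribution_labels]

feature = cluster_count_feature(
    behavior_clusters=["c2", "c0", "c2", "c1", "c2"],
    distribution_labels=["c0", "c1", "c2", "c3"],
)
```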
In some possible embodiments, the performing convolution operation based on the image description feature cluster corresponding to the graphic operation behavior description feature of each of the second GUI experiment teaching operation behaviors to obtain a feedback type experiment item convolution feature of the target dynamic simulation experiment item includes:
acquiring fourth GUI experiment teaching operation behaviors corresponding to each input experiment item in at least one second GUI experiment teaching operation behavior;
performing convolution operation on each input type experiment item based on an image description feature cluster corresponding to a graphic operation behavior description feature of a fourth GUI experiment teaching operation behavior of the corresponding input type experiment item to obtain a feedback type experiment item convolution feature of the target dynamic simulation experiment item for the input type experiment item;
and performing splicing operation on the target dynamic simulation experiment item aiming at the feedback experiment item convolution characteristics of each input experiment item to obtain the feedback experiment item convolution characteristics of the target dynamic simulation experiment item.
In some possible embodiments, the performing a convolution operation based on the image description feature cluster corresponding to the graphic operation behavior description feature corresponding to the fourth GUI experiment teaching operation behavior of the input-type experiment item to obtain a feedback-type experiment item convolution feature of the target dynamic simulation experiment item for the input-type experiment item includes:
for each image description feature cluster obtained by the feature induction and arrangement:
acquiring a second statistical value of the fourth GUI experiment teaching operation behaviors whose graphic operation behavior description features belong to that image description feature cluster, and taking the second statistical value of each image description feature cluster as the description variable corresponding to that cluster;
and based on the distribution labels corresponding to each image description feature cluster, carrying out feature integration on the description variables of a plurality of image description feature clusters to obtain the feedback type experiment item convolution characteristics of the target dynamic simulation experiment item aiming at the input type experiment item.
In some possible embodiments, the method further comprises:
acquiring graphic operation behavior description feature cases of a plurality of GUI experiment teaching operation behavior cases of a target dynamic simulation experiment project case;
carrying out feature induction and arrangement on each graphic operation behavior description feature case to obtain an image description feature cluster corresponding to each graphic operation behavior description feature case;
performing convolution operation on image description feature clusters corresponding to the graphic operation behavior description feature cases to obtain a dynamic simulation experiment item convolution feature case of the target dynamic simulation experiment item;
Carrying out experimental effect conclusion judgment processing on the dynamic simulation experiment item convolution characteristic cases through a dynamic simulation experiment item identification network to obtain conclusion point hit scores of the target dynamic simulation experiment item cases belonging to each initial experimental effect conclusion point;
and determining a first network cost based on the difference between the conclusion viewpoint hit score of each initial experimental effect conclusion viewpoint of the target dynamic simulation experimental project case and the conclusion viewpoint priori score of each initial experimental effect conclusion viewpoint of the target dynamic simulation experimental project case, and optimizing the dynamic simulation experimental project identification network based on the first network cost.
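A minimal sketch of the first network cost: the text only requires a cost built from the difference between predicted hit scores and prior scores, so binary cross-entropy is used here purely as an illustrative choice of loss form.

```python
import math

def first_network_cost(hit_scores, prior_scores, eps=1e-7):
    """Mean binary cross-entropy between predicted conclusion viewpoint
    hit scores and the prior (annotated) scores, one term per viewpoint."""
    total = 0.0
    for p, y in zip(hit_scores, prior_scores):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total -= y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return total / len(hit_scores)

cost = first_network_cost([0.9, 0.2, 0.1], [1.0, 0.0, 0.0])
```

In training, this scalar would be minimized to optimize the dynamic simulation experiment item identification network.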
In some possible embodiments, the method further comprises:
acquiring contact information between a conclusion description vector and the image description feature cluster;
acquiring a confidence coefficient of each image description feature cluster from the dynamic simulation experiment item convolution feature of the target dynamic simulation experiment item;
and when the confidence coefficient of the image description feature cluster exceeds a confidence coefficient threshold, taking the conclusion description vector corresponding to the image description feature cluster as a viewpoint support vector for matching the target dynamic simulation experiment item with any initial experiment effect conclusion viewpoint.
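Selecting viewpoint support vectors by confidence thresholding, as described, can be sketched as a simple filter; the cluster ids, confidences, vectors, and threshold are all invented for illustration.

```python
# Confidence of each image description feature cluster, read from the
# item-level convolution feature, and each cluster's conclusion
# description vector (all values illustrative).
cluster_confidence = {"c0": 0.92, "c1": 0.30, "c2": 0.75}
conclusion_vectors = {"c0": [1.0, 0.0], "c1": [0.0, 1.0], "c2": [0.5, 0.5]}
CONF_THRESHOLD = 0.6

# Clusters whose confidence exceeds the threshold contribute their
# conclusion description vector as a viewpoint support vector.
support_vectors = [conclusion_vectors[c]
                   for c, conf in cluster_confidence.items()
                   if conf > CONF_THRESHOLD]
```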
In some possible embodiments, before obtaining the contact information between the conclusion description vector and the image description feature cluster, the method further comprises:
acquiring graphic operation behavior description feature cases of a plurality of GUI experiment teaching operation behavior cases;
carrying out feature induction and arrangement on each graphic operation behavior description feature case to obtain an image description feature cluster corresponding to each graphic operation behavior description feature case, and carrying out simulation experiment analysis processing on each graphic operation behavior description feature case through a simulation experiment analysis network to obtain a conclusion description vector corresponding to each graphic operation behavior description feature case;
acquiring the GUI experiment teaching operation behavior cases associated with the real-time period and a first conclusion description vector as original GUI experiment teaching operation behavior cases; and, for each original GUI experiment teaching operation behavior case that meets a first requirement, configuring a new conclusion description vector for the image description feature cluster corresponding to that original GUI experiment teaching operation behavior.
In some possible embodiments, the method further comprises:
Configuring the new conclusion description vector for the original GUI experiment teaching operation behavior;
determining the target image description feature cluster corresponding to the GUI experiment teaching operation behavior case of each second conclusion description vector; screening, as GUI experiment teaching operation behavior cases to be processed, the cases that belong to the target image description feature cluster and are associated with the first conclusion description vector in the real-time period; and binding each GUI experiment teaching operation behavior case to be processed with the second conclusion description vector;
and debugging the simulation experiment analysis network based on the contact information of the GUI experiment teaching operation behavior case and the conclusion description vector.
In some possible embodiments, the method further comprises:
acquiring initial image description feature clusters of all original GUI experiment teaching operation behavior cases, and determining a first statistical value of the original GUI experiment teaching operation behavior cases corresponding to each initial image description feature cluster;
and when a first statistical value corresponding to the initial image description feature cluster corresponding to the original GUI experiment teaching operation behavior case exceeds a first statistical value threshold, determining that the original GUI experiment teaching operation behavior case meets the first requirement.
In some possible embodiments, the method further comprises:
obtaining a conclusion heat map (thermodynamic diagram) vector of the target dynamic simulation experiment item;
performing feature integration on the conclusion viewpoint hit score of the target dynamic simulation experiment item and the conclusion heat map vector to obtain a global conclusion output vector;
and carrying out experimental effect conclusion judgment processing on the global conclusion output vector through an experimental project evaluation network to obtain the experimental operation quality score of the target dynamic simulation experimental project.
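This evaluation step can be sketched as follows: concatenate the hit scores with the conclusion thermodynamic diagram (heat map) vector and score the result with a stand-in "experimental project evaluation network"; the fixed linear-plus-sigmoid layer and all weights are illustrative assumptions, not the patent's trained network.

```python
import math

def evaluate_quality(hit_scores, heatmap_vec, weights, bias=0.0):
    """Feature integration followed by a toy evaluation network."""
    global_vec = hit_scores + heatmap_vec           # concatenation
    z = sum(w * x for w, x in zip(weights, global_vec)) + bias
    return 1.0 / (1.0 + math.exp(-z))               # quality score in (0, 1)

quality = evaluate_quality(
    hit_scores=[0.8, 0.1],
    heatmap_vec=[0.5, 0.2],
    weights=[1.0, -1.0, 0.5, 0.5],
)
```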
In a second aspect, the present invention also provides an image recognition tracking system, including a processor and a memory; the processor is in communication with the memory, and the processor is configured to read and execute a computer program from the memory to implement the method described above.
In a third aspect, the present invention also provides a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the method described above.
According to the embodiment of the invention, graphic operation behavior description features of a plurality of GUI experiment teaching operation behaviors of a target dynamic simulation experiment item are obtained, and feature induction and arrangement are performed on each graphic operation behavior description feature to obtain the image description feature cluster corresponding to each feature. A convolution operation is then performed on the image description feature clusters to obtain the dynamic simulation experiment item convolution features of the target dynamic simulation experiment item. Because the feature induction operates on the graphic operation behavior description features of the GUI experiment teaching operation behaviors, the feature induction precision at the graphic layer can be improved, which in turn improves the feature expression quality of the dynamic simulation experiment item convolution features. Experimental effect conclusion discrimination processing is then performed on the dynamic simulation experiment item convolution features to obtain a conclusion viewpoint hit score matching the target dynamic simulation experiment item against each initial experimental effect conclusion viewpoint. When the conclusion viewpoint hit score corresponding to any initial experimental effect conclusion viewpoint is greater than the conclusion viewpoint hit score threshold, the target dynamic simulation experiment item is determined to belong to that conclusion viewpoint. In this way, the experimental effect conclusion viewpoint corresponding to the target dynamic simulation experiment item can be determined accurately and efficiently, improving the degree of intelligence of the interactive intelligent experiment teaching system and the interpretability of the dynamic simulation experiment item.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic flow chart of an image recognition tracking method based on an interactive intelligent experiment teaching system provided by an embodiment of the invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention; rather, they are merely examples of apparatus and methods consistent with aspects of the invention.
It should be noted that the terms "first," "second," and the like in the description of the present invention and the above-described drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided by the embodiments of the present invention may be performed in an image recognition tracking system, a computer device, or similar computing device. Taking the example of running on an image recognition tracking system, the image recognition tracking system may comprise one or more processors (which may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory for storing data, and optionally the image recognition tracking system may further comprise transmission means for communication functions. It will be appreciated by those skilled in the art that the above-described configuration is merely illustrative and is not intended to limit the configuration of the image recognition tracking system. For example, the image recognition tracking system may also include more or fewer components than shown above, or have a different configuration than shown above.
The memory may be used to store a computer program, for example, a software program of application software and a module, for example, a computer program corresponding to an image recognition tracking method based on an interactive intelligent experiment teaching system in an embodiment of the present invention, and the processor executes various functional applications and data processing by running the computer program stored in the memory, that is, implements the method described above. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory may further include memory remotely located with respect to the processor, the remote memory being connectable to the image recognition tracking system through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission means is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the image recognition tracking system. In one example, the transmission means comprises a network adapter (Network Interface Controller, simply referred to as NIC) that can be connected to other network devices via a base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image recognition tracking method based on an interactive intelligent experiment teaching system according to an embodiment of the present invention, where the method is applied to an image recognition tracking system and may include steps 110 to 150.
And 110, acquiring graphic operation behavior description characteristics of a plurality of GUI experiment teaching operation behaviors of the target dynamic simulation experiment item.
In the embodiment of the invention, the terms target dynamic simulation experiment item, GUI experiment teaching operation behavior, and graphic operation behavior description features are explained as follows.
Target dynamic simulation experiment item: this refers to a specific experimental project performed in an interactive intelligent experimental teaching system. For example, it may be a specific experiment in the fields of physics, chemistry or biology.
GUI experiment teaching operation behavior: this is an experimental operational behavior performed by a user on a Graphical User Interface (GUI), including but not limited to clicking, dragging, rotating, and the like.
Graphical operational behavior description features: this is a graphical characteristic used to describe the behavior of the GUI experiment teaching operation, such as the position, size, color, etc. of the operation object.
Further, in interactive intelligent experimental teaching systems, graphical operational behavioral description features are a very critical concept that can help the system understand and track various operations of users in a virtual environment. Illustratively, the graphical operational behavioral description features generally include the following aspects.
(1) Attributes of the operation object: such as location, size, color, shape, etc. Such information may help the system identify the target of the operation and determine if it has changed.
(2) Type and manner of operation: such as clicking, dragging, rotating, etc. Different operation types may cause different effects and are therefore very important for understanding the intention of the user and simulating experimental results.
(3) Time and order of operation: in some cases, the timing and sequence of operations may affect the results of the experiment. For example, in chemical experiments, the order and rate of reagent addition may affect the outcome of the reaction.
(4) Context of operation: in complex experimental procedures, a single operation often needs to be understood in a particular context. For example, a test tube is first placed in a rack and reagents are then added to it; such contextual information is important for understanding the overall experimental procedure.
By collecting and analyzing these graphical operational behavioral profiles, the system can better understand the user's operation, thereby providing a more accurate and realistic simulated experimental environment.
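The four aspects above can be sketched as one record per GUI operation. This is a minimal illustration only; the field names and values are assumptions, not the patent's data model.

```python
from dataclasses import dataclass

# Hypothetical record covering the four aspects of a graphical operation
# behavior description feature: object attributes (1), operation type (2),
# timing (3), and context (4). All names are illustrative assumptions.
@dataclass
class GuiOpFeature:
    obj_position: tuple   # (x, y) of the operation target
    obj_size: tuple       # (w, h)
    obj_color: tuple      # RGB
    op_type: str          # "click" | "drag" | "rotate" ...
    timestamp: float      # seconds since experiment start
    context: str          # e.g. "dropper_over_flask"

ops = [
    GuiOpFeature((120, 80), (16, 60), (200, 200, 210), "drag", 3.2, "dropper_selected"),
    GuiOpFeature((120, 95), (16, 60), (200, 200, 210), "click", 4.0, "dropper_over_flask"),
]

# Operations are naturally ordered by time (aspect 3 above).
assert ops[0].timestamp < ops[1].timestamp
```

Keeping timing and context alongside the raw object attributes is what lets later steps reason about operation order and situation, not just appearance.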
Step 110 is illustrated with a specific example: assuming that the target dynamic simulation experiment item is a virtual chemical experiment, students need to perform titration operation in a virtual environment. In this process, GUI experiment teaching operations may include selecting a pipette, picking up reagents, controlling titration speed, etc. Each of the operational behaviors has its corresponding graphical operational behavior description feature. For example, when a dropper is selected, the information such as the position, color, size, etc. of the dropper is a graphical operational behavior descriptive feature of the operational behavior. The system, by capturing and analyzing these features, understands and tracks the user's operational behavior. The design enables the interactive intelligent experiment teaching system to better simulate and support various experiment operations, thereby providing a deeper and intuitive learning experience.
Step 120: performing feature induction and arrangement on each graphical operation behavior description feature to obtain an image description feature cluster corresponding to each graphical operation behavior description feature.
In the embodiment of the present invention, the terms feature induction and arrangement and image description feature cluster are explained as follows.
Feature induction and arrangement: this is a data processing method that maps collected graphical operation behavior description features onto a set of higher-level, more abstract features by analyzing and sorting them. This helps the system better understand and identify different operation behaviors, improving its accuracy and efficiency.
Image description feature clusters: in the feature induction finishing process, multiple groups containing similar or related features may be generated, and each group is an image description feature cluster. These clusters of features may represent certain specific operational behaviors or states to aid the system in more advanced analysis and judgment.
Step 120 is described by taking a chemical titration experiment as an example: in step 110, the system collects various graphical operation behavior description features, such as the position, color, and size of the dropper, and the type of operation performed by the user (click, drag, etc.). Next, in step 120, the system sorts and groups these features.
For example, the system may find that when a user performs a titration operation, the position, color, and size of the dropper change in a specific way, and that these changes all relate to the titration rate. The system can therefore generalize these features into a single higher-level feature: titration speed.
After finishing the feature induction, the system also generates a series of image description feature clusters. Each feature cluster contains a set of similar or related features that represent a particular operational behavior or state. For example, one feature cluster may include all features associated with "fast titration" and another feature cluster may include all features associated with "slow titration".
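A minimal sketch of this grouping, using the "fast titration" vs "slow titration" clusters from the example. The feature layout and the speed threshold are illustrative assumptions; the patent does not specify a concrete clustering rule.

```python
# Hypothetical raw per-operation features; "dropper_dy" stands in for a
# measured dropper movement speed. Values and threshold are illustrative.
raw_features = [
    {"op": "drag", "dropper_dy": 2.0},   # slow downward motion
    {"op": "drag", "dropper_dy": 9.5},   # fast motion
    {"op": "drag", "dropper_dy": 1.4},
    {"op": "drag", "dropper_dy": 8.1},
]

# Feature induction and arrangement: map each raw feature into an image
# description feature cluster representing an operation state.
clusters = {"slow_titration": [], "fast_titration": []}
for f in raw_features:
    key = "fast_titration" if f["dropper_dy"] > 5.0 else "slow_titration"
    clusters[key].append(f)

assert len(clusters["slow_titration"]) == 2
assert len(clusters["fast_titration"]) == 2
```

In practice the grouping could equally be learned (e.g. by clustering), but the output shape is the same: each cluster gathers similar or related features representing one operation behavior or state.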
Step 130: performing a convolution operation on the image description feature clusters corresponding to the graphical operation behavior description features to obtain the dynamic simulation experiment item convolution feature of the target dynamic simulation experiment item.
In the embodiment of the invention, the terms convolution operation and dynamic simulation experiment item convolution feature are explained as follows.
Convolution operation: in the fields of computer vision and deep learning, convolution operations are a common image processing technique. Briefly, it generates a new image or feature map by applying a small matrix (called a convolution kernel or filter) to each pixel of the image. In this process, local information in the original image is extracted, forming higher-level, more abstract features.
Dynamic simulation experiment item convolution characteristics: this refers to features extracted from the image description feature clusters by a convolution operation. Because the convolution operation can capture the local information of the image and convert the local information into advanced features, the dynamic simulation experiment item convolution features can contain more important information about the experiment item, and help the system to perform more accurate analysis and judgment.
Step 130 is described by taking a chemical titration experiment as an example: in step 120, the system has performed feature induction sorting on the graphical operational behavior description features and obtains a plurality of image description feature clusters. Each feature cluster contains a set of similar or related features that represent a particular operational behavior or state.
Next, in step 130, the system will perform a convolution operation on these feature clusters. For example, the system may use a specially designed convolution kernel to detect a change in color of the reagent during titration. After the convolution operation, the system obtains a new characteristic diagram, namely the convolution characteristic of the dynamic simulation experiment item. This feature map captures important information about the color change of the reagent, thereby helping the system to determine if the titration has been completed and if the result of the titration is accurate.
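The convolution step can be illustrated with a toy feature map. Here a hand-written 3x3 vertical-edge kernel slides over a small "reagent color" map and responds where the color value jumps; the map, kernel, and interpretation are illustrative assumptions, not the patent's actual kernels.

```python
import numpy as np

# Toy map of reagent color intensity; the 1-valued block stands in for
# a region where the reagent has changed color. Values are illustrative.
color_map = np.array([
    [0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0],
], dtype=float)

# Simple vertical-edge detection kernel.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

def conv2d(img, k):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = conv2d(color_map, kernel)
# Positive responses mark the left edge of the colored region,
# negative responses its right edge.
assert feature_map.shape == (2, 3)
```

This is exactly the sense in which convolution "captures local information": each output value summarizes one small neighborhood of the input, and a well-chosen kernel turns raw pixels into a higher-level cue such as "color boundary here".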
Step 140: performing experimental effect conclusion discrimination processing on the dynamic simulation experiment item convolution feature to obtain a conclusion viewpoint hit score for the match between the target dynamic simulation experiment item and each initial experimental effect conclusion viewpoint.
In the embodiment of the invention, the terms of the experimental effect conclusion discrimination processing, the initial experimental effect conclusion viewpoint and the conclusion viewpoint hit score are explained as follows.
Judging and processing experimental effect conclusion: the method is characterized in that the convolution characteristics of the dynamic simulation experiment item are analyzed and judged to deduce the result or conclusion of the experiment. This process may involve a complex series of calculations and logic decisions.
Conclusion of initial experimental effect view: this is a set of possible experimental conclusions preset by the system before the experiment starts. They may help the system to better understand the goals of the experiment and possible outcomes.
Conclusion opinion hit score: the method is used for evaluating the matching degree of the experimental effect conclusion discrimination processing result and the initial experimental effect conclusion viewpoint. Each initial experimental effect conclusion view has a corresponding hit score, and the higher the score is, the higher the matching degree of the view and the experimental result is.
Step 140 is illustrated by taking a chemical titration experiment as an example: in step 130, the system obtains the convolution characteristics of the dynamic simulation experiment item through convolution operation, and captures the important information of the reagent color change. Next, in step 140, the system will perform experimental effect conclusion discrimination processing on these features.
The system first references a set of initial experimental effect conclusion views. For example, possible perspectives include: "titration completed, accurate result", "titration completed, but inaccurate result" or "titration not completed".
The system will then determine which view is closest to the actual situation based on the convolution characteristics of the dynamic simulation experiment item and calculate a hit score for each view. If the hit score for a viewpoint exceeds a preset threshold, then that viewpoint can be considered as the effect conclusion of the current experiment.
Step 150: when the conclusion viewpoint hit score corresponding to an initial experimental effect conclusion viewpoint is greater than the conclusion viewpoint hit score threshold, determining that initial experimental effect conclusion viewpoint as the current experimental effect conclusion viewpoint of the target dynamic simulation experiment item.
In the embodiment of the invention, the terms of the conclusion point hit score threshold and the current experimental effect conclusion point are explained as follows.
Conclusion point of view hit score threshold: this is a preset threshold for evaluating the accuracy of the experimental effect conclusion to determine the processing result. If the hit score for a conclusion view exceeds this threshold, the system considers this conclusion view to be correct.
Conclusion opinion hit score: this is the score obtained from each initial experimental effect conclusion perspective during the experimental effect conclusion discrimination process. The higher the score, the higher the degree of matching of the viewpoint with the experimental result.
Step 150 is illustrated by way of example of a chemical titration experiment: in step 140, the system has performed experimental effect conclusion discrimination processing on the convolution characteristics of the dynamic simulation experimental project, and calculates hit scores of each initial experimental effect conclusion viewpoint.
For example, assume that there are three perspectives: "titration completed, accurate result", "titration completed, but inaccurate result" and "titration not completed". Hit scores from these three perspectives were 0.8, 0.2 and 0.1, respectively, after the experimental effect conclusion discrimination process.
Then, in step 150, the system references a predetermined conclusion viewpoint hit score threshold, e.g., 0.7. Since the hit score (0.8) of the "titration completed, result accurate" viewpoint exceeds the threshold (0.7), the system considers this viewpoint to be correct.
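The threshold check in steps 140-150 reduces to a simple comparison. The scores and the 0.7 threshold come from the example above; how the scores themselves are produced (the discrimination model) is outside this sketch.

```python
# Hit scores from the example above; the scoring model is assumed given.
hit_scores = {
    "titration completed, accurate result": 0.8,
    "titration completed, but inaccurate result": 0.2,
    "titration not completed": 0.1,
}
THRESHOLD = 0.7  # preset conclusion viewpoint hit score threshold

# Step 150: a viewpoint whose score exceeds the threshold becomes the
# current experimental effect conclusion viewpoint.
conclusion = None
for view, score in hit_scores.items():
    if score > THRESHOLD:
        conclusion = view
        break

assert conclusion == "titration completed, accurate result"
```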
Through the processing, the interactive intelligent experiment teaching system can better understand and simulate the operation behaviors of the user, and provides a more accurate and more vivid simulation experiment environment.
It can be seen that, by applying steps 110-150, the embodiment of the present invention obtains graphical operation behavior description features of a plurality of GUI experiment teaching operation behaviors of a target dynamic simulation experiment item, and performs feature induction and arrangement on each graphical operation behavior description feature to obtain an image description feature cluster corresponding to each feature. A convolution operation is then performed on the image description feature clusters to obtain the dynamic simulation experiment item convolution feature of the target dynamic simulation experiment item. Because the feature induction is performed on the graphical operation behavior description features of the GUI experiment teaching operation behaviors, the feature induction precision at the graphics layer can be improved, which in turn improves the feature expression quality of the dynamic simulation experiment item convolution feature. Experimental effect conclusion discrimination processing is then performed on the convolution feature to obtain conclusion viewpoint hit scores for the match between the target dynamic simulation experiment item and each initial experimental effect conclusion viewpoint. When the hit score corresponding to an initial experimental effect conclusion viewpoint exceeds the hit score threshold, the target dynamic simulation experiment item is determined to belong to that viewpoint. In this way, the experimental effect conclusion viewpoint corresponding to the target dynamic simulation experiment item can be determined accurately and efficiently, improving both the degree of intelligence of the interactive intelligent experiment teaching system and the interpretability of the dynamic simulation experiment item.
In detail, the embodiment of the invention converts the complex GUI experimental teaching operation behavior into a series of higher-level and more abstract image description characteristic clusters through the induction and arrangement of the graphic operation behavior description characteristics. The induction arrangement can improve the feature induction precision of the graph layer, thereby improving the feature expression quality of the convolution features of the dynamic simulation experiment item.
In particular, without feature induction and arrangement, the system might need to process a large amount of raw data and search it for useful information, which is time-consuming and error-prone. Through feature induction and arrangement, the system can rapidly identify key operation behaviors and states, greatly improving its efficiency and accuracy.
After the convolution characteristics of the dynamic simulation experiment items are obtained, the system also carries out experimental effect conclusion judgment processing. This is a process based on logical reasoning and deep learning that can help the system extract important information from the convolution characteristics and generate experimental results or conclusions therefrom.
In the experimental effect conclusion judging and processing process, the system also calculates the hit score of each initial experimental effect conclusion viewpoint. This score reflects how well the view matches the experimental results. By setting a conclusion opinion hit score threshold, the system can accurately determine which opinion is correct.
Through the processing, the embodiment of the invention not only can accurately and efficiently determine the experimental effect conclusion view corresponding to the target dynamic simulation experiment item, but also improves the intelligent degree of the interactive intelligent experiment teaching system and the interpretability of the dynamic simulation experiment item.
For example, in conducting a chemical titration experiment, the system can quickly identify the user's operational behavior (e.g., titration speed, color change of the reagent, etc.) through feature induction finishing and convolution operations, and generate the result of the experiment accordingly. Meanwhile, by calculating the hit score, the system can also judge whether the operation of the user accords with the expectation or not and whether the experimental result is accurate or not. In this way, both teachers and students can better understand and evaluate the experimental process, thereby obtaining a deeper learning experience.
In some possible embodiments, acquiring the graphical operation behavior description features of the plurality of GUI experiment teaching operation behaviors of the target dynamic simulation experiment item described in step 110 includes step 111.
Step 111: performing the following steps 1111-1113 for each GUI experiment teaching operation behavior.
Step 1111: performing a feature extraction operation on the gesture input command in the GUI experiment teaching operation behavior to obtain gesture input command image features.
Step 1112: performing interval value mapping on the tracked behavior node count in the GUI experiment teaching operation behavior to obtain behavior node quantization features.
Wherein the interval value mapping includes normalization processing.
Step 1113: performing feature integration on the gesture input command image features and the behavior node quantization features to obtain the graphical operation behavior description features of the GUI experiment teaching operation behavior.
In the above embodiment, the gesture input command image feature is obtained by performing a feature extraction operation on the gesture input command in the GUI experiment teaching operation behavior; it expresses operations, such as sliding and clicking, that the user performs through gestures. The behavior node quantization feature is obtained by performing interval value mapping on the tracked behavior node count; it reflects the number of behavior nodes involved in the user operation. Feature integration combines the gesture input command image features and the behavior node quantization features into a single, more comprehensive graphical operation behavior description feature.
Step 1111-step 1113 will be described by taking a chemical titration experiment as an example.
In step 1111, the system first performs a feature extraction operation on the gesture input command of the user. For example, if the user is titrating by sliding a dropper, the system will extract gesture input command image features associated therewith.
Then, in step 1112, the system further performs interval value mapping on the tracked behavior node number in the user operation process, to obtain the behavior node quantization feature. For example, if the user has completed a titration and begins to observe a color change, the behavior node quantification feature may include a titration step and an observation step.
Finally, in step 1113, the system performs feature integration on the gesture input command image feature and the behavior node quantization feature to obtain a graphic operation behavior description feature of the GUI experiment teaching operation behavior. This descriptive feature contains comprehensive information about the user's operation and can help the system better understand and simulate the user's behavior.
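Steps 1111-1113 can be sketched end to end: extract a toy gesture feature vector, min-max normalize the behavior node count (one form of the interval value mapping mentioned in step 1112), then concatenate. The gesture features chosen (path length, point count) and the node-count range are illustrative assumptions.

```python
def extract_gesture_features(gesture_trace):
    """Step 1111 sketch: derive simple image features from a gesture
    trace, here the total path length and the number of trace points."""
    xs, ys = zip(*gesture_trace)
    path_len = sum(
        ((xs[i + 1] - xs[i]) ** 2 + (ys[i + 1] - ys[i]) ** 2) ** 0.5
        for i in range(len(gesture_trace) - 1)
    )
    return [path_len, float(len(gesture_trace))]

def normalize_node_count(n_nodes, lo=0, hi=20):
    """Step 1112 sketch: interval value mapping via min-max
    normalization into [0, 1]. The [0, 20] range is an assumption."""
    return (n_nodes - lo) / (hi - lo)

trace = [(0, 0), (3, 4), (6, 8)]          # two 5-unit drag segments
gesture_feat = extract_gesture_features(trace)
node_feat = normalize_node_count(10)       # 10 tracked behavior nodes

# Step 1113: feature integration by concatenation.
description_feature = gesture_feat + [node_feat]
assert description_feature == [10.0, 3.0, 0.5]
```

Normalizing the node count before integration keeps it on the same scale as the image features, so neither dominates downstream processing.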
Therefore, by implementing the steps for each GUI experiment teaching operation behavior, key characteristics of user operation can be extracted from multiple dimensions, so that understanding and identifying capability of the system to the user operation are improved. Meanwhile, through feature integration, the system can fuse all the features together to form a more comprehensive and more accurate graphic operation behavior description feature. In this way, the system can provide higher quality services, both in real-time interactions and in post-analysis.
In some optional embodiments, performing the convolution operation in step 130 on the image description feature clusters corresponding to the plurality of graphical operation behavior description features to obtain the dynamic simulation experiment item convolution feature of the target dynamic simulation experiment item includes steps 131-135.
Step 131: acquiring, from the plurality of GUI experiment teaching operation behaviors, first GUI experiment teaching operation behaviors whose corresponding experiment item in the target dynamic simulation experiment item is an input-type experiment item.
Step 132: acquiring, from the plurality of GUI experiment teaching operation behaviors, second GUI experiment teaching operation behaviors whose corresponding experiment item is a feedback-type experiment item.
Step 133: performing a convolution operation based on the image description feature clusters corresponding to the graphical operation behavior description features of each first GUI experiment teaching operation behavior to obtain the input-type experiment item convolution feature of the target dynamic simulation experiment item.
Step 134: performing a convolution operation based on the image description feature clusters corresponding to the graphical operation behavior description features of each second GUI experiment teaching operation behavior to obtain the feedback-type experiment item convolution feature of the target dynamic simulation experiment item.
Step 135: performing feature integration on the timing features of the plurality of GUI experiment teaching operation behaviors, the input-type experiment item convolution feature, and the feedback-type experiment item convolution feature to obtain the dynamic simulation experiment item convolution feature of the target dynamic simulation experiment item.
In the embodiment of the invention, the input experimental project convolution feature is a feature obtained by performing convolution operation on the basis of an image description feature cluster corresponding to the graphic operation behavior description feature of the first GUI experimental teaching operation behavior (namely, the input experimental project). This feature reflects the user's operation at the beginning of the experiment. The feedback type experiment item convolution feature is a feature obtained by carrying out convolution operation on an image description feature cluster corresponding to the graphic operation behavior description feature of the second GUI experiment teaching operation behavior (namely, the feedback type experiment item). This feature reflects the user's operation after receiving system feedback. The timing characteristics are time information about the behavior of the GUI experiment teaching operation, such as the order, frequency, or duration of the operation, etc.
The chemical titration experiments are taken as examples to illustrate the steps 131-135.
In step 131, the system first obtains the first operational behavior from all GUI experiment teaching operational behaviors, which is an input type experiment item. For example, the user may select a reagent at this stage and set the titration rate.
Then, in step 132, the system acquires a second operational behavior, which is a feedback type experiment item. For example, the user may adjust the titration rate or change the reagent after receiving feedback from the system.
Next, in step 133 and step 134, the system performs convolution operations on the input-type experiment item and the feedback-type experiment item, respectively, to obtain the input-type experiment item convolution characteristic and the feedback-type experiment item convolution characteristic.
Finally, in step 135, the system performs feature integration on the time sequence features, the input type experiment item convolution features, and the feedback type experiment item convolution features of all GUI experiment teaching operation behaviors, so as to obtain the dynamic simulation experiment item convolution features.
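The split-convolve-integrate flow of steps 131-135 can be sketched as follows. For brevity the per-branch "convolution" is reduced to a mean over each branch's feature vectors; the behavior records, types, and values are illustrative assumptions.

```python
# Hypothetical behaviors tagged by experiment item type, with a
# timestamp and a small feature vector each. Values are illustrative.
behaviors = [
    {"type": "input",    "t": 1.0, "feat": [0.2, 0.4]},  # select reagent
    {"type": "input",    "t": 2.0, "feat": [0.3, 0.5]},  # set titration speed
    {"type": "feedback", "t": 5.0, "feat": [0.9, 0.1]},  # adjust after color change
]

def branch_feature(items):
    """Stand-in for the per-branch convolution of steps 133/134:
    collapse a branch's feature vectors into one vector (mean here)."""
    n = len(items)
    dims = len(items[0]["feat"])
    return [sum(it["feat"][d] for it in items) / n for d in range(dims)]

# Steps 131/132: split by experiment item type.
input_feat = branch_feature([b for b in behaviors if b["type"] == "input"])
feedback_feat = branch_feature([b for b in behaviors if b["type"] == "feedback"])

# Step 135: integrate timing features with both branch features.
timing_feat = [b["t"] for b in behaviors]
item_feature = timing_feat + input_feat + feedback_feat

assert [round(v, 10) for v in item_feature] == [1.0, 2.0, 5.0, 0.25, 0.45, 0.9, 0.1]
```

Keeping the two branches separate until the final integration is what lets the system later attribute parts of the combined feature to the user's initial inputs versus their reactions to feedback.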
In more detail, the input-type experimental item and the feedback-type experimental item are one classification method for user interaction behavior.
Input type experiment items mainly relate to operations in which a user inputs information or instructions to a system. For example, in a chemical titration experiment, the user's operations of selecting a reagent, setting a titration speed, and the like may be regarded as input-type experimental items. For such items, emphasis is placed on understanding the user's operational goals and correctly recognizing and processing the user's inputs.
The feedback type experiment item mainly relates to operations performed by a user according to system feedback. For example, in a chemical titration experiment, a user may adjust the titration rate after observing a color change, which is a typical feedback-type experiment. For such items, emphasis is placed on understanding how the user uses feedback from the system to adjust his own operation and obtain valuable information from it.
The user operation is distinguished and processed according to the input type and the feedback type, so that the system is facilitated to more accurately understand the operation intention of the user, and more accurate feedback is provided. Meanwhile, the operation process of the user can be tracked and recorded better, and later analysis and evaluation are facilitated.
In addition, by integrating the convolution characteristics of the input-type experiment item and the feedback-type experiment item, the system can obtain a more comprehensive and more detailed characteristic describing the operation behavior of the user. This not only can help the system better simulate and understand the experimental process, but also helps to improve the interpretability of the experimental results.
For example, in a chemical titration experiment, the system can learn the user's operating habits and strategy at the starting stage by analyzing the input-type experiment item convolution feature, and can learn how the user adjusts the titration speed in response to color changes by analyzing the feedback-type experiment item convolution feature. Such information is useful for understanding the course of the experiment, evaluating the experimental results, and guiding the user in improving the operation.
In general, by distinguishing and integrating convolution characteristics of input type experiment items and feedback type experiment items, the embodiment of the invention can more comprehensively and deeply understand the operation behaviors of users, thereby providing higher-quality teaching services.
In other optional embodiments, performing the convolution operation in step 133 based on the image description feature clusters corresponding to the graphical operation behavior description features of each first GUI experiment teaching operation behavior to obtain the input-type experiment item convolution feature of the target dynamic simulation experiment item includes steps 1331-1333.
Step 1331, obtaining at least one third GUI experiment teaching operation behavior corresponding to each feedback experiment item in the first GUI experiment teaching operation behaviors.
Step 1332: for each feedback-type experiment item, performing a convolution operation based on the image description feature clusters corresponding to the graphical operation behavior description features of the third GUI experiment teaching operation behaviors corresponding to that feedback-type experiment item, so as to obtain the input-type experiment item convolution feature of the target dynamic simulation experiment item for that feedback-type experiment item.
Step 1333: performing a splicing operation on the input-type experiment item convolution features obtained for each feedback-type experiment item to obtain the input-type experiment item convolution feature of the target dynamic simulation experiment item.
In the embodiment of the invention, the third GUI experiment teaching operation behavior refers to the operation behavior, among the input-type experiment items, that corresponds to a given feedback-type experiment item; that is, the adjustment or modification the user makes to an input-type experiment item after receiving feedback from the system. The input-type experiment item convolution feature for a feedback-type experiment item is obtained by performing a convolution operation on the image description feature clusters corresponding to the graphical operation behavior description features of the third GUI experiment teaching operation behaviors; it reflects how the user adjusts the input-type experiment item after obtaining feedback. The splicing operation combines the input-type experiment item convolution features obtained for each feedback-type experiment item into a single, more complete input-type experiment item convolution feature.
Step 1331-step 1333 will be described with respect to a chemical titration experiment.
In step 1331, the system first obtains at least one third GUI experiment teaching action, such as a user operation to adjust the titration speed after observing a color change.
Then, in step 1332, the system performs a convolution operation for each feedback-type experiment item (e.g., color change) based on the graphical operation behavior descriptive feature of the third GUI experiment teaching operation behavior (e.g., adjusting titration speed), resulting in an input-type experiment item convolution feature for that feedback-type experiment item.
Finally, in step 1333, the system performs a stitching operation on the input-type experiment item convolution characteristics for each feedback-type experiment item to obtain input-type experiment item convolution characteristics of the target dynamic simulation experiment item.
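The splicing operation of step 1333 is, in essence, a deterministic concatenation of the per-feedback-item features. The two feedback items and their feature values below are illustrative assumptions.

```python
# Hypothetical per-feedback-item input-type convolution features
# (step 1332 output). Names and values are illustrative.
per_feedback_feats = {
    "color_change":   [0.7, 0.1],
    "endpoint_alarm": [0.2, 0.9],
}

# Step 1333: splice into one vector. Iterating in a fixed (sorted)
# order keeps the feature layout stable across runs.
spliced = []
for name in sorted(per_feedback_feats):
    spliced.extend(per_feedback_feats[name])

assert spliced == [0.7, 0.1, 0.2, 0.9]
```

A stable concatenation order matters: downstream discrimination processing assumes each position in the spliced vector always refers to the same feedback item.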
The beneficial effects of the above steps are: by considering the adjustment operation of the user on the input type experiment item after the feedback is obtained, the embodiment of the invention can more comprehensively and deeply understand the operation behaviors of the user. This not only helps the system to more accurately simulate and predict the experimental process, but also helps to improve the interpretability of the experimental results. Meanwhile, through splicing operation, the system can obtain comprehensive characteristics containing a plurality of pieces of feedback experimental project information, which is very useful for understanding complex experimental processes.
In some optional embodiments, performing the convolution operation in step 1332 based on the image description feature clusters corresponding to the graphical operation behavior description features of the third GUI experiment teaching operation behaviors corresponding to the feedback-type experiment item, to obtain the input-type experiment item convolution feature of the target dynamic simulation experiment item for that feedback-type experiment item, includes step 13321.
Step 13321: the following steps 13321a-13321b are performed for each image description feature cluster obtained by the feature induction and arrangement.
Step 13321a, obtaining a first statistical value of the graphic operation behavior description features that belong to the image description feature cluster in the at least one third GUI experiment teaching operation behavior, and taking the first statistical value of each image description feature cluster as the description variable corresponding to that image description feature cluster.
And step 13321b, carrying out feature integration on descriptive variables of a plurality of image descriptive feature clusters based on the distribution labels corresponding to each image descriptive feature cluster to obtain the input type experiment item convolution feature of the target dynamic simulation experiment item aiming at the feedback type experiment item.
In the embodiment of the invention, the first statistical value is a statistical value of the graphic operation behavior description features belonging to the image description feature cluster in the acquired at least one third GUI experiment teaching operation behavior. It may reflect some statistical information of the data inside the feature cluster, such as the average, maximum, minimum, or variance. The description variable is descriptive information of the corresponding image description feature cluster obtained based on the first statistical value; it reflects the main characteristics of the feature cluster in a more concise and intuitive way. The distribution label is a label corresponding to each image description feature cluster, and is generally used for representing the position or distribution of the feature cluster in the overall feature space.
Steps 13321a-13321b will be described below with reference to a chemical titration experiment.
In step 13321a, the system first obtains a first statistic of the graphical operational behavior descriptive characteristics belonging to the image descriptive characteristic cluster in at least one third GUI experimental teaching operational behavior. For example, if the third GUI experiment teaches that the operational behavior includes adjusting the titration speed and observing the color change, then the system may calculate the average titration speed and the degree of color change for both operational behaviors as the first statistic. The system then takes the first statistical value of each image description feature cluster as a description variable of the corresponding image description feature cluster.
Next, in step 13321b, the system performs feature integration on the description variables of the plurality of image description feature clusters based on the distribution labels corresponding to each image description feature cluster, so as to obtain the input type experiment item convolution feature of the target dynamic simulation experiment item for the feedback type experiment item.
By introducing the first statistical value and the description variable, the embodiment of the invention can more effectively extract and utilize the information of the graphic operation behavior description features, thereby improving the quality and practicability of the convolution feature. Meanwhile, by considering the distribution labels, the system can better understand and process the relations among different image description feature clusters, so that the effect of feature integration is improved. These all help to improve the system's ability to understand and simulate the user's operation behavior, and improve the accuracy and interpretability of experimental results.
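Steps 13321a-13321b can be sketched as follows, taking the mean as the first statistical value and using the distribution labels to fix the integration order. The cluster names, feature values, and labels are illustrative assumptions, not values from the patent.

```python
import statistics

# Assumed clusters of graphic operation behavior description features drawn
# from the third GUI experiment teaching operation behaviors.
clusters = {
    "titration_speed": [2.0, 3.0, 2.5],  # mL/min readings (assumed)
    "color_change": [0.2, 0.6, 0.4],     # normalized change degree (assumed)
}
# Distribution labels: each cluster's position in the overall feature space.
distribution_labels = {"titration_speed": 0, "color_change": 1}

# Step 13321a: the first statistical value (here the mean) of each cluster
# becomes that cluster's description variable.
description_vars = {name: statistics.mean(vals) for name, vals in clusters.items()}

# Step 13321b: feature integration ordered by distribution label yields the
# input-type experiment item convolution feature for the feedback-type item.
convolution_feature = [description_vars[name]
                       for name in sorted(distribution_labels,
                                          key=distribution_labels.get)]
print(convolution_feature)
```

Ordering by distribution label keeps the integrated feature vector consistent across samples, which is what allows downstream networks to compare features position by position.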
Under some exemplary design considerations, the convolution operation is performed in step 134 based on the image description feature cluster corresponding to the graphic operation behavior description feature of each of the second GUI experiment teaching operation behaviors, so as to obtain the feedback type experiment item convolution feature of the target dynamic simulation experiment item, which includes steps 1341-1343.
Step 1341, obtaining a fourth GUI experiment teaching operation behavior corresponding to each input experiment item in at least one second GUI experiment teaching operation behavior.
Step 1342, for each input-type experiment item, performing convolution operation based on an image description feature cluster corresponding to a graphic operation behavior description feature of a fourth GUI experiment teaching operation behavior corresponding to the input-type experiment item, to obtain a feedback-type experiment item convolution feature of the target dynamic simulation experiment item for the input-type experiment item.
And 1343, performing splicing operation on the feedback type experiment item convolution characteristics of each input type experiment item for the target dynamic simulation experiment item to obtain the feedback type experiment item convolution characteristics of the target dynamic simulation experiment item.
In the embodiment of the invention, the fourth GUI experiment teaching operation behavior refers to the operation behavior, among the at least one second GUI experiment teaching operation behavior, corresponding to each input-type experiment item. That is, after receiving feedback from the system, the user performs an adjustment or modification operation on the input-type experiment item. The feedback-type experiment item convolution feature for the input-type experiment item is a feature obtained by performing a convolution operation based on the image description feature cluster corresponding to the graphic operation behavior description feature of the fourth GUI experiment teaching operation behavior. This feature reflects how the user adjusts the input-type experiment item after feedback is obtained.
Steps 1341-1343 will be described below with reference to a chemical titration experiment.
In step 1341, the system first obtains at least one fourth GUI experiment teaching operation behavior, such as the user adjusting the titration speed based on this feedback after observing a color change.
Then, in step 1342, the system performs a convolution operation for each input-type experiment item (e.g., titration speed) based on the graphical operational behavior descriptive feature of the fourth GUI experimental teaching operational behavior (e.g., adjusting titration speed), resulting in a feedback-type experiment item convolution feature for that input-type experiment item.
Finally, in step 1343, the system performs a stitching operation on the feedback type experiment item convolution characteristics for each input type experiment item, to obtain feedback type experiment item convolution characteristics of the target dynamic simulation experiment item.
By considering the adjustment operation of the user on the feedback type experiment item after feedback is obtained, the embodiment of the invention can more comprehensively and deeply understand the operation behaviors of the user. This not only helps the system to more accurately simulate and predict the experimental process, but also helps to improve the interpretability of the experimental results. Meanwhile, through splicing operation, the system can obtain comprehensive characteristics containing a plurality of input experimental project information, which is very useful for understanding complex experimental processes.
In some optional embodiments, the convolution operation performed in step 1342, based on the image description feature cluster corresponding to the graphic operation behavior description feature of the fourth GUI experiment teaching operation behavior of the input-type experiment item, to obtain the feedback-type experiment item convolution feature of the target dynamic simulation experiment item for the input-type experiment item, includes step 13421.
Step 13421: the following steps 13421a-13421b are performed for each image description feature cluster obtained by the feature induction and arrangement.
Step 13421a, obtaining a second statistical value of the graphic operation behavior description features that belong to the image description feature cluster in the at least one fourth GUI experiment teaching operation behavior, and taking the second statistical value of each image description feature cluster as the description variable corresponding to that image description feature cluster.
And step 13421b, based on the distribution labels corresponding to each image description feature cluster, performing feature integration on the description variables of a plurality of image description feature clusters to obtain the feedback type experiment item convolution feature of the target dynamic simulation experiment item aiming at the input type experiment item.
In the embodiment of the invention, the second statistical value is the statistical value of the graphic operation behavior description characteristic belonging to the image description characteristic cluster in the acquired at least one fourth GUI experiment teaching operation behavior. It may reflect some statistical information of the data inside the feature cluster, such as average, maximum, minimum, variance, etc.
Steps 13421a-13421b will be described below with reference to a chemical titration experiment.
In step 13421a, the system first obtains a second statistical value of the graphical operational behavior description features belonging to the image description feature cluster in at least one fourth GUI experimental teaching operational behavior. For example, if the fourth GUI experiment teaches that the operational behavior includes adjusting the titration speed and observing the color change, then the system may calculate the average titration speed and the degree of color change for both operational behaviors as the second statistic. The system then uses the second statistical value of each image description feature cluster as a description variable of the corresponding image description feature cluster.
Next, in step 13421b, the system performs feature integration on the description variables of the plurality of image description feature clusters based on the distribution labels corresponding to each image description feature cluster, to obtain the feedback type experiment item convolution feature of the target dynamic simulation experiment item for the input type experiment item.
By introducing the second statistical value and the description variable, the embodiment of the invention can more effectively extract and utilize the information of the graphic operation behavior description features, thereby improving the quality and practicability of the convolution feature. Meanwhile, by considering the distribution labels, the system can better understand and process the relations among different image description feature clusters, so that the effect of feature integration is improved. These all help to improve the system's ability to understand and simulate the user's operation behavior, and improve the accuracy and interpretability of experimental results.
In some alternative embodiments, the method further comprises steps 210-250.
Step 210, obtaining graphic operation behavior description feature cases of a plurality of GUI experiment teaching operation behavior cases of the target dynamic simulation experiment project cases.
And 220, carrying out feature induction and arrangement on each graphic operation behavior description feature case to obtain an image description feature cluster corresponding to each graphic operation behavior description feature case.
And 230, performing convolution operation on the image description feature clusters corresponding to the graphic operation behavior description feature cases to obtain a dynamic simulation experiment item convolution feature case of the target dynamic simulation experiment item.
And 240, carrying out experimental effect conclusion judgment processing on the dynamic simulation experiment project convolution feature cases through a dynamic simulation experiment project identification network to obtain conclusion viewpoint hit scores of the target dynamic simulation experiment project case belonging to each initial experimental effect conclusion viewpoint.
Step 250, determining a first network cost based on the difference between the conclusion viewpoint hit score of each initial experimental effect conclusion viewpoint of the target dynamic simulation experimental project case and the conclusion viewpoint priori score of each initial experimental effect conclusion viewpoint of the target dynamic simulation experimental project case, and optimizing the dynamic simulation experimental project recognition network based on the first network cost.
In the embodiment of the invention, the graphic operation behavior description feature cases are the graphic operation behavior description features of a plurality of GUI experiment teaching operation behaviors in the acquired target dynamic simulation experiment project case. Each case can be considered a training sample. The dynamic simulation experiment item convolution feature cases are feature cases obtained by performing a convolution operation on the image description feature clusters corresponding to the graphic operation behavior description feature cases. The conclusion viewpoint hit score is a result obtained by carrying out experimental effect conclusion judgment processing on the dynamic simulation experiment item convolution feature cases through the dynamic simulation experiment item identification network; it reflects the likelihood that the case belongs to each initial experimental effect conclusion viewpoint. The conclusion viewpoint priori score is the likelihood, based on a priori knowledge or experience and without any additional information, that the case belongs to each initial experimental effect conclusion viewpoint. The first network cost is a cost determined based on the difference between the conclusion viewpoint hit score and the conclusion viewpoint priori score. The cost is used for optimizing the dynamic simulation experiment item identification network, so that the experimental effect conclusion can be judged more accurately.
Assume that there is a case set of dynamic simulated titration experiments, which includes various GUI experiment teaching operation behavior cases, such as fast titration, slow titration, interrupted titration, etc. Each case records the operation behavior of the user and the resulting experimental results, such as color change, pH value change, etc., in the experimental process.
In step 210, the system obtains graphical operational behavior descriptive feature cases of these operational behaviors, such as the speed of titration, the number of titrations, etc.
Next, in step 220, the system performs feature induction and arrangement on the graphic operation behavior description feature cases, grouping similar features into the same image description feature cluster; for example, all "fast titration" cases may be classified into one feature cluster.
Then, in step 230, the system performs a convolution operation on each image description feature cluster, thereby extracting deeper feature information, and obtaining a dynamic simulation experiment item convolution feature case of the target dynamic simulation experiment item. For example, by a convolution operation, the system may find that a fast titration will generally result in a faster color change.
In step 240, the system performs experimental effect conclusion judgment processing on the convolution feature cases through the dynamic simulation experiment item identification network, so as to obtain the conclusion viewpoint hit scores of the target dynamic simulation experiment project case belonging to each initial experimental effect conclusion viewpoint. For example, the system may predict whether the experimental results will be as expected in the case of a rapid titration.
Finally, in step 250, the system determines a first network cost based on the difference between the conclusion viewpoint hit score and the conclusion viewpoint priori score, and optimizes the dynamic simulation experiment item identification network based on the first network cost. For example, if the predicted conclusion viewpoint hit score differs widely from the actual result, the network cost will be high, and the system will need to adjust the network parameters to reduce this difference.
The steps are helpful for improving the precision and efficiency of the simulation experiment, extracting useful information from the operation behaviors of the user and improving the teaching method.
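The cost computation of steps 240-250 can be sketched as below. The patent does not name a specific cost function, so cross-entropy between the conclusion viewpoint hit scores and the priori scores is used here purely as an assumed stand-in for the first network cost; the score values are illustrative.

```python
import math

def first_network_cost(hit_scores, prior_scores):
    # Cost based on the difference between the network's conclusion viewpoint
    # hit scores and the a priori scores (cross-entropy chosen as an example).
    eps = 1e-12  # numerical guard against log(0)
    return -sum(p * math.log(h + eps) for p, h in zip(prior_scores, hit_scores))

# Assumed scores over three initial experimental effect conclusion viewpoints.
prior = [1.0, 0.0, 0.0]    # a priori: the first viewpoint holds
good = [0.9, 0.05, 0.05]   # network nearly agrees -> low cost
bad = [0.2, 0.5, 0.3]      # network disagrees -> high cost, keep optimizing

print(first_network_cost(good, prior) < first_network_cost(bad, prior))  # True
```

A higher cost signals a larger gap between hit scores and priori scores, which is exactly the condition under which step 250 adjusts the network parameters.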
In other possible embodiments, the method further comprises steps 310-330.
And step 310, obtaining the contact information between the conclusion description vector and the image description feature cluster.
Step 320, obtaining a confidence coefficient of each image description feature cluster from the dynamic simulation experiment item convolution feature of the target dynamic simulation experiment item.
And 330, when the confidence coefficient of the image description feature cluster exceeds a confidence coefficient threshold, using the conclusion description vector corresponding to the image description feature cluster as a viewpoint support vector for matching the target dynamic simulation experiment item with any initial experiment effect conclusion viewpoint.
In the embodiment of the present invention, the conclusion description vector is a vector representing an experimental result or conclusion, which reflects the characteristics or properties of the experimental result. The confidence coefficient is a value for measuring the importance or credibility of an image description feature cluster in the convolution feature of the dynamic simulation experiment item. The confidence coefficient threshold is a predetermined value that determines which image description feature clusters have a confidence coefficient sufficiently high to be considered valid or trusted. The viewpoint support vector is a vector formed by the conclusion description vectors corresponding to the image description feature clusters and is used for representing the degree of matching between the target dynamic simulation experiment item and any initial experimental effect conclusion viewpoint.
Steps 310-330 are illustrated by way of example in a chemical titration experiment.
In step 310, the system first obtains contact information between the conclusion description vector and the image description feature cluster. For example, if a fast titration in historical data would typically result in a faster color change, such contact information might be to relate the image description feature cluster of "fast titration" to the conclusion description vector of "fast color change".
Next, in step 320, the system obtains confidence coefficients for each image description feature cluster from the dynamic simulation experiment item convolution features of the target dynamic simulation experiment item. For example, if a user employs a fast titration in a particular titration experiment, the confidence coefficient of the image describing the feature cluster may be relatively high.
Then, in step 330, when the confidence coefficient of the image description feature cluster exceeds the confidence coefficient threshold, the system uses the conclusion description vector corresponding to the image description feature cluster as the viewpoint support vector for matching the target dynamic simulation experiment item with any initial experimental effect conclusion viewpoint. For example, if the confidence coefficient of "fast titration" exceeds a preset threshold, the conclusion description vector of "fast color change" is regarded as a view support vector.
By considering the confidence coefficient and the conclusion description vector, the embodiment of the invention can more accurately judge the matching degree of the target dynamic simulation experiment item and the initial experiment effect conclusion viewpoint, thereby improving the accuracy and the interpretability of the experiment result. Meanwhile, by introducing a confidence coefficient threshold, the system can also effectively filter out unimportant or unreliable information, and the judgment accuracy is further improved.
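The threshold filtering of steps 310-330 might look like the sketch below. The cluster-to-conclusion links, confidence values, and threshold are all illustrative assumptions.

```python
# Assumed contact information linking image description feature clusters to
# conclusion description vectors, with confidence coefficients read from the
# dynamic simulation experiment item convolution feature.
cluster_to_conclusion = {
    "fast_titration": "fast_color_change",
    "slow_titration": "slow_color_change",
}
confidence = {"fast_titration": 0.85, "slow_titration": 0.30}
CONFIDENCE_THRESHOLD = 0.6  # assumed preset value

# Step 330: only clusters whose confidence coefficient exceeds the threshold
# contribute their conclusion description vector as a viewpoint support vector.
support_vectors = [cluster_to_conclusion[name]
                   for name, coeff in confidence.items()
                   if coeff > CONFIDENCE_THRESHOLD]
print(support_vectors)  # ['fast_color_change']
```

Raising the threshold trades recall for precision: fewer clusters qualify, but the surviving support vectors are more trustworthy.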
In other possible embodiments, the method further comprises steps 410-430 before obtaining the contact information between the conclusion description vector and the image description feature cluster described in step 310.
Step 410, obtaining graphic operation behavior description feature cases of a plurality of GUI experiment teaching operation behavior cases.
And step 420, carrying out feature induction and arrangement on each graphic operation behavior description feature case to obtain an image description feature cluster corresponding to each graphic operation behavior description feature case, and carrying out simulation experiment analysis processing on each graphic operation behavior description feature case through a simulation experiment analysis network to obtain a conclusion description vector corresponding to each graphic operation behavior description feature case.
Step 430, acquiring the GUI experiment teaching operation behavior cases whose real-time periods are associated with the first conclusion description vector as original GUI experiment teaching operation behavior cases, and, for each original GUI experiment teaching operation behavior case, configuring a new conclusion description vector for the image description feature cluster corresponding to the original GUI experiment teaching operation behavior when that case meets a first requirement.
In the embodiment of the invention, the simulation experiment analysis network is a network dedicated to analyzing simulation experiments. It can generate corresponding conclusion description vectors according to the graphic operation behavior description feature cases. A real-time period generally refers to a point in time or period of time during an experiment. The first conclusion description vector is a conclusion description vector generated by the simulation experiment analysis network and reflects the characteristics or attributes of the experimental result. The original GUI experiment teaching operation behavior cases are the GUI experiment teaching operation behavior cases whose real-time periods are associated with the first conclusion description vector. The first requirement is a preset condition or standard; only the original GUI experiment teaching operation behavior cases meeting the requirement are assigned new conclusion description vectors.
Steps 410-430 are illustrated by way of example of a chemical titration experiment.
In step 410, the system first obtains graphical operational behavior descriptive feature cases of a plurality of GUI experimental teaching operational behavior cases, such as operational behaviors for a user to adjust titration speed and observe color changes.
Then, in step 420, the system performs feature induction and arrangement on each graphic operation behavior description feature case to obtain the image description feature cluster corresponding to each graphic operation behavior description feature case, and performs simulation experiment analysis processing on each graphic operation behavior description feature case through the simulation experiment analysis network to obtain the conclusion description vector corresponding to each graphic operation behavior description feature case. For example, the system may predict whether the experimental results will be as expected in the case of a rapid titration.
Next, in step 430, the system obtains the GUI experiment teaching operation behavior cases whose real-time periods are associated with the first conclusion description vector as original GUI experiment teaching operation behavior cases, such as a titration operation being performed by the user in a real-time period. Then, when an original GUI experiment teaching operation behavior case meets the first requirement, for example, the titration operation is completed, the system configures a new conclusion description vector for the image description feature cluster corresponding to that original GUI experiment teaching operation behavior.
In this way, the system can better perceive and record changes during the course of the experiment, thereby providing more accurate simulation and prediction. In addition, by using the simulation experiment analysis network, conclusion description vectors can be generated according to the graphic operation behavior description feature cases, so that a deeper understanding of the experimental results is obtained. Meanwhile, this also helps to improve the accuracy and interpretability of the experimental results.
In alternative design considerations, the method further includes steps 510-530.
Step 510, configuring the new conclusion description vector for the original GUI experiment teaching operation behavior.
Step 520, for each second conclusion description vector, determining the target image description feature cluster corresponding to the GUI experiment teaching operation behavior case corresponding to the second conclusion description vector, screening out, from the GUI experiment teaching operation behavior cases belonging to the target image description feature cluster, those whose real-time periods are associated with the first conclusion description vector as GUI experiment teaching operation behavior cases to be processed, and binding the GUI experiment teaching operation behavior cases to be processed with the second conclusion description vector.
And 530, debugging the simulation experiment analysis network based on the contact information of the GUI experiment teaching operation behavior case and the conclusion description vector.
In the embodiment of the present invention, the second conclusion description vector is a vector representing an experimental result or conclusion, which reflects the characteristics or properties of the experimental result and may be different from the first conclusion description vector. The target image description feature cluster is the image description feature cluster corresponding to the GUI experiment teaching operation behavior case corresponding to the second conclusion description vector. A GUI experiment teaching operation behavior case to be processed belongs to the target image description feature cluster in a real-time period, and is associated with the first conclusion description vector.
Steps 510-530 are illustrated by way of example of a chemical titration experiment.
In step 510, the system first configures a new conclusion description vector for the original GUI experimental teaching operational behavior. For example, if the original operation is a fast titration, then the new conclusion description vector may be "fast color change".
Next, in step 520, for each second conclusion description vector, the system determines the target image description feature cluster corresponding to the GUI experiment teaching operation behavior case corresponding to that conclusion description vector, and screens out the GUI experiment teaching operation behavior cases that belong to the feature cluster and whose real-time periods are associated with the first conclusion description vector, as GUI experiment teaching operation behavior cases to be processed. For example, if the second conclusion description vector is "slow color change," the system may select all operational behavior cases related to slow titration.
Finally, in step 530, the system debugs the simulation experiment parsing network based on the contact information of the GUI experiment teaching operation behavior case and the conclusion description vector. For example, the system may adjust network parameters based on new training data to improve accuracy of predictions for similar situations.
In this way, the system can better perceive and record changes during the course of the experiment, thereby providing more accurate simulation and prediction. Meanwhile, by using the simulation experiment analysis network, conclusion description vectors can be generated according to the graphic operation behavior description feature cases, so that the experimental results are more deeply understood. In addition, this also helps to improve the accuracy and interpretability of the experimental results.
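The debugging of the simulation experiment analysis network in step 530 can be sketched with a toy gradient-descent loop over the case-to-vector bindings produced in step 520. The linear model, encodings, and learning rate are illustrative assumptions; the patent does not specify the network's form.

```python
# Assumed training pairs: encoded GUI experiment teaching operation behavior
# cases bound to encoded conclusion description vectors (step 520's output).
pairs = [
    ([2.8, 1.0], 1.0),  # fast-titration case -> "fast color change" (encoded 1.0)
    ([0.9, 0.2], 0.0),  # slow-titration case -> "slow color change" (encoded 0.0)
]

weights = [0.0, 0.0]
learning_rate = 0.1
for _ in range(200):  # step 530: debug (fine-tune) the network parameters
    for features, target in pairs:
        prediction = sum(w * x for w, x in zip(weights, features))
        error = prediction - target
        weights = [w - learning_rate * error * x for w, x in zip(weights, features)]

fast_pred = sum(w * x for w, x in zip(weights, [2.8, 1.0]))
slow_pred = sum(w * x for w, x in zip(weights, [0.9, 0.2]))
print(fast_pred > slow_pred)  # the debugged network separates the two cases
```

The binding relation supplies the supervision signal: each to-be-processed case inherits its second conclusion description vector as the training target.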
The method further includes steps 610-620, under some possible design considerations.
Step 610, acquiring initial image description feature clusters of all original GUI experiment teaching operation behavior cases, and determining a first statistical value of the original GUI experiment teaching operation behavior cases corresponding to each initial image description feature cluster.
And 620, when a first statistical value corresponding to the initial image description feature cluster corresponding to the original GUI experiment teaching operation behavior case exceeds a first statistical value threshold, determining that the original GUI experiment teaching operation behavior case meets the first requirement.
In the embodiment of the invention, the initial image description feature clusters are image description feature clusters corresponding to all original GUI experiment teaching operation behavior cases. The first statistical value is a certain statistical value of the original GUI experiment teaching operation behavior case corresponding to each initial image description feature cluster. It may represent a measure of a certain characteristic or property, such as frequency, average, etc. The first statistical threshold is a predetermined value that is used to determine which of the first statistical values of the original GUI experimental teaching operational behavior cases are sufficiently high to be considered as meeting the first requirement.
Steps 610-620 are illustrated by way of example of a chemical titration experiment.
In step 610, the system first obtains initial image description feature clusters for all of the original GUI experiment teaching operational behavior cases, and determines a first statistical value of the original GUI experiment teaching operational behavior cases corresponding to each of the initial image description feature clusters. For example, if the original operation is a fast titration, the initial image description feature cluster may be all operational behavior cases related to the fast titration, and the first statistical value may be the average titration speed of those cases.
Next, in step 620, when the first statistical value corresponding to the initial image description feature cluster corresponding to the original GUI experiment teaching operation behavior case exceeds the first statistical value threshold, the system determines that the original GUI experiment teaching operation behavior case meets the first requirement. For example, if the average titration speed exceeds a preset threshold, the system may consider these fast titration operational behavior cases to meet the first requirement.
By considering the initial image description feature cluster and the first statistical value, the embodiment of the invention can more accurately judge which original GUI experiment teaching operation behavior cases meet the first requirement, thereby more accurately generating the conclusion description vector. Meanwhile, by introducing a first statistical value threshold, the system can also effectively filter out information which does not meet the requirement, and the judgment accuracy is further improved.
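The first-requirement check of steps 610-620 might be sketched as below, using the mean as the first statistical value. The cluster contents and the threshold are illustrative assumptions.

```python
import statistics

# Assumed initial image description feature clusters of original GUI experiment
# teaching operation behavior cases (values: titration speeds in mL/min).
initial_clusters = {
    "fast_titration": [4.5, 5.0, 4.8],
    "slow_titration": [0.8, 1.1, 0.9],
}
FIRST_STAT_THRESHOLD = 3.0  # assumed first statistical value threshold

# Steps 610-620: a case meets the first requirement when the first statistical
# value (here the mean) of its initial cluster exceeds the threshold.
meets_first_requirement = {
    name: statistics.mean(speeds) > FIRST_STAT_THRESHOLD
    for name, speeds in initial_clusters.items()
}
print(meets_first_requirement)
```

Only cases in clusters that pass this check go on to receive new conclusion description vectors in step 430.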
Under some possible design considerations, the method further includes steps 710-730.
Step 710: obtaining a conclusion thermodynamic diagram vector of the target dynamic simulation experiment item.
Step 720: performing feature integration on the conclusion viewpoint hit score of the target dynamic simulation experiment item and the conclusion thermodynamic diagram vector to obtain a global conclusion output vector.
Step 730: performing experimental effect conclusion discrimination processing on the global conclusion output vector through an experimental project evaluation network to obtain the experimental operation quality score of the target dynamic simulation experiment item.
In embodiments of the present invention, the conclusion thermodynamic diagram vector is a vector representing experimental results or conclusions; it reflects the importance or heat of each possible experimental result or conclusion. Feature integration is the process of fusing multiple features (e.g., the conclusion viewpoint hit score and the conclusion thermodynamic diagram vector) into a single global, comprehensive feature vector. The global conclusion output vector is the vector obtained by feature integration and contains the information of all relevant features. The experimental project evaluation network is a network dedicated to evaluating the global conclusion output vector and generating the experimental operation quality score. The experimental operation quality score is a value produced by the experimental project evaluation network that represents the quality or effect of an experimental operation.
Steps 710-730 are illustrated below using a chemical titration experiment as an example.
In step 710, the system first obtains a conclusion thermodynamic diagram vector for the target dynamic simulation experiment item. For example, the vector may represent the extent to which various titration rates have an effect on the outcome of the experiment in a titration experiment.
Then, in step 720, the system performs feature integration on the conclusion viewpoint hit score of the target dynamic simulation experiment item and the conclusion thermodynamic diagram vector to obtain a global conclusion output vector. For example, if the hit score of a fast titration is high and also shows a large impact in the thermodynamic diagram vector, then the importance of the fast titration may be further raised in the global conclusion output vector.
Finally, in step 730, the system performs experimental effect conclusion discrimination processing on the global conclusion output vector through the experimental project evaluation network to obtain the experimental operation quality score of the target dynamic simulation experiment item. For example, if the global conclusion output vector indicates that the user's titration operation is very accurate, the experimental operation quality score may be high.
By considering the conclusion thermodynamic diagram vector and the conclusion viewpoint hit score, the embodiment of the invention can more comprehensively consider all relevant information, thereby generating a more accurate global conclusion output vector. Meanwhile, by using the experimental project evaluation network, the experimental operation quality scores can be generated according to the global conclusion output vector, so that the quality or effect of experimental operation can be better evaluated.
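Steps 710-730 can be sketched as follows. This is only an illustration under stated assumptions: the patent does not fix the feature-integration operator or the architecture of the experimental project evaluation network, so concatenation and a single sigmoid-activated linear layer with placeholder (untrained) weights are used here as hypothetical stand-ins.

```python
import numpy as np

def global_conclusion_vector(hit_scores, heatmap_vector):
    """Feature integration (step 720): concatenate the conclusion viewpoint
    hit scores with the conclusion thermodynamic diagram vector. Concatenation
    is one plausible fusion choice, not the patent's stated operator."""
    return np.concatenate([hit_scores, heatmap_vector])

def quality_score(global_vector, weights, bias=0.0):
    """Stand-in for the experimental project evaluation network (step 730):
    a single linear layer squashed to (0, 1) with a sigmoid."""
    z = float(np.dot(weights, global_vector)) + bias
    return 1.0 / (1.0 + np.exp(-z))

hit_scores = np.array([0.9, 0.2])      # e.g. fast vs. slow titration viewpoints
heatmap = np.array([0.7, 0.1, 0.05])   # hypothetical conclusion heat values
g = global_conclusion_vector(hit_scores, heatmap)
w = np.ones_like(g) * 0.5              # untrained placeholder weights
print(round(quality_score(g, w), 3))   # ≈ 0.726 for these placeholder numbers
```

In a real system the weights would be learned, and the evaluation network would likely be deeper; the point here is only the data flow from hit scores and heat vector, through integration, to a scalar quality score.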
Further, there is also provided a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the above-described method.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus and method embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. An image recognition tracking method based on an interactive intelligent experiment teaching system, characterized in that the method is applied to an image recognition tracking system and comprises the following steps:
acquiring graphic operation behavior description characteristics of a plurality of GUI experiment teaching operation behaviors of a target dynamic simulation experiment item;
performing feature induction and arrangement on each graphic operation behavior description feature to obtain an image description feature cluster corresponding to each graphic operation behavior description feature;
performing convolution operation on image description feature clusters corresponding to the graphic operation behavior description features to obtain dynamic simulation experiment item convolution features of the target dynamic simulation experiment item;
performing experimental effect conclusion discrimination processing on the convolution characteristics of the dynamic simulation experimental project to obtain conclusion viewpoint hit scores of the target dynamic simulation experimental project matched with each initial experimental effect conclusion viewpoint;
and when the conclusion point hit score corresponding to any initial experimental effect conclusion point is larger than the conclusion point hit score threshold, determining the any initial experimental effect conclusion point as the current experimental effect conclusion point of the target dynamic simulation experiment item.
2. The method of claim 1, wherein the obtaining graphical operational behavior descriptive characteristics of the plurality of GUI experimental teaching operational behaviors of the target dynamic simulation experiment item comprises:
the following steps are implemented for each GUI experiment teaching operation behavior:
performing feature extraction operation on the gesture input command in the GUI experiment teaching operation behavior to obtain gesture input command image features;
performing interval numerical mapping on the number of tracked behavior nodes in the GUI experiment teaching operation behavior to obtain behavior node quantization characteristics;
and carrying out feature integration on the gesture input command image features and the behavior node quantization features to obtain graphic operation behavior description features of the GUI experiment teaching operation behaviors.
3. The method of claim 1, wherein the performing a convolution operation on the image description feature clusters corresponding to the plurality of graphic operation behavior description features to obtain the dynamic simulation experiment item convolution feature of the target dynamic simulation experiment item comprises:
acquiring a first GUI experiment teaching operation behavior of which the target dynamic simulation experiment item is an input experiment item from a plurality of GUI experiment teaching operation behaviors;
acquiring a second GUI experiment teaching operation behavior of which the target dynamic simulation experiment item is a feedback experiment item from a plurality of GUI experiment teaching operation behaviors;
performing convolution operation based on an image description feature cluster corresponding to the graphic operation behavior description feature of each first GUI experiment teaching operation behavior to obtain input experiment item convolution features of the target dynamic simulation experiment item;
performing convolution operation based on an image description feature cluster corresponding to the graphic operation behavior description feature of each second GUI experiment teaching operation behavior to obtain feedback type experiment item convolution features of the target dynamic simulation experiment item;
and carrying out feature integration on the time sequence features of the plurality of GUI experiment teaching operation behaviors, the input type experiment item convolution features of the target dynamic simulation experiment item and the feedback type experiment item convolution features of the target dynamic simulation experiment item to obtain the dynamic simulation experiment item convolution features of the target dynamic simulation experiment item.
4. The method of claim 3, wherein the performing a convolution operation based on the image description feature clusters corresponding to the graphic operation behavior description features of each of the first GUI experiment teaching operation behaviors to obtain the input experiment item convolution feature of the target dynamic simulation experiment item comprises:
acquiring at least one third GUI experiment teaching operation behavior corresponding to each feedback experiment item in the first GUI experiment teaching operation behaviors;
performing convolution operation on each feedback type experiment item based on an image description feature cluster corresponding to a graphic operation behavior description feature of a third GUI experiment teaching operation behavior corresponding to the feedback type experiment item to obtain an input type experiment item convolution feature of the target dynamic simulation experiment item for the feedback type experiment item;
and performing splicing operation on the target dynamic simulation experiment item aiming at the input experiment item convolution characteristics of each feedback experiment item to obtain the input experiment item convolution characteristics of the target dynamic simulation experiment item.
5. The method of claim 4, wherein the performing a convolution operation based on the image description feature cluster corresponding to the graphic operation behavior description feature corresponding to the third GUI experiment teaching operation behavior of the feedback experiment item to obtain the input experiment item convolution feature of the target dynamic simulation experiment item for the feedback experiment item comprises:
for each image description feature cluster obtained by the feature induction and arrangement:
acquiring a first statistical value of a third GUI experiment teaching operation behavior of which at least one graphic operation behavior description feature belongs to the image description feature cluster in the third GUI experiment teaching operation behavior, and taking the first statistical value of each image description feature cluster as a description variable corresponding to the image description feature cluster;
and based on the distribution labels corresponding to each image description feature cluster, carrying out feature integration on the description variables of a plurality of image description feature clusters to obtain the input type experiment item convolution characteristics of the target dynamic simulation experiment item aiming at the feedback type experiment item.
6. The method of claim 3, wherein the performing a convolution operation based on the image description feature clusters corresponding to the graphic operation behavior description features of each of the second GUI experiment teaching operation behaviors to obtain the feedback type experiment item convolution feature of the target dynamic simulation experiment item comprises:
acquiring fourth GUI experiment teaching operation behaviors corresponding to each input experiment item in at least one second GUI experiment teaching operation behavior;
performing convolution operation on each input type experiment item based on an image description feature cluster corresponding to a graphic operation behavior description feature of a fourth GUI experiment teaching operation behavior of the corresponding input type experiment item to obtain a feedback type experiment item convolution feature of the target dynamic simulation experiment item for the input type experiment item;
performing splicing operation on the target dynamic simulation experiment item aiming at the feedback experiment item convolution characteristics of each input experiment item to obtain the feedback experiment item convolution characteristics of the target dynamic simulation experiment item;
the performing convolution operation on the image description feature cluster corresponding to the fourth GUI experiment teaching operation behavior of the input-type experiment item to obtain a feedback-type experiment item convolution feature of the target dynamic simulation experiment item for the input-type experiment item, including:
for each image description feature cluster obtained by the feature induction and arrangement:
acquiring a second statistical value of a fourth GUI experiment teaching operation behavior of which at least one graphic operation behavior description feature belongs to the image description feature cluster in the fourth GUI experiment teaching operation behavior, and taking the second statistical value of each image description feature cluster as a description variable corresponding to the image description feature cluster;
and based on the distribution labels corresponding to each image description feature cluster, carrying out feature integration on the description variables of a plurality of image description feature clusters to obtain the feedback type experiment item convolution characteristics of the target dynamic simulation experiment item aiming at the input type experiment item.
7. The method of claim 1, wherein the method further comprises:
acquiring graphic operation behavior description feature cases of a plurality of GUI experiment teaching operation behavior cases of a target dynamic simulation experiment project case;
carrying out feature induction and arrangement on each graphic operation behavior description feature case to obtain an image description feature cluster corresponding to each graphic operation behavior description feature case;
performing convolution operation on image description feature clusters corresponding to the graphic operation behavior description feature cases to obtain a dynamic simulation experiment item convolution feature case of the target dynamic simulation experiment item;
carrying out experimental effect conclusion judgment processing on the dynamic simulation experiment item convolution characteristic cases through a dynamic simulation experiment item identification network to obtain conclusion point hit scores of the target dynamic simulation experiment item cases belonging to each initial experimental effect conclusion point;
and determining a first network cost based on the difference between the conclusion viewpoint hit score of each initial experimental effect conclusion viewpoint of the target dynamic simulation experimental project case and the conclusion viewpoint priori score of each initial experimental effect conclusion viewpoint of the target dynamic simulation experimental project case, and optimizing the dynamic simulation experimental project identification network based on the first network cost.
8. The method of claim 1, wherein the method further comprises:
acquiring contact information between a conclusion description vector and the image description feature cluster;
acquiring a confidence coefficient of each image description feature cluster from the dynamic simulation experiment item convolution feature of the target dynamic simulation experiment item;
when the confidence coefficient of the image description feature cluster exceeds a confidence coefficient threshold, the conclusion description vector corresponding to the image description feature cluster is used as a viewpoint support vector for matching the target dynamic simulation experiment item with any initial experiment effect conclusion viewpoint;
wherein, before obtaining the contact information between the conclusion description vector and the image description feature cluster, the method further comprises:
acquiring graphic operation behavior description feature cases of a plurality of GUI experiment teaching operation behavior cases;
carrying out feature induction and arrangement on each graphic operation behavior description feature case to obtain an image description feature cluster corresponding to each graphic operation behavior description feature case, and carrying out simulation experiment analysis processing on each graphic operation behavior description feature case through a simulation experiment analysis network to obtain a conclusion description vector corresponding to each graphic operation behavior description feature case;
acquiring a GUI experiment teaching operation behavior case of which the real-time period is associated with a first conclusion description vector as an original GUI experiment teaching operation behavior case, and configuring a new conclusion description vector for an image description feature cluster corresponding to the original GUI experiment teaching operation behavior when the original GUI experiment teaching operation behavior case meets a first requirement aiming at each original GUI experiment teaching operation behavior case;
wherein the method further comprises: configuring the new conclusion description vector for the original GUI experiment teaching operation behavior; determining a target image description feature cluster corresponding to a GUI experiment teaching operation behavior case corresponding to each second conclusion description vector, screening the GUI experiment teaching operation behavior case which belongs to the target image description feature cluster and is associated with the first conclusion description vector in real time period as a GUI experiment teaching operation behavior case to be processed, and binding the GUI experiment teaching operation behavior case to be processed with the second conclusion description vector; debugging the simulation experiment analysis network based on the contact information of the GUI experiment teaching operation behavior case and the conclusion description vector;
wherein the method further comprises: acquiring initial image description feature clusters of all original GUI experiment teaching operation behavior cases, and determining a first statistical value of the original GUI experiment teaching operation behavior cases corresponding to each initial image description feature cluster; and when a first statistical value corresponding to the initial image description feature cluster corresponding to the original GUI experiment teaching operation behavior case exceeds a first statistical value threshold, determining that the original GUI experiment teaching operation behavior case meets the first requirement.
9. The method of claim 1, wherein the method further comprises:
obtaining a conclusion thermodynamic diagram vector of the target dynamic simulation experiment item;
feature integration is carried out on the conclusion viewpoint hit score of the target dynamic simulation experiment item and the conclusion thermodynamic diagram vector, so that a global conclusion output vector is obtained;
and carrying out experimental effect conclusion judgment processing on the global conclusion output vector through an experimental project evaluation network to obtain the experimental operation quality score of the target dynamic simulation experimental project.
10. An image recognition tracking system, comprising a processor and a memory; the processor is communicatively connected to the memory, the processor being configured to read a computer program from the memory and execute the computer program to implement the method of any of claims 1-9.
CN202311546162.0A 2023-11-20 2023-11-20 Image recognition tracking method based on interactive intelligent experiment teaching system Active CN117539367B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311546162.0A CN117539367B (en) 2023-11-20 2023-11-20 Image recognition tracking method based on interactive intelligent experiment teaching system

Publications (2)

Publication Number Publication Date
CN117539367A true CN117539367A (en) 2024-02-09
CN117539367B CN117539367B (en) 2024-04-12

Family

ID=89791372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311546162.0A Active CN117539367B (en) 2023-11-20 2023-11-20 Image recognition tracking method based on interactive intelligent experiment teaching system

Country Status (1)

Country Link
CN (1) CN117539367B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180098814A1 (en) * 2011-03-30 2018-04-12 Surgical Theater LLC Method and system for simulating surgical procedures
CN112138403A (en) * 2020-10-19 2020-12-29 腾讯科技(深圳)有限公司 Interactive behavior recognition method and device, storage medium and electronic equipment
CN115205727A (en) * 2022-05-31 2022-10-18 上海锡鼎智能科技有限公司 Experiment intelligent scoring method and system based on unsupervised learning
CN115762295A (en) * 2022-11-24 2023-03-07 天津大学 Intelligent experiment teaching platform based on embedded core MCU and AI chip
WO2023163376A1 (en) * 2022-02-25 2023-08-31 계명대학교 산학협력단 Virtual collaboration non-contact real-time remote experimental system
CN116682293A (en) * 2023-05-31 2023-09-01 东华大学 Experiment teaching system based on augmented reality and machine vision

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHI Lei; HAO Tinglei; LIN Zhuying: "Research on Interaction Modes of Virtual Experiment Teaching Systems Based on a 3D Engine", Distance Education in China, no. 08, 31 August 2010 (2010-08-31), pages 70 - 74 *

Also Published As

Publication number Publication date
CN117539367B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
CN101097564B (en) Parameter learning method, parameter learning apparatus, pattern classification method, and pattern classification apparatus
CN108229588B (en) Machine learning identification method based on deep learning
CN105303179A (en) Fingerprint identification method and fingerprint identification device
EP3596655B1 (en) Method and apparatus for analysing an image
CN111340233B (en) Training method and device of machine learning model, and sample processing method and device
CN112132014A (en) Target re-identification method and system based on non-supervised pyramid similarity learning
CN113449011A (en) Big data prediction-based information push updating method and big data prediction system
EP3745317A1 (en) Apparatus and method for analyzing time series data based on machine learning
CN111488939A (en) Model training method, classification method, device and equipment
CN112200862B (en) Training method of target detection model, target detection method and device
CN108345942B (en) Machine learning identification method based on embedded code learning
CN114238764A (en) Course recommendation method, device and equipment based on recurrent neural network
CN117539367B (en) Image recognition tracking method based on interactive intelligent experiment teaching system
CN113435459A (en) Rock component identification method, device, equipment and medium based on machine learning
CN112633341A (en) Interface testing method and device, computer equipment and storage medium
CN109934352B (en) Automatic evolution method of intelligent model
CN109743200B (en) Resource feature-based cloud computing platform computing task cost prediction method and system
CN111539390A (en) Small target image identification method, equipment and system based on Yolov3
CN108345943B (en) Machine learning identification method based on embedded coding and contrast learning
CN115690514A (en) Image recognition method and related equipment
CN112115996B (en) Image data processing method, device, equipment and storage medium
CN113468936A (en) Food material identification method, device and equipment
CN110716778A (en) Application compatibility testing method, device and system
CN113537262B (en) Data analysis method, device, equipment and readable storage medium
CN112348040B (en) Model training method, device and equipment

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant