CN115471216B - Data management method of intelligent laboratory management platform - Google Patents

Data management method of intelligent laboratory management platform

Info

Publication number
CN115471216B
CN115471216B (application number CN202211366399.6A)
Authority
CN
China
Prior art keywords
learner
feature
interest
equipment
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211366399.6A
Other languages
Chinese (zh)
Other versions
CN115471216A
Inventor
张秀明
蔡钦泉
汪之红
张丽军
翁佳楠
文启林
佘鑫东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Huikang Information Technology Co ltd
Shenzhen Sunyuan Technology Co ltd
Original Assignee
Shenzhen Huikang Information Technology Co ltd
Shenzhen Sunyuan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Huikang Information Technology Co ltd, Shenzhen Sunyuan Technology Co ltd
Priority to CN202211366399.6A
Publication of CN115471216A
Application granted
Publication of CN115471216B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management
    • G06Q10/103Workflow collaboration or project management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • G06Q50/265Personal security, identity or safety
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Multimedia (AREA)
  • Tourism & Hospitality (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • General Business, Economics & Management (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Game Theory and Decision Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Security & Cryptography (AREA)
  • Primary Health Care (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The application discloses a data management method for an intelligent laboratory management platform. It uses a deep-learning-based neural network model as a feature extractor to extract the dynamic features of the learner's experimental operation from an experimental operation surveillance video, and combines a classifier to classify these dynamics, thereby monitoring and intelligently analyzing the learner's experimental operation and avoiding laboratory accidents caused by misoperation.

Description

Data management method of intelligent laboratory management platform
Technical Field
The application relates to the technical field of laboratory management, in particular to a data management method of an intelligent laboratory management platform.
Background
Laboratory safety is a precondition for the normal operation of experimental teaching and learning. In traditional experimental teaching, most learners' erroneous operations can only be corrected, and safety accidents thereby prevented, through the careful observation of a human instructor. However, an instructor's attention is limited and cannot cover every experimental operation of every learner, which is a major cause of many laboratory safety accidents.
Therefore, an optimized data management scheme for an intelligent laboratory management platform is desired, one that can monitor and intelligently analyze the learner's experimental operations so as to avoid laboratory accidents caused by erroneous operation.
Disclosure of Invention
The present application is proposed to solve the above technical problems. The embodiment of the application provides a data management method for an intelligent laboratory management platform, which uses a deep-learning-based neural network model as a feature extractor to extract the dynamic features of the learner's experimental operation from an experimental operation monitoring video, and combines a classifier to classify these dynamics, thereby monitoring and intelligently analyzing the learner's experimental operation and avoiding laboratory accidents caused by misoperation.
According to an aspect of the present application, there is provided a data management method of an intelligent laboratory management platform, including:
acquiring an experiment operation monitoring video of a learner, which is acquired by a camera deployed in a laboratory;
extracting a plurality of operation monitoring key frames from the experiment operation monitoring video;
passing each operation monitoring key frame in the plurality of operation monitoring key frames through an equipment target detection network and a learner target detection network, respectively, to obtain a plurality of equipment regions of interest and a plurality of learner regions of interest;
converting each of the plurality of equipment regions of interest into an equipment interest feature vector through a first linear embedding layer to obtain a plurality of equipment interest feature vectors;
converting each of the plurality of learner regions of interest into a learner interest feature vector through a second linear embedding layer to obtain a plurality of learner interest feature vectors;
performing correlation coding on the plurality of equipment interest feature vectors and the plurality of learner interest feature vectors to obtain a plurality of co-operating feature matrices;
arranging the plurality of cooperative operation feature matrixes into a three-dimensional input tensor, and then obtaining a classification feature map by using a convolution neural network model of a three-dimensional convolution kernel;
performing feature distribution optimization on the classification feature map to obtain an optimized classification feature map; and
and passing the optimized classification characteristic diagram through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the experimental operation of the learner is normative or not.
In the data management method of the intelligent laboratory management platform, the extracting a plurality of operation monitoring key frames from the experiment operation monitoring video includes: extracting the plurality of operational monitoring key frames from the experimental operational monitoring video at a predetermined sampling frequency.
In the data management method of the intelligent laboratory management platform, the equipment target detection network and/or the learner target detection network are/is an anchor-window-free target detection network.
In the data management method of the intelligent laboratory management platform, the anchor-window-free target detection network is YOLOv1, FCOS, CenterNet, ExtremeNet, or RepPoints.
In the data management method of the intelligent laboratory management platform, the converting each equipment region of interest of the plurality of equipment regions of interest into an equipment interest feature vector through a first linear embedding layer to obtain a plurality of equipment interest feature vectors includes: the first linear embedding layer performs fully connected encoding on each of the plurality of equipment regions of interest using a learnable embedding matrix to obtain the plurality of equipment interest feature vectors.
In the data management method of the intelligent laboratory management platform, the converting each of the learner regions of interest into a learner interest feature vector through a second linear embedding layer to obtain a plurality of learner interest feature vectors includes: the second linear embedding layer performs fully connected encoding on each of the plurality of learner regions of interest using a learnable embedding matrix to obtain the plurality of learner interest feature vectors.
In the above data management method of the intelligent laboratory management platform, the performing associated encoding on the equipment interest feature vectors and the learner interest feature vectors to obtain a plurality of co-operating feature matrices includes: performing correlation coding on the equipment interest feature vector and the learner interest feature vector of the same operation monitoring key frame according to the following formula to obtain the cooperative operation feature matrix; wherein the formula is:
$M = V_l^{\top} \otimes V_e$
wherein $V_l^{\top}$ denotes the transposed vector of the learner interest feature vector corresponding to each of the plurality of operation monitoring key frames, $V_e$ denotes the equipment interest feature vector, $M$ denotes the co-operation feature matrix, and $\otimes$ denotes vector multiplication.
In the data management method of the intelligent laboratory management platform, the performing feature distribution optimization on the classification feature map to obtain an optimized classification feature map includes: performing feature distribution optimization on the classification feature map according to the following formula to obtain an optimized classification feature map; wherein the formula is:
[The optimization formula appears only as an image in the original publication.]
wherein $f$ denotes the feature value at a given position of the classification feature map, $f'$ denotes the feature value at the corresponding position of the optimized classification feature map, and $\log$ denotes the logarithm to base 2.
In the data management method of the intelligent laboratory management platform, passing the optimized classification feature map through a classifier to obtain a classification result, where the classification result is used to indicate whether the experimental operation of the learner is normative, includes: unfolding each optimized classification feature matrix of the optimized classification feature map into a classification feature vector along its row vectors or column vectors; performing fully connected encoding on the classification feature vector by using a fully connected layer of the classifier to obtain an encoded classification feature vector; and inputting the encoded classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
According to another aspect of the present application, there is provided a data management system of an intelligent laboratory management platform, including:
the monitoring acquisition module is used for acquiring an experiment operation monitoring video of the learner, which is acquired by a camera deployed in a laboratory;
the key frame extraction module is used for extracting a plurality of operation monitoring key frames from the experiment operation monitoring video;
the region-of-interest identification module is used for passing each operation monitoring key frame in the plurality of operation monitoring key frames through an equipment target detection network and a learner target detection network, respectively, to obtain a plurality of equipment regions of interest and a plurality of learner regions of interest;
the equipment interest feature vector construction module is used for converting each equipment interest region in the equipment interest regions into equipment interest feature vectors through a first linear embedding layer respectively so as to obtain a plurality of equipment interest feature vectors;
the learner interest feature vector construction module is used for converting each learner interest region in the plurality of learner interest regions into a learner interest feature vector through a second linear embedding layer so as to obtain a plurality of learner interest feature vectors;
the association module is used for performing association coding on the equipment interest feature vectors and the learner interest feature vectors to obtain a plurality of cooperative operation feature matrixes;
the classification characteristic diagram generating module is used for arranging the plurality of cooperative operation characteristic matrixes into a three-dimensional input tensor and then obtaining a classification characteristic diagram by using a convolution neural network model of a three-dimensional convolution kernel;
the optimization module is used for optimizing the feature distribution of the classification feature map to obtain an optimized classification feature map; and
and the result generation module is used for enabling the optimized classification characteristic graph to pass through a classifier to obtain a classification result, and the classification result is used for indicating whether the experimental operation of the learner is normative or not.
In the data management system of the intelligent laboratory management platform, the key frame extraction module is further configured to: extracting the plurality of operational monitoring key frames from the experimental operational monitoring video at a predetermined sampling frequency.
In the data management system of the intelligent laboratory management platform, the equipment target detection network and/or the learner target detection network are/is an anchor-window-free target detection network.
In the data management system of the intelligent laboratory management platform, the anchor-window-free target detection network is YOLOv1, FCOS, CenterNet, ExtremeNet, or RepPoints.
In the data management system of the intelligent laboratory management platform, the device interest feature vector constructing module includes: the first linear embedding layer fully-connected encodes each of the plurality of equipment regions of interest using a learnable embedding matrix to obtain the plurality of equipment feature vectors of interest.
In the data management system of the intelligent laboratory management platform, the learner interest feature vector construction module includes: the second linear embedding layer performs fully connected encoding on each of the plurality of learner regions of interest using a learnable embedding matrix to obtain the plurality of learner interest feature vectors.
In the data management system of the intelligent laboratory management platform, the association module is further configured to: performing correlation coding on the equipment interest feature vector and the learner interest feature vector of the same operation monitoring key frame according to the following formula to obtain the cooperative operation feature matrix; wherein the formula is:
$M = V_l^{\top} \otimes V_e$
wherein $V_l^{\top}$ denotes the transposed vector of the learner interest feature vector corresponding to each of the plurality of operation monitoring key frames, $V_e$ denotes the equipment interest feature vector, $M$ denotes the co-operation feature matrix, and $\otimes$ denotes vector multiplication.
In the data management system of the intelligent laboratory management platform, the optimization module is further configured to: performing feature distribution optimization on the classification feature map according to the following formula to obtain an optimized classification feature map; wherein the formula is:
[The optimization formula appears only as an image in the original publication.]
wherein $f$ denotes the feature value at a given position of the classification feature map, $f'$ denotes the feature value at the corresponding position of the optimized classification feature map, and $\log$ denotes the logarithm to base 2.
In the data management system of the intelligent laboratory management platform, the result generation module is further configured to: unfold each optimized classification feature matrix of the optimized classification feature map into a classification feature vector along its row vectors or column vectors; perform fully connected encoding on the classification feature vector by using a fully connected layer of the classifier to obtain an encoded classification feature vector; and input the encoded classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory having stored therein computer program instructions that, when executed by the processor, cause the processor to perform the data management method of the intelligent laboratory management platform as described above.
According to yet another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the data management method of the intelligent laboratory management platform as described above.
Compared with the prior art, the data management method of the intelligent laboratory management platform uses a deep-learning-based neural network model as a feature extractor to extract the dynamic features of the learner's experimental operation from the experimental operation monitoring video, and combines a classifier to classify these dynamics, thereby monitoring and intelligently analyzing the learner's experimental operation and avoiding laboratory accidents caused by misoperation.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a schematic view illustrating a scenario of a data management method of an intelligent laboratory management platform according to an embodiment of the present disclosure.
Fig. 2 is a flowchart illustrating a data management method of an intelligent laboratory management platform according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram illustrating a data management method of an intelligent laboratory management platform according to an embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating the optimized classification feature map being passed through a classifier to obtain a classification result in a data management method of an intelligent laboratory management platform according to an embodiment of the present disclosure.
Fig. 5 is a block diagram of a data management system of an intelligent laboratory management platform according to an embodiment of the present application.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the application
As described above, an optimized data management scheme for an intelligent laboratory management platform is expected, which can monitor and intelligently analyze the experimental operation of a learner so as to avoid the laboratory accident caused by the error operation.
In recent years, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, text signal processing, and the like. In addition, deep learning and neural networks also exhibit a level close to or even exceeding that of humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
The development of deep learning and neural networks provides a new solution for the data management of an intelligent laboratory management platform. Specifically, in the technical solution of the present application, the learner's experimental operation is monitored and intelligently analyzed to avoid laboratory accidents caused by misoperation. This is achieved by using a deep-learning-based neural network model as a feature extractor to extract the dynamic features of the learner's experimental operation from the experimental operation monitoring video, and by combining a classifier to classify those dynamics.
More specifically, a camera deployed in a laboratory is used to collect a monitoring video of the experimental operation of the learner. In order to more fully capture the experimental operation process of the learner, the camera is preferably arranged at the side part of the learner, and the limbs and the experimental equipment of the learner can be in the imaging visual field of the camera when the video is acquired. It should be appreciated that many consecutive frames in the entire sequence of experimental operations of the experimental operations surveillance video are repeated or similar, resulting in redundancy of information and increased subsequent model computation. In order to solve the problems, before the experimental operation monitoring video is input into a neural network model, the experimental operation monitoring video is sampled. For example, in one specific example of the present application, a plurality of operation monitoring key frames are extracted from the experimental operation monitoring video at a predetermined sampling frequency, where the predetermined sampling frequency is not a fixed value but a set value that can be adaptively adjusted based on an application scene.
It should be understood that, in the course of performing the experiment operation monitoring and analysis, the operation action of the learner and the state characteristics of the experiment equipment are emphasized, so that, in order to avoid the interference of other background information, each operation monitoring key frame in the operation monitoring key frames is respectively passed through the equipment target detection network and the learner target detection network to obtain a plurality of equipment interested areas and a plurality of learner interested areas.
Those skilled in the art will know that deep-learning-based target detection methods are divided into two categories, anchor-based and anchor-free, according to whether an anchor window is used in the network. Anchor-based methods include Fast R-CNN, RetinaNet, and the like, while anchor-free methods include CenterNet, ExtremeNet, RepPoints, and the like. Anchor-free methods overcome the drawbacks introduced by anchor windows, such as difficulty in recognizing targets with large scale variation, imbalance between positive and negative samples during training, and excessive memory consumption, and they are the current mainstream direction of development.
Accordingly, in the technical solution of the present application, the equipment target detection network and/or the learner target detection network is an anchor-free target detection network. More specifically, in the technical solution of the present application, the anchor-free target detection network is YOLOv1, FCOS, CenterNet, ExtremeNet, or RepPoints; the present application does not limit the type of anchor-free target detection network used.
The plurality of equipment regions of interest and the plurality of learner regions of interest are then coded in association to construct a simultaneous representation of the learner's actions and the state features of the experimental equipment. However, since the dimensions of an equipment region of interest and a learner region of interest differ, the two cannot be directly related mathematically at the data level. Therefore, in the technical solution of the present application, each of the plurality of equipment regions of interest is converted into an equipment interest feature vector through a first linear embedding layer to obtain a plurality of equipment interest feature vectors, and each of the plurality of learner regions of interest is converted into a learner interest feature vector through a second linear embedding layer to obtain a plurality of learner interest feature vectors. In this way, on the one hand, the scale difference between the equipment regions of interest and the learner regions of interest is unified through the linear embedding layers; on the other hand, the linear embedding layers encode the regions of interest using learnable embedding matrices to further extract the useful information they contain.
Then, the plurality of equipment interest feature vectors and the plurality of learner interest feature vectors are subjected to correlation coding to obtain a plurality of co-operation feature matrixes. Specifically, in the technical solution of the present application, a product between a transposed vector of the learner-interested feature vector and the equipment-interested feature vector is calculated to perform correlation coding on the transposed vector and the equipment-interested feature vector to obtain the plurality of co-operation feature matrices, where the co-operation feature matrices represent simultaneous representations between learner operation actions and state features of experimental equipment.
Then, the plurality of co-operation feature matrices are arranged into a three-dimensional input tensor, and a classification feature map is obtained through a convolution neural network model using a three-dimensional convolution kernel. The convolutional neural network utilizes a convolution kernel as a feature factor to capture the correlation feature of the simultaneous representation between the learner operation action and the state feature of the experimental equipment in time sequence. It should be noted that the three-dimensional convolution kernel has three dimensions: width x height x channel, wherein the channel dimension corresponds to the time dimension of the plurality of co-operating feature matrices, thus, when the three-dimensional convolution kernel is used for three-dimensional convolution coding, the correlation feature of simultaneous representation between the learner operation action and the state feature of the experimental equipment in time sequence can be captured. Then, the classification characteristic diagram can be used for obtaining a classification result for indicating whether the experimental operation of the learner is normative or not through a classifier. Therefore, the experimental operation of the learner is monitored and intelligently analyzed, so that laboratory accidents caused by wrong operation are avoided.
In particular, in the technical solution of the present application, since the location-by-location correlation coding is performed when the cooperative operation feature matrix is obtained by performing the correlation coding on the equipment interest feature vector and the learner interest feature vector, which are obtained through the equipment target detection network and the learner target detection network, respectively, a distribution deviation inevitably exists in the distribution direction, which causes a local abnormal feature distribution to exist in the cooperative operation feature matrix. Moreover, the local abnormal feature distribution existing in the cooperative operation feature matrix is amplified by a three-dimensional convolution kernel of a convolution neural network model for simultaneously extracting the feature correlation between matrixes and the feature local correlation in the matrixes, so that the more serious local abnormality of the feature distribution is caused in the classification feature map, and the induction deviation of classification is caused when the classification is carried out through a classifier.
Therefore, before the classification feature map is classified by the classifier, the classification feature map is optimized by the micro-operator transformation of the classification bias, which is expressed as:
[The micro-operator transformation appears only as an image in the original publication.]
wherein $f$ denotes the feature value at a given position of the classification feature map and $\log$ denotes the logarithm to base 2.
That is, for the induction bias of the high-dimensional feature distribution of the classification feature map under the classification problem, the bias is converted into an informative expression combination of differentiable micro-operators based on an inductive constraint form of the induction convergence rate, and the induction constraint under the classification problem converges to the decision domain under the class-probability restriction, so that the certainty of the induction result for the target problem is improved. In this way, the accuracy of the classification result of the classification feature map is improved even when induction bias exists.
Based on this, the present application provides a data management method for an intelligent laboratory management platform, which includes: acquiring an experiment operation monitoring video of a learner, which is acquired by a camera deployed in a laboratory; extracting a plurality of operation monitoring key frames from the experiment operation monitoring video; respectively enabling each operation monitoring key frame in the operation monitoring key frames to pass through an equipment target detection network and a learner target detection network to obtain a plurality of equipment interested areas and a plurality of learner interested areas; converting each equipment interested region in the equipment interested regions into equipment interested feature vectors through a first linear embedding layer respectively to obtain a plurality of equipment interested feature vectors; converting each learner interested region in the plurality of learner interested regions into a learner interested feature vector through a second linear embedding layer to obtain a plurality of learner interested feature vectors; performing correlation coding on the plurality of equipment interest feature vectors and the plurality of learner interest feature vectors to obtain a plurality of co-operating feature matrices; arranging the plurality of cooperative operation feature matrixes into a three-dimensional input tensor, and then obtaining a classification feature map by using a convolution neural network model of a three-dimensional convolution kernel; performing feature distribution optimization on the classification feature map to obtain an optimized classification feature map; and passing the optimized classification characteristic diagram through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the experimental operation of the learner is normative or not.
Fig. 1 is a schematic view illustrating a scenario of a data management method of an intelligent laboratory management platform according to an embodiment of the present disclosure. As shown in fig. 1, in this application scenario, an experimental operation monitoring video of a learner (e.g., le as illustrated in fig. 1) acquired by a camera (e.g., C as illustrated in fig. 1) deployed within a laboratory (e.g., L as illustrated in fig. 1) is first acquired. In turn, the experimental operation monitoring video is input into a server (e.g., S as illustrated in fig. 1) deployed with a data management algorithm of a smart laboratory management platform, wherein the server can process the experimental operation monitoring video with the data management algorithm of the smart laboratory management platform to obtain a classification result representing whether the experimental operation of the learner is normative.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary method
Fig. 2 is a flowchart illustrating a data management method of an intelligent laboratory management platform according to an embodiment of the present disclosure. As shown in fig. 2, a data management method of an intelligent laboratory management platform according to an embodiment of the present application includes: s110, acquiring an experiment operation monitoring video of the learner, which is acquired by a camera deployed in a laboratory; s120, extracting a plurality of operation monitoring key frames from the experiment operation monitoring video; s130, enabling each operation monitoring key frame in the operation monitoring key frames to pass through an equipment target detection network and a learner target detection network respectively to obtain a plurality of equipment interested areas and a plurality of learner interested areas; s140, converting each equipment interested region in the equipment interested regions into equipment interested feature vectors through a first linear embedding layer respectively to obtain a plurality of equipment interested feature vectors; s150, converting each learner interested region in the learner interested regions into a learner interested feature vector through a second linear embedding layer respectively to obtain a plurality of learner interested feature vectors; s160, performing correlation coding on the equipment interest feature vectors and the learner interest feature vectors to obtain a plurality of cooperative operation feature matrixes; s170, after the plurality of cooperative operation feature matrixes are arranged into a three-dimensional input tensor, a classification feature map is obtained by using a convolution neural network model of a three-dimensional convolution kernel; s180, performing feature distribution optimization on the classification feature map to obtain an optimized classification feature map; and S190, passing the optimized classification characteristic diagram through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the experimental operation of the learner is normative or not.
Fig. 3 is a schematic diagram illustrating a data management method of an intelligent laboratory management platform according to an embodiment of the present disclosure. As shown in fig. 3, in the network architecture, an experimental operation monitoring video of a learner, which is acquired by a camera deployed in a laboratory, is first acquired, and a plurality of operation monitoring key frames are extracted from the experimental operation monitoring video. And then, respectively passing each operation monitoring key frame in the operation monitoring key frames through an equipment target detection network and a learner target detection network to obtain a plurality of equipment interested areas and a plurality of learner interested areas. Then, each of the plurality of equipment interest regions is converted into an equipment interest feature vector through a first linear embedding layer to obtain a plurality of equipment interest feature vectors, and similarly, each of the plurality of learner interest regions is converted into a learner interest feature vector through a second linear embedding layer to obtain a plurality of learner interest feature vectors. And further, performing correlation coding on the equipment interest feature vectors and the learner interest feature vectors to obtain a plurality of co-operating feature matrixes. Then, after the plurality of cooperative operation feature matrixes are arranged into a three-dimensional input tensor, a classification feature map is obtained through a convolution neural network model using a three-dimensional convolution kernel. And then, carrying out feature distribution optimization on the classification feature map to obtain an optimized classification feature map. And then, the optimized classification characteristic graph is passed through a classifier to obtain a classification result, and the classification result is used for indicating whether the experimental operation of the learner is normative or not.
In step S110, an experimental operation monitoring video of a learner, which is acquired by a camera deployed in a laboratory, is acquired. As described above, an optimized data management scheme for an intelligent laboratory management platform is expected, which can monitor and intelligently analyze the experimental operation of learners to avoid the laboratory accidents caused by the wrong operation.
The development of deep learning and neural networks provides a new solution for the data management of an intelligent laboratory management platform. Specifically, in the technical solution of the present application, the learner's experimental operation is monitored and intelligently analyzed to avoid laboratory accidents caused by misoperation. This is achieved by using a deep-learning-based neural network model as a feature extractor to extract the dynamic features of the learner's experimental operation from the experimental operation monitoring video, and by combining a classifier to classify those dynamics.
More specifically, a camera deployed in a laboratory is used for collecting a monitoring video of the experimental operation of the learner. In order to more fully capture the experimental operation process of the learner, the camera is preferably arranged at the side part of the learner, and the limbs and the experimental equipment of the learner can be in the imaging visual field of the camera when the video is acquired.
In step S120, a plurality of operation monitoring key frames are extracted from the experimental operation monitoring video. It should be appreciated that many consecutive frames in the entire sequence of experimental operations of the experimental operations surveillance video are repeated or similar, resulting in redundancy of information and increased subsequent model computation. In order to solve the problems, before the experimental operation monitoring video is input into a neural network model, the experimental operation monitoring video is sampled. For example, in one specific example of the present application, a plurality of operation monitoring key frames are extracted from the experimental operation monitoring video at a predetermined sampling frequency, where the predetermined sampling frequency is not a fixed value but a set value that can be adaptively adjusted based on an application scene.
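A minimal, purely illustrative sketch of this sampling step is shown below (it assumes OpenCV; the video path and sampling interval are hypothetical placeholders, since the predetermined sampling frequency is adaptively adjusted in practice and is not fixed by the present application):

```python
# Illustrative sketch only: fixed-interval key-frame sampling with OpenCV.
# The video path and the sampling interval are assumptions; the method only
# requires a predetermined (and adaptively adjustable) sampling frequency.
import cv2

def extract_key_frames(video_path, sample_every_n=30):
    """Return every n-th frame of the experiment operation monitoring video."""
    capture = cv2.VideoCapture(video_path)
    key_frames = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every_n == 0:
            key_frames.append(frame)
        index += 1
    capture.release()
    return key_frames

# e.g. roughly one key frame per second from a 30 fps recording:
# key_frames = extract_key_frames("lab_session.mp4", sample_every_n=30)
```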
In step S130, each operation monitoring key frame in the plurality of operation monitoring key frames is respectively passed through an equipment target detection network and a learner target detection network to obtain a plurality of equipment interest areas and a plurality of learner interest areas. It should be understood that, in the course of performing the experiment operation monitoring and analysis, the operation action of the learner and the state characteristics of the experiment equipment are emphasized, so that, in order to avoid the interference of other background information, each operation monitoring key frame in the operation monitoring key frames is respectively passed through the equipment target detection network and the learner target detection network to obtain a plurality of equipment interested areas and a plurality of learner interested areas.
Those skilled in the art will know that deep-learning-based target detection methods are divided into two categories, anchor-based and anchor-free, according to whether an anchor window is used in the network. Anchor-based methods include Fast R-CNN, RetinaNet, and the like, while anchor-free methods include CenterNet, ExtremeNet, RepPoints, and the like. Anchor-free methods overcome the drawbacks introduced by anchor windows, such as difficulty in recognizing targets with large scale variation, imbalance between positive and negative samples during training, and excessive memory consumption, and they are the current mainstream direction of development.
Accordingly, in the technical solution of the present application, the equipment target detection network and/or the learner target detection network is an anchor-free target detection network. More specifically, in the technical solution of the present application, the anchor-free target detection network is YOLOv1, FCOS, CenterNet, ExtremeNet, or RepPoints; the present application does not limit the type of anchor-free target detection network used.
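The following hedged sketch shows what such anchor-free detection could look like with an off-the-shelf FCOS model from torchvision; the pretrained COCO weights and the class indices are assumptions for illustration only, since the detectors of the present application would be trained on laboratory-specific equipment and learner categories:

```python
# Hedged sketch: obtaining learner / equipment regions of interest with an
# anchor-free detector (FCOS, available in recent torchvision releases).
# The pretrained weights and class indices below are illustrative assumptions.
import torch
import torchvision

detector = torchvision.models.detection.fcos_resnet50_fpn(weights="DEFAULT")
detector.eval()

def detect_rois(key_frame, label_id, score_threshold=0.5):
    """key_frame: float tensor (3, H, W) in [0, 1]; returns boxes of one class."""
    with torch.no_grad():
        output = detector([key_frame])[0]
    keep = (output["labels"] == label_id) & (output["scores"] > score_threshold)
    return output["boxes"][keep]  # (N, 4) boxes as (x1, y1, x2, y2)

# learner_rois = detect_rois(frame_tensor, label_id=1)    # COCO "person"
# equipment_rois = detect_rois(frame_tensor, label_id=44) # e.g. COCO "bottle"
```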
In step S140, each of the multiple equipment regions of interest is converted into an equipment interest feature vector through a first linear embedding layer, so as to obtain multiple equipment interest feature vectors. The equipment region of interest and the learner region of interest cannot be directly mathematically related at a data level, taking into account the different dimensions of the equipment region of interest and the learner region of interest. Therefore, in the technical solution of the present application, each equipment region of interest in the multiple equipment regions of interest is converted into an equipment interest feature vector through the first linear embedding layer, so as to obtain multiple equipment interest feature vectors.
Specifically, in this embodiment of the present application, the converting, by a first linear embedding layer, each of the multiple equipment regions of interest into an equipment interest feature vector to obtain multiple equipment interest feature vectors includes: the first linear embedding layer fully-connected encodes each of the plurality of equipment regions of interest using a learnable embedding matrix to obtain the plurality of equipment feature vectors of interest.
In step S150, each of the learner interest regions is converted into a learner interest feature vector through a second linear embedding layer to obtain a plurality of learner interest feature vectors. And similarly, converting each learner interested region in the plurality of learner interested regions into a learner interested feature vector through a second linear embedding layer to obtain a plurality of learner interested feature vectors. In this way, on the one hand, the scale difference between the equipment region of interest and the learner region of interest is unified by the linear embedding layer, and on the other hand, the linear embedding layer encodes the equipment region of interest and the learner region of interest using a learnable embedding matrix to further extract useful information in the equipment region of interest and the learner region of interest.
Specifically, in this embodiment of the present application, the converting each of the plurality of learner interest regions into a learner interest feature vector through a second linear embedding layer to obtain a plurality of learner interest feature vectors includes: the second linear embedding layer fully-concatenates encoding respective ones of the plurality of learner interest regions using a learnable embedding matrix to obtain the plurality of learner interest feature vectors.
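For illustration, the following sketch shows one plausible form of such a linear embedding layer, under the assumption that each region of interest is cropped from the key frame and resized to a fixed patch size before the learnable embedding matrix is applied; the patch size and embedding dimension are hypothetical values, not specified by the present application:

```python
# Minimal sketch of a linear embedding layer: crop -> resize -> fully connected
# encoding with a learnable embedding matrix. Dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RoiLinearEmbedding(nn.Module):
    def __init__(self, patch_size=32, embed_dim=128):
        super().__init__()
        self.patch_size = patch_size
        # The learnable embedding matrix applied as fully connected encoding.
        self.embed = nn.Linear(3 * patch_size * patch_size, embed_dim)

    def forward(self, roi):
        # roi: (3, h, w) crop of an equipment or learner region of interest.
        patch = F.interpolate(roi.unsqueeze(0),
                              size=(self.patch_size, self.patch_size),
                              mode="bilinear", align_corners=False)
        return self.embed(patch.flatten(start_dim=1)).squeeze(0)  # (embed_dim,)

# Two separate instances would play the roles of the first (equipment) and
# second (learner) linear embedding layers:
# equipment_embedding, learner_embedding = RoiLinearEmbedding(), RoiLinearEmbedding()
```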
In step S160, the plurality of equipment interest feature vectors and the plurality of learner interest feature vectors are subjected to associated encoding to obtain a plurality of co-operating feature matrices. That is, the plurality of equipment interest feature vectors and the plurality of learner interest feature vectors are associatively encoded to construct a simultaneous representation of learner actions and state features of the experimental equipment.
Specifically, in this embodiment of the present application, the performing associated encoding on the plurality of equipment interest feature vectors and the plurality of learner interest feature vectors to obtain a plurality of co-operation feature matrices includes: performing correlation coding on the equipment interest feature vector and the learner interest feature vector of the same operation monitoring key frame according to the following formula to obtain the cooperative operation feature matrix; wherein the formula is:
$M = V_l^{\top} \otimes V_e$
wherein $V_l^{\top}$ denotes the transposed vector of the learner interest feature vector corresponding to each of the plurality of operation monitoring key frames, $V_e$ denotes the equipment interest feature vector, $M$ denotes the co-operation feature matrix, and $\otimes$ denotes vector multiplication. Here, the co-operation feature matrix represents the simultaneous representation between the learner's operation action and the state features of the experimental equipment.
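As a minimal illustration, the correlation coding described by the formula above can be sketched as an outer product per key frame; the notation and framework choice are assumptions for illustration, not part of the claims:

```python
# Sketch of the correlation (association) coding per key frame: the product of
# the transposed learner interest feature vector and the equipment interest
# feature vector, i.e. an outer product, yields the co-operation feature matrix.
import torch

def co_operation_matrix(learner_vec, equipment_vec):
    # learner_vec: (d_l,), equipment_vec: (d_e,) -> matrix of shape (d_l, d_e)
    return torch.outer(learner_vec, equipment_vec)
```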
In step S170, the plurality of co-operation feature matrices are arranged into a three-dimensional input tensor and then a classification feature map is obtained by using a convolutional neural network model of a three-dimensional convolution kernel. The convolutional neural network utilizes a convolution kernel as a feature factor to capture the correlation feature of the simultaneous representation between the learner operation action and the state feature of the experimental equipment in time sequence. It should be noted that the three-dimensional convolution kernel has three dimensions: width x height x channel, wherein the channel dimension corresponds to the time dimension of the plurality of co-operating feature matrices, thus, when the three-dimensional convolution kernel is used for three-dimensional convolution coding, the correlation feature of simultaneous representation between the learner operation action and the state feature of the experimental equipment in time sequence can be captured.
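A hedged sketch of this step is given below, with illustrative layer sizes; it assumes that all co-operation feature matrices share the same shape so that they can be stacked along the time (channel) dimension before three-dimensional convolution:

```python
# Hedged sketch: stack the per-frame co-operation matrices into a 3D input
# tensor and encode it with a small 3D-convolutional network. Layer widths and
# kernel sizes are illustrative assumptions.
import torch
import torch.nn as nn
from typing import List

class CoOperation3DEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # kernel spans time, height, width
            nn.BatchNorm3d(16),
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(),
        )

    def forward(self, matrices: List[torch.Tensor]) -> torch.Tensor:
        # matrices: T co-operation matrices, each of shape (d_l, d_e).
        x = torch.stack(matrices, dim=0)     # (T, d_l, d_e)
        x = x.unsqueeze(0).unsqueeze(0)      # (1, 1, T, d_l, d_e)
        return self.net(x)                   # classification feature map

# feature_map = CoOperation3DEncoder()([m1, m2, m3])
```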
In step S180, feature distribution optimization is performed on the classification feature map to obtain an optimized classification feature map. In particular, in the technical solution of the present application, since the location-by-location correlation coding is performed when the cooperative operation feature matrix is obtained by performing the correlation coding on the equipment interest feature vector and the learner interest feature vector, which are obtained through the equipment target detection network and the learner target detection network, respectively, a distribution deviation inevitably exists in the distribution direction, which causes a local abnormal feature distribution to exist in the cooperative operation feature matrix. Moreover, the local abnormal feature distribution existing in the cooperative operation feature matrix is amplified by a three-dimensional convolution kernel of a convolution neural network model for simultaneously extracting the feature correlation between matrixes and the feature local correlation in the matrixes, so that the more serious local abnormality of the feature distribution is caused in the classification feature map, and the induction deviation of classification is caused when the classification is carried out through a classifier.
Therefore, before the classification feature map is classified by the classifier, the classification feature map is optimized by the micro-operator transformation of the classification deviation.
Specifically, in this embodiment of the present application, the performing feature distribution optimization on the classification feature map to obtain an optimized classification feature map includes: performing feature distribution optimization on the classification feature map according to the following formula to obtain an optimized classification feature map; wherein the formula is:
[The optimization formula appears only as an image in the original publication.]
wherein $f$ denotes the feature value at a given position of the classification feature map, $f'$ denotes the feature value at the corresponding position of the optimized classification feature map, and $\log$ denotes the logarithm to base 2.
That is, for the induction bias of the high-dimensional feature distribution of the classification feature map under the classification problem, the bias is converted into an informative expression combination of differentiable micro-operators based on an inductive constraint form of the induction convergence rate, and the induction constraint under the classification problem converges to the decision domain under the class-probability restriction, so that the certainty of the induction result for the target problem is improved. In this way, the accuracy of the classification result of the classification feature map is improved even when induction bias exists.
In step S190, the optimized classification feature map is passed through a classifier to obtain a classification result, and the classification result is used to indicate whether the experimental operation of the learner is normative. Therefore, the experimental operation of the learner is monitored and intelligently analyzed, so that laboratory accidents caused by wrong operation are avoided.
Fig. 4 is a flowchart illustrating how the optimized classification feature map is passed through a classifier to obtain a classification result in the data management method of the intelligent laboratory management platform according to an embodiment of the present disclosure. As shown in fig. 4, passing the optimized classification feature map through the classifier to obtain the classification result, where the classification result is used to indicate whether the experimental operation of the learner is normative, includes: S210, expanding each optimized classification feature matrix of the optimized classification feature map into a classification feature vector by row vectors or column vectors; S220, performing full-connection encoding on the classification feature vector using the fully connected layer of the classifier to obtain an encoded classification feature vector; and S230, inputting the encoded classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
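A minimal sketch of this classifier head, written in PyTorch and assuming a binary normative / non-normative decision; the input size simply follows from the illustrative encoder sketch in step S170 and is not prescribed by the present application.

import torch
import torch.nn as nn

class OperationClassifier(nn.Module):
    """Illustrative classifier head: flatten -> fully connected layer -> Softmax."""
    def __init__(self, in_features: int, num_classes: int = 2):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        v = torch.flatten(feature_map, start_dim=1)  # S210: expand the map into a vector
        logits = self.fc(v)                          # S220: full-connection encoding
        return torch.softmax(logits, dim=1)          # S230: class probabilities

# Example with the (2, 16, 16, 32, 32) feature map from the 3D-encoder sketch
probs = OperationClassifier(16 * 16 * 32 * 32)(torch.randn(2, 16, 16, 32, 32))
# Under this assumed label order, probs[:, 1] would be read as "operation is normative".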
In summary, the data management method of the intelligent laboratory management platform according to the embodiment of the present application has been elucidated. It uses a deep-learning-based neural network model as a feature extractor to extract the dynamic features of the learner's experimental operation monitoring video and classifies the dynamic state of the experimental operation with a classifier, thereby realizing monitoring and intelligent analysis of the learner's experimental operation and avoiding laboratory accidents caused by erroneous operation.
Exemplary System
Fig. 5 is a block diagram of a data management system of an intelligent laboratory management platform according to an embodiment of the present application. As shown in fig. 5, a data management system 100 of an intelligent laboratory management platform according to an embodiment of the present application includes: a monitoring acquisition module 110, configured to acquire an experimental operation monitoring video of the learner collected by a camera deployed in a laboratory; a key frame extraction module 120, configured to extract a plurality of operation monitoring key frames from the experimental operation monitoring video; an interest region identification module 130, configured to pass each operation monitoring key frame of the plurality of operation monitoring key frames through an equipment target detection network and a learner target detection network respectively to obtain a plurality of equipment interest regions and a plurality of learner interest regions; an equipment interest feature vector construction module 140, configured to convert each equipment interest region of the plurality of equipment interest regions into an equipment interest feature vector through a first linear embedding layer to obtain a plurality of equipment interest feature vectors; a learner interest feature vector construction module 150, configured to convert each learner interest region of the plurality of learner interest regions into a learner interest feature vector through a second linear embedding layer to obtain a plurality of learner interest feature vectors; an association module 160, configured to perform correlation coding on the plurality of equipment interest feature vectors and the plurality of learner interest feature vectors to obtain a plurality of co-operation feature matrices; a classification feature map generation module 170, configured to arrange the plurality of co-operation feature matrices into a three-dimensional input tensor and obtain a classification feature map by using a convolutional neural network model with a three-dimensional convolution kernel; an optimization module 180, configured to perform feature distribution optimization on the classification feature map to obtain an optimized classification feature map; and a result generation module 190, configured to pass the optimized classification feature map through a classifier to obtain a classification result, where the classification result is used to indicate whether the experimental operation of the learner is normative.
In an example, in the data management system of the intelligent laboratory management platform, the key frame extraction module 120 is further configured to: extracting the plurality of operational monitoring keyframes from the experimental operational monitoring video at a predetermined sampling frequency.
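Purely as an illustration, fixed-frequency key-frame extraction of this kind could be sketched with OpenCV as follows; the function name and the 30-frame sampling interval are assumed example values, not ones specified by the present application.

import cv2  # OpenCV

def extract_key_frames(video_path: str, sampling_interval: int = 30):
    """Keep one frame every `sampling_interval` frames of the monitoring video."""
    capture = cv2.VideoCapture(video_path)
    key_frames, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sampling_interval == 0:
            key_frames.append(frame)
        index += 1
    capture.release()
    return key_frames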
In one example, in the data management system of the intelligent laboratory management platform, the equipment target detection network and/or the learner target detection network is an anchor-free target detection network, i.e., a target detection network that does not use anchor windows.
In one example, in the data management system of the intelligent laboratory management platform, the anchor-free target detection network is YOLOv1, FCOS, CenterNet, ExtremeNet or RepPoints.
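As an illustrative sketch only, one of the listed anchor-free detectors (FCOS) is available in torchvision and could stand in for the equipment and learner target detection networks; the three-class label space (background / equipment / learner) and the score threshold are assumptions, and a deployed system would use detectors trained on laboratory-specific categories.

import torch
from torchvision.models.detection import fcos_resnet50_fpn

# Randomly initialised stand-in detector; it would need to be trained before use.
detector = fcos_resnet50_fpn(num_classes=3).eval()

def detect_regions(frame: torch.Tensor, score_threshold: float = 0.5):
    # frame: (3, H, W) float tensor with values in [0, 1]
    with torch.no_grad():
        output = detector([frame])[0]
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["labels"][keep]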
In one example, in the data management system of the intelligent laboratory management platform, the equipment interest feature vector construction module 140 is further configured such that the first linear embedding layer performs full-connection encoding on each of the plurality of equipment interest regions using a learnable embedding matrix to obtain the plurality of equipment interest feature vectors.
In one example, in the data management system of the intelligent laboratory management platform, the learner interest feature vector construction module 150 is further configured such that the second linear embedding layer performs full-connection encoding on each of the plurality of learner interest regions using a learnable embedding matrix to obtain the plurality of learner interest feature vectors.
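A minimal sketch of such a linear embedding layer, assuming each region of interest is cropped and resized to a 64 × 64 patch and projected to a 128-dimensional interest feature vector; both sizes are illustrative assumptions rather than values given by the present application.

import torch
import torch.nn as nn

class RegionEmbedding(nn.Module):
    """Illustrative linear embedding: flatten a cropped region and project it
    with a learnable embedding matrix (a fully connected layer)."""
    def __init__(self, crop_size: int = 64, embed_dim: int = 128):
        super().__init__()
        self.embedding = nn.Linear(3 * crop_size * crop_size, embed_dim)

    def forward(self, roi: torch.Tensor) -> torch.Tensor:
        # roi: (batch, 3, crop_size, crop_size) -> (batch, embed_dim)
        return self.embedding(torch.flatten(roi, start_dim=1))

equipment_embedding = RegionEmbedding()  # plays the role of the first linear embedding layer
learner_embedding = RegionEmbedding()    # plays the role of the second linear embedding layer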
In an example, in the data management system of the intelligent laboratory management platform, the association module 160 is further configured to: performing correlation coding on the equipment interest feature vector and the learner interest feature vector of the same operation monitoring key frame according to the following formula to obtain the cooperative operation feature matrix; wherein the formula is:
M = V_a^T ⊗ V_b

wherein V_a denotes the learner interest feature vector corresponding to each of the plurality of operation monitoring key frames and V_a^T denotes its transposed vector, V_b denotes the equipment interest feature vector, M denotes the co-operation feature matrix, and ⊗ denotes vector multiplication.
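Read this way, the correlation coding of a single key frame amounts to an outer product of the two interest feature vectors; a minimal PyTorch sketch, assuming the 128-dimensional embeddings used in the sketch above:

import torch

def co_operation_matrix(learner_vec: torch.Tensor, equipment_vec: torch.Tensor) -> torch.Tensor:
    """Outer product of the transposed learner interest feature vector and the
    equipment interest feature vector, giving one co-operation feature matrix."""
    return torch.outer(learner_vec, equipment_vec)

# Example: two 128-dimensional interest feature vectors give a 128 x 128 matrix
m = co_operation_matrix(torch.randn(128), torch.randn(128))
print(m.shape)  # torch.Size([128, 128])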
In an example, in the data management system of the intelligent laboratory management platform, the optimization module 180 is further configured to: performing feature distribution optimization on the classification feature map according to the following formula to obtain an optimized classification feature map; wherein the formula is:
(formula shown in the accompanying figure)

wherein f_{i,j,k} is the feature value of the (i, j, k)-th position of the classification feature map, f'_{i,j,k} is the feature value of the (i, j, k)-th position of the optimized classification feature map, and log denotes the logarithm to base 2.
In an example, in the data management system of the intelligent laboratory management platform, the result generation module 190 is further configured to: expand each optimized classification feature matrix of the optimized classification feature map into a classification feature vector by row vectors or column vectors; perform full-connection encoding on the classification feature vector using the fully connected layer of the classifier to obtain an encoded classification feature vector; and input the encoded classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
Here, it can be understood by those skilled in the art that the detailed functions and operations of the respective units and modules in the data management system 100 of the above-described intelligent laboratory management platform have been described in detail in the above description of the data management method of the intelligent laboratory management platform with reference to fig. 1 to 4, and thus, a repetitive description thereof will be omitted.
As described above, the data management system 100 of the intelligent laboratory management platform according to the embodiment of the present application may be implemented in various terminal devices, such as a server for data management of the intelligent laboratory management platform. In one example, the data management system 100 of the intelligent laboratory management platform according to the embodiment of the present application may be integrated into the terminal device as a software module and/or a hardware module. For example, the data management system 100 of the intelligent laboratory management platform may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the data management system 100 of the intelligent lab management platform can also be one of the hardware modules of the terminal device.
Alternatively, in another example, the data management system 100 of the intelligent laboratory management platform and the terminal device may also be separate devices, and the data management system 100 of the intelligent laboratory management platform may be connected to the terminal device through a wired and/or wireless network and transmit the interaction information according to an agreed data format.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 6. Fig. 6 is a block diagram of an electronic device according to an embodiment of the application. As shown in fig. 6, the electronic device 10 includes one or more processors 11 and memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the functions of the data management method of the intelligent laboratory management platform of the various embodiments of the present application described above and/or other desired functions. Various contents such as an experimental operation monitoring video of a learner may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 13 may include, for example, a keyboard, a mouse, and the like.
The output device 14 can output various information including the classification result to the outside. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 6, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps of the data management method of the intelligent laboratory management platform according to various embodiments of the present application described in the "exemplary methods" section of this specification above.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps of the data management method of the intelligent laboratory management platform according to various embodiments of the present application described in the "exemplary methods" section above of this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (8)

1. A data management method of an intelligent laboratory management platform is characterized by comprising the following steps:
acquiring an experiment operation monitoring video of a learner, which is acquired by a camera deployed in a laboratory;
extracting a plurality of operation monitoring key frames from the experiment operation monitoring video;
obtaining a plurality of equipment interest regions and a plurality of learner interest regions by respectively passing each operation monitoring key frame of the plurality of operation monitoring key frames through an equipment target detection network and a learner target detection network;
converting each equipment interest region in the plurality of equipment interest regions into an equipment interest feature vector through a first linear embedding layer respectively to obtain a plurality of equipment interest feature vectors;
converting each learner interest region in the plurality of learner interest regions into a learner interest feature vector through a second linear embedding layer respectively to obtain a plurality of learner interest feature vectors;
performing correlation coding on the plurality of equipment interest feature vectors and the plurality of learner interest feature vectors to obtain a plurality of co-operation feature matrices;
arranging the plurality of co-operation feature matrices into a three-dimensional input tensor, and then obtaining a classification feature map by using a convolutional neural network model with a three-dimensional convolution kernel;
performing feature distribution optimization on the classification feature map to obtain an optimized classification feature map; and
passing the optimized classification characteristic diagram through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the experimental operation of the learner is normative or not;
wherein, the performing feature distribution optimization on the classification feature map to obtain an optimized classification feature map includes:
performing feature distribution optimization on the classification feature map according to the following formula to obtain an optimized classification feature map;
wherein the formula is:
(formula shown in the accompanying figure)
wherein f_{i,j,k} is the feature value of the (i, j, k)-th position of the classification feature map, f'_{i,j,k} is the feature value of the (i, j, k)-th position of the optimized classification feature map, and log is the logarithm to base 2.
2. The method of claim 1, wherein the extracting the plurality of operation monitoring key frames from the experiment operation monitoring video comprises: extracting the plurality of operational monitoring key frames from the experimental operational monitoring video at a predetermined sampling frequency.
3. The data management method of the intelligent laboratory management platform according to claim 2, wherein the equipment target detection network and/or the learner target detection network is an anchor-free target detection network.
4. The data management method of the intelligent laboratory management platform according to claim 3, wherein the anchor-free target detection network is YOLOv1, FCOS, CenterNet, ExtremeNet or RepPoints.
5. The data management method of the intelligent laboratory management platform according to claim 4, wherein the converting each equipment region of interest of the plurality of equipment regions of interest into equipment interest feature vectors through a first linear embedding layer to obtain a plurality of equipment interest feature vectors comprises:
the first linear embedding layer fully-connected encodes each of the plurality of equipment regions of interest using a learnable embedding matrix to obtain the plurality of equipment feature vectors of interest.
6. The data management method of the intelligent laboratory management platform according to claim 5, wherein the converting each learner interest region of the plurality of learner interest regions into a learner interest feature vector through a second linear embedding layer to obtain a plurality of learner interest feature vectors comprises:
the second linear embedding layer performs full-connection encoding on each of the plurality of learner interest regions using a learnable embedding matrix to obtain the plurality of learner interest feature vectors.
7. The data management method of the intelligent laboratory management platform according to claim 6, wherein the performing correlation coding on the plurality of equipment interest feature vectors and the plurality of learner interest feature vectors to obtain a plurality of co-operation feature matrices comprises:
performing correlation coding on the equipment interest feature vector and the learner interest feature vector of the same operation monitoring key frame according to the following formula to obtain the cooperative operation feature matrix;
wherein the formula is:
M = V_a^T ⊗ V_b
wherein V_a^T represents the transposed vector of the learner interest feature vector corresponding to each of the plurality of operation monitoring key frames, V_b represents the equipment interest feature vector, M represents the co-operation feature matrix, and ⊗ represents vector multiplication.
8. The data management method of the intelligent laboratory management platform according to claim 7, wherein the passing the optimized classification feature map through a classifier to obtain a classification result, the classification result being used to indicate whether the experimental operation of the learner is normative, comprises:
expanding each optimized classification feature matrix of the optimized classification feature map into a classification feature vector by row vectors or column vectors;
performing full-connection encoding on the classification feature vector using the fully connected layer of the classifier to obtain an encoded classification feature vector; and
inputting the encoded classification feature vector into a Softmax classification function of the classifier to obtain the classification result.
CN202211366399.6A 2022-11-03 2022-11-03 Data management method of intelligent laboratory management platform Active CN115471216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211366399.6A CN115471216B (en) 2022-11-03 2022-11-03 Data management method of intelligent laboratory management platform


Publications (2)

Publication Number Publication Date
CN115471216A CN115471216A (en) 2022-12-13
CN115471216B true CN115471216B (en) 2023-03-24

Family

ID=84338189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211366399.6A Active CN115471216B (en) 2022-11-03 2022-11-03 Data management method of intelligent laboratory management platform

Country Status (1)

Country Link
CN (1) CN115471216B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116794975B (en) * 2022-12-20 2024-02-02 维都利阀门有限公司 Intelligent control method and system for electric butterfly valve
CN116127019A (en) * 2023-03-07 2023-05-16 杭州国辰智企科技有限公司 Dynamic parameter and visual model generation WEB 2D automatic modeling engine system
CN116343134A (en) * 2023-05-30 2023-06-27 山西双驱电子科技有限公司 System and method for transmitting driving test vehicle signals

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3352112A1 (en) * 2017-01-20 2018-07-25 Nokia Technologies Oy Architecture adapted for recognising a category of an element from at least one image of said element

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989608A (en) * 2021-12-01 2022-01-28 西安电子科技大学 Student experiment classroom behavior identification method based on top vision
CN115205727A (en) * 2022-05-31 2022-10-18 上海锡鼎智能科技有限公司 Experiment intelligent scoring method and system based on unsupervised learning
CN115032946B (en) * 2022-06-21 2022-12-06 浙江同发塑机有限公司 Blow molding control method and system of blow molding machine
CN115147655A (en) * 2022-07-12 2022-10-04 温州宁酷科技有限公司 Oil gas gathering and transportation monitoring system and method thereof


Also Published As

Publication number Publication date
CN115471216A (en) 2022-12-13

Similar Documents

Publication Publication Date Title
CN115471216B (en) Data management method of intelligent laboratory management platform
RU2703343C2 (en) Relevancy assessment for artificial neural networks
Wang et al. Research on healthy anomaly detection model based on deep learning from multiple time-series physiological signals
CN116168352B (en) Power grid obstacle recognition processing method and system based on image processing
CN115859437A (en) Jacket underwater stress detection system based on distributed optical fiber sensing system
CN116015837A (en) Intrusion detection method and system for computer network information security
WO2024060684A1 (en) Model training method, image processing method, device, and storage medium
CN116247824B (en) Control method and system for power equipment
CN116343301B (en) Personnel information intelligent verification system based on face recognition
CN117077075A (en) Water quality monitoring system and method for environmental protection
CN116665086A (en) Teaching method and system based on intelligent analysis of learning behaviors
CN114387567A (en) Video data processing method and device, electronic equipment and storage medium
CN115620303A (en) Personnel file intelligent management system
CN114067286A (en) High-order camera vehicle weight recognition method based on serialized deformable attention mechanism
CN112733785A (en) Stability detection method of information equipment based on layer depth and receptive field
CN116759053A (en) Medical system prevention and control method and system based on Internet of things system
CN114926767A (en) Prediction reconstruction video anomaly detection method fused with implicit space autoregression
CN112766810A (en) Neural network training method for intelligent comprehensive overall quality evaluation
CN117557941A (en) Video intelligent analysis system and method based on multi-mode data fusion
CN117316462A (en) Medical data management method
CN116994209A (en) Image data processing system and method based on artificial intelligence
CN117115581A (en) Intelligent misoperation early warning method and system based on multi-mode deep learning
CN114842183A (en) Convolutional neural network-based switch state identification method and system
CN114627370A (en) Hyperspectral image classification method based on TRANSFORMER feature fusion
CN112434973A (en) Key area safety control index evaluation method based on one-dimensional and two-dimensional convolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant