CN114863734B - Evaluation method for organic polymer material synthesis experiment - Google Patents
- Publication number
- CN114863734B CN114863734B CN202210789293.0A CN202210789293A CN114863734B CN 114863734 B CN114863734 B CN 114863734B CN 202210789293 A CN202210789293 A CN 202210789293A CN 114863734 B CN114863734 B CN 114863734B
- Authority
- CN
- China
- Prior art keywords
- experiment
- video
- neural network
- network model
- demonstration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/02—Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B23/00—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
- G09B23/24—Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for chemistry
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
An evaluation method for an organic polymer material synthesis experiment integrates experiment demonstration and supervision: the experimental procedure is divided into key steps, and a corresponding demonstration video is played for each step. While the operator performs the experiment by following the video, the operator's use of the experimental raw materials is supervised through a camera, risk behaviors are identified with an optimized neural network model, and each risk event is reported. The method effectively improves the standardization and safety of the organic polymer material synthesis experiment.
Description
Technical Field
The invention belongs to the field of organic polymer synthesis, and particularly relates to an evaluation method for an organic polymer material synthesis experiment.
Background
The synthesis of organic polymer materials involves large quantities of flammable and volatile experimental raw materials and therefore carries inherent risk. When carrying out the experiment, the operator must follow the experimental procedure strictly, develop good safety awareness and safety habits, use the experimental raw materials according to the experimental rules, and strictly observe the prescribed order of operations. Protecting the operator's safety is the basic precondition for carrying out the experiment; a correct method is therefore needed for demonstrating the organic polymer material synthesis experiment to the operator and for supervising and evaluating how the experimental raw materials are used.
In the prior art, usage behavior during experiments is mostly evaluated (i.e., supervised) manually, which is time-consuming and labor-intensive. With multiple operators, supervision may not be timely or accurate, creating experimental risk. Camera-based image monitoring has been proposed, but no suitable algorithm tailored to supervising synthesis experiments exists, so supervision accuracy is low and such systems have not reached practical use. Moreover, no quantifiable target suitable for computer-vision supervision of experimental risk has yet been identified. Supervising the operator's motions has been proposed, but this involves complex kinematics, the algorithms remain immature, and the supervision results are poor. Supervising phenomena such as temperature rise, smoke, or explosion has also been proposed; however, some of these are normal in experiments and others occur too quickly for monitoring to be meaningful. What object to supervise with a computer, and with what supervision strategy, therefore remains an urgent problem for controlling experimental risk.
Disclosure of Invention
The invention innovatively provides a demonstration and supervision evaluation method for organic polymer material synthesis experiment teaching that integrates experiment demonstration and supervision: the experimental procedure is divided into key steps, and a corresponding demonstration video is played for each step. While the operator performs the experiment by following the video, the operator's use of the experimental raw materials is supervised through a camera, risk behaviors are identified with the optimized neural network model, and each risk event is reported. The method can effectively improve the standardization and safety of the organic polymer material synthesis experiment.
An evaluation method of organic polymer material synthesis experiment,
dividing a complete experiment into several key steps; setting the experimental reagents to be used in each key step; playing a corresponding demonstration video to the operator in each key step, the operator performing the experiment according to the demonstration video; during the demonstration and experiment processes, supervising the experiment process with an automatic intelligent computer algorithm, and identifying and reporting the risk behavior of using unnecessary reagents;
in the demonstration and experiment processes, a video camera collects the video of an operator and preprocesses the video;
decomposing the preprocessed video into multiple frames of images, and processing each frame of image by using a scale factor and a displacement factor;
inputting each processed frame image into a neural network model NN for identification, thereby identifying risk behaviors in the demonstration process and in the experiment process, specifically comprising the following steps: A1, after the demonstration process starts, acquiring one frame of the video and performing sub-graph traversal on that frame image; A2, the traversal method being as follows: intercepting subgraphs of different displacements and scalings as the input of the neural network model NN and obtaining the output γ(z); when a certain output dimension γ(z) takes the maximum value, the subgraph represents an experimental-reagent image of the corresponding kind; A3, obtaining from the traversal result the positions of the different reagents in the image, namely the positions and sizes of the subgraphs; A4, if not all kinds of reagents can be found in the first frame image, continuing to collect the next frame of the video and repeating A2 and A3 until the positions of all kinds of reagents in the image are obtained; A5, after A1–A4 are finished, continuing to collect the next frame of the video and, according to the positions of the various reagents in the image, inputting the subgraphs at the corresponding positions into the neural network model NN for detection; A6, if the output of model NN no longer indicates a subgraph of the corresponding category, considering that the reagent has been moved or used during the demonstration stage, and sending a risk warning including the laboratory bench number and raw-material kind to the control software; B1, after the experiment process starts, acquiring a frame of the video, removing the allowed reagents according to the kinds of reagents required by the key step, and inputting the subgraphs at the positions corresponding to the various unnecessary reagents into the neural network model NN for detection; B2, if the output of the neural network model NN no longer indicates a subgraph of the corresponding category, considering that the reagent has been moved or used during the experiment stage, and sending a risk warning including the laboratory bench number and raw-material kind to the control software;
the output of the neural network model is represented,representing elements in the 5 th hidden layerAnd output layer elementsThe connection of (a) to (b),which is indicative of a linear offset of the bias,is a process vector.
Identifying risk behaviors during the demonstration process and during the experiment process includes: after several frames are continuously collected, sending the risk warning only if the output of model NN fails to indicate a subgraph of the corresponding category in all of those frames, which reduces the false-alarm rate.
After the demonstration process of the key step is completed, the experimental process of the key step is manually controlled.
The evaluation method is carried out simultaneously in a plurality of test areas, and each test area comprises a camera, a display, a test bed and a reagent.
And arranging a display in each experimental area for playing demonstration videos of the course experiment.
The camera is used for collecting behavior videos of an operator in the experiment process.
And the system also comprises a control computer which is used for implementing 1) switching of key steps, 2) switching of demonstration processes and experimental processes in the key steps and 3) monitoring risk behaviors.
When the key step is switched, entering the demonstration process of the key step; after the demonstration is finished, the control is switched to the experimental process of the key step.
A system for implementing the above supervision method comprises a camera, a display, and a control computer.
The control computer stores a neural network model NN which is trained in advance.
The invention has the advantages that:
1. A method for demonstrating and evaluating (supervising) the organic polymer material synthesis experiment is innovatively provided: a complete experiment is divided into several key steps; the experimental raw materials to be used are set for each key step, and the progression of steps is controlled by a teacher; a corresponding demonstration video is played for the operator in each key step, and the operator performs the experiment according to the video. In particular, supervision is divided into a demonstration process and an experiment process, and the risk to be prompted is embodied as the movement of experimental raw materials, which makes supervision of the experiment process by an automatic intelligent computer algorithm possible. A large amount of experimental data shows that the method can quickly and accurately identify and report the risk behavior of using unnecessary experimental raw materials.
2. Aimed at the characteristics of organic polymer synthesis experiments, the neural network model structure (including the excitation function) is optimized: a multi-layer, multi-scale network structure identifies image features at different scales, so the different kinds of experimental raw materials in the collected video can be recognized automatically, more quickly and more accurately. Meanwhile, to facilitate neural-network processing, the collected video data is preprocessed, which reduces the computation load of the network while preserving the integrity of the information, further improving efficiency and accuracy.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention. In the drawings:
FIG. 1 is a schematic diagram of a system deployment for use with the present invention.
Detailed Description
1. System structure
Each test zone comprises a camera 1, a display 2, a test stand 3, a reagent 4.
The camera is used to collect behavior videos of the operator during the experiment for subsequent analysis. The display plays the demonstration videos of the course experiment. The test bed carries the various reagents and provides the experimental site. The reagents may be any of the reagents used in the experiment; preferably, the reagent bottles are marked with labels or textures to facilitate identification.
2. Demonstration and supervision method
Step 1: integral demonstration, operation and supervision method for experiment
Dividing a complete course experiment into a plurality of key steps; setting experimental raw materials to be used in each key step; playing a corresponding demonstration video to an operator in each key step, and carrying out an experiment by the operator according to the demonstration video; in the demonstration and experiment processes, the experiment process is supervised by a computer automatic intelligent algorithm, and risk behaviors using unnecessary experiment raw materials are identified and reported.
And arranging a display in each experimental area for playing demonstration videos of the course experiment.
A control computer is deployed at the teacher end and control software is installed on it; through the control software the teacher implements 1) key-step switching, 2) switching between the demonstration process and the experimental process within a key step, and 3) risk-behavior monitoring.
The teacher switches the control software to a key step of the experiment, and the control software selects the corresponding demonstration video and plays it on the display of each experimental area. After the key step is switched, the control software starts the camera of each experimental area over the wireless network to shoot video, identifies risk behaviors in the collected video with the automatic intelligent algorithm, and reports to the teacher in the control software whenever a risk behavior is identified.
When the key step is switched, entering the demonstration process of the key step; after the demonstration is finished, the teacher sets the experimental process of entering the key step in the control software.
Step 2: experimental area video acquisition method and pretreatment method
A camera is deployed in each experimental area to collect behavior videos of the operator during the demonstration and experiment processes for analysis; while the video is collected, compression preprocessing is performed on the video data at the camera end to reduce the data flow. The method is described in detail below.
The system comprises a camera for acquiring video, supporting equipment for fixing the camera, and background processing equipment for receiving the camera's video data. The camera is mounted on the supporting equipment and switches on when connected to power; it shoots video of the experimental area, with its mounting height and orientation adjusted so that the experimental raw materials lie within its field of view. The acquired video is preprocessed, and the video data is transmitted to the background processing equipment over the wireless local area network.
The video camera preprocesses the video data so as to reduce the data flow, so that the same background processing equipment can process the video data from a plurality of experimental areas in real time, and synchronous supervision is realized. The pretreatment method is as follows.
Let each frame of the video directly collected by the camera have size M × N. A time-sequence window W is set; the W frames of video inside the window form a three-dimensional matrix of dimensions M × N × W, recorded as V(n, m, w) (formula (1); the formula image is not reproduced in this text).
The preprocessing transform is then defined by formulas (2)–(4) (likewise given as images in the original). In the third formula, ∂V/∂n, ∂V/∂m, ∂V/∂w respectively represent the partial derivatives of the matrix V in the n, m, and w directions, computed with unit increments in the three directions.
The spatial-neighborhood size of an image pixel and the time-window size W are taken at preferred values, with W = 6; an empirical threshold, preferred by experiment, is also used.
According to formula (4), the spatial size of the video is reduced and the video time length is shortened; data traffic is thereby optimized while video quality is maintained.
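The camera-side compression described above can be sketched roughly in Python. The formulas survive only as image placeholders, so the block-averaging neighborhood size `s`, the activity threshold `theta`, and the use of the temporal gradient as the activity measure are all assumptions; only W = 6 and the existence of an empirical threshold are stated in the text.

```python
import numpy as np

def preprocess(frames, s=4, W=6, theta=10.0):
    """Sketch of the camera-side compression: collapse each W-frame
    window in time and block-average s x s spatial neighborhoods,
    keeping a window only if its temporal variation exceeds theta.
    (s and theta are assumed values, not taken from the patent.)"""
    M, N = frames.shape[1], frames.shape[2]
    out = []
    for t in range(0, len(frames) - W + 1, W):
        win = frames[t:t + W]                       # W x M x N window
        # mean absolute temporal gradient, used as an activity measure
        activity = np.abs(np.diff(win, axis=0)).mean()
        if activity < theta:
            continue                                # drop static windows
        avg = win.mean(axis=0)                      # collapse the time axis
        # spatial block-average: M x N -> (M//s) x (N//s)
        avg = avg[:M - M % s, :N - N % s]
        small = avg.reshape(M // s, s, N // s, s).mean(axis=(1, 3))
        out.append(small)
    return np.array(out)
```

With W = 6 this shortens the time axis by a factor of six and shrinks each frame by s² in area, which matches the stated goal of reducing data flow while keeping a usable image.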
And 3, step 3: identification method of experimental risk behaviors
Risk behaviors are identified from the video shot in each experimental area: the video is processed by the automatic intelligent algorithm, and the control software outputs a signal for each risk behavior found in the video.
According to the characteristics and basic experiment requirements of organic polymer material synthesis experiments, the experimental risk behaviors are classified into two types, namely, experimental raw materials are used in the demonstration process, and unnecessary experimental raw materials are used in the experimental process.
According to the two types of behaviors, respectively proposing a recognition algorithm for:
s31, identifying risk behaviors in the demonstration process;
and S32, identifying risk behaviors in the experimental process.
The neural network model NN is established and used in the methods S31 and S32.
The neural network model NN is composed of 1 input layer, 5 hidden layers, and 1 output layer.
The input layer of NN is a subgraph of one frame of the experimental-area video preprocessed in step 2, represented as X_NN(x, y) = V′(α_x·x + β_x, α_y·y + β_y, w₀) (formula (5)).
In the formula, V′ denotes the preprocessed experimental-area video; n, m denote the spatial coordinates of pixels within one frame, and w₀ denotes the frame number of the video. x, y denote coordinates in the subgraph, i.e., input-layer X_NN coordinates. α_x, α_y denote the scale factors of the subgraph, i.e., its relative scaling with respect to the original video image; β_x, β_y denote the displacement factors, i.e., the relative position of the subgraph in the original image. Through the scale factors and displacement factors, any sub-area (subgraph) of the original image can be mapped onto the fixed-size input layer X_NN. The object to be identified can thus be positioned accurately and given a suitable proportion in the image, avoiding targets that are too small to identify.
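The scale-and-displacement mapping onto the fixed-size input layer can be sketched as follows. Nearest-neighbour sampling and the default input size of 64 are assumptions not stated in the source.

```python
import numpy as np

def extract_subgraph(frame, alpha_x, alpha_y, beta_x, beta_y, size=64):
    """Map a sub-region of `frame` onto a fixed size x size input layer
    via scale factors (alpha) and displacement factors (beta), i.e.
    X(x, y) = V'(alpha_x*x + beta_x, alpha_y*y + beta_y).
    Nearest-neighbour sampling; clipping keeps indices inside the frame."""
    xs = np.clip((alpha_x * np.arange(size) + beta_x).astype(int),
                 0, frame.shape[0] - 1)
    ys = np.clip((alpha_y * np.arange(size) + beta_y).astype(int),
                 0, frame.shape[1] - 1)
    return frame[np.ix_(xs, ys)]
```

Traversal (step A2 below) then amounts to sweeping (α_x, α_y, β_x, β_y) over a grid of scales and positions and feeding each extracted subgraph to the model.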
The 1st hidden layer of NN is the following mapping of the input layer (formula (6)): a convolution of the input with matrix convolution windows, passed through the excitation function σ. Here u, v denote the spatial coordinates within a convolution window; the window size is 11 × 11. k denotes the number of the matrix convolution window, k = 1, …, 32, i.e., there are 32 convolution windows. ϑ₁ denotes a linear bias (offset). H¹ denotes the 1st hidden layer, and x, y, k are its node coordinates.
σ (formula (7)) is the nonlinear excitation function, which gives the neural network model its nonlinear fitting capacity for the data. As an improvement, a parameter τ is introduced to control the convergence speed of the excitation function, which improves the model's adaptability to different types of recognition objects. τ is adjusted by experiment to a preferred value.
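The exact definition of σ (the patent's formula (7)) is given only as an image in the original; a minimal sketch, assuming a sigmoid whose slope — and hence convergence speed — is set by τ:

```python
import numpy as np

def sigma(x, tau=1.0):
    """Assumed form of the excitation function: a sigmoid whose slope
    is controlled by tau (smaller tau -> steeper, faster saturation).
    The patent's exact formula (7) is not recoverable from the text."""
    return 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float) / tau))
```

Under this assumption, decreasing τ sharpens the transition around zero, which is one plausible reading of "controlling the convergence speed of the excitation function".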
The 2nd hidden layer H² is a max-pooling of H¹: max takes the maximum value within a window 0 ≤ u, v ≤ 3, and ϑ₂ denotes a linear bias. The formula maps each 4 × 4 block of H¹ nodes to one node of H² by taking the maximum value; the spatial size of H² is thus reduced to 1/16 of that of H¹. σ is defined as in (7).
The 3rd hidden layer H³ is again a convolution: u, v denote the spatial coordinates within a convolution window of size 7 × 7, k denotes the number of the matrix convolution window, and ϑ₃ denotes a linear bias. H³ denotes the 3rd hidden layer, and x, y, k are its node coordinates.
The 4th hidden layer H⁴ is another max-pooling: max takes the maximum value within a window 0 ≤ u, v ≤ 3, and ϑ₄ denotes a linear bias. The formula maps each 4 × 4 block of H³ nodes to one node of H⁴ by taking the maximum value; the spatial size of H⁴ is thus reduced to 1/16 of that of H³. σ is defined as in (7).
The 1st to 4th hidden layers of the neural network establish a multi-scale network structure that identifies image features at different scales, so that both container features and raw-material features can be detected, improving detection performance. At the same time, the number of nodes in the downstream layers of the network is reduced, improving the model's computational efficiency.
The 5th hidden layer H⁵ is a vector; z denotes the vector coordinate, i.e., the vector dimension, which is 256. w⁵ denotes the linear connection between the sum of the 32 nodes sharing the same spatial coordinates in the 4th hidden layer and the z-th element of the vector; ϑ₅ denotes a linear bias. σ is defined as in (7).
γ denotes the output layer of the neural network model, a vector; z denotes an element of that vector. w⁶ denotes the connection between elements of the 5th hidden layer H⁵ and output-layer element z; ϑ₆ denotes a linear bias. σ is defined as in (7).
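The spatial bookkeeping of the layer stack described above can be traced as follows. "Valid" convolutions, a 64 × 64 input, and K = 8 supervised raw-material kinds are assumptions; only the window sizes (11 × 11, 4 × 4, 7 × 7, 4 × 4), the 32 convolution windows, and the 256-dimensional 5th layer come from the text.

```python
def layer_shapes(inp=64, k=8):
    """Trace the spatial size through the five hidden layers:
    11x11 conv (32 maps), 4x4 max-pool, 7x7 conv, 4x4 max-pool,
    256-d dense vector, K-d output (K = number of reagent kinds)."""
    s = inp
    s = s - 11 + 1      # hidden 1: 11x11 convolution ('valid' assumed)
    s = s // 4          # hidden 2: 4x4 max-pool -> 1/16 of the nodes
    s = s - 7 + 1       # hidden 3: 7x7 convolution
    s = s // 4          # hidden 4: 4x4 max-pool -> 1/16 again
    return [s, 256, k]  # final spatial size, hidden 5, output gamma
```

With a 64 × 64 input the spatial extent collapses to a single position before the dense layers, consistent with the stated goal of reducing downstream node counts.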
Each element of γ represents one kind of experimental raw material: γ(z) = 1 indicates that the input-layer image is raw material of kind z, and γ(z) = 0 indicates that it is not. Raw materials with similar outward appearance can be marked with distinctive textures on their packaging so that they can be distinguished visually in the image. The dimension of γ equals the number of kinds of experimental raw material that require supervision.
Images of the different kinds of experimental raw material are prepared and labeled 0 or 1 according to the γ value rule above, forming the training-sample set for model NN. For a sample with a given label, the sample's input image is computed through equations (5)–(13) to obtain the output γ, and the cost function E (formula (14), given as an image in the original) is calculated.
The control parameter τ is adjusted appropriately according to the training-sample set; taking the relative coefficient between the mean of each class of images and the maximum of the class means as the control parameter for that class is a preferred configuration. The cost function E is minimized by the back-propagation method to learn the parameters of the neural network model in equations (5)–(13).
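Formula (14) itself is not recoverable from the text; a squared-error cost over the K output dimensions, the standard choice minimized by back-propagation, is assumed in this sketch.

```python
import numpy as np

def cost(gamma, label):
    """Assumed form of the cost function E: squared error between the
    network output gamma and the 0/1 label vector, summed over the K
    raw-material kinds. (The patent's formula (14) is an image in the
    original and may differ.)"""
    gamma = np.asarray(gamma, dtype=float)
    label = np.asarray(label, dtype=float)
    return float(np.sum((gamma - label) ** 2))
```

Training then consists of repeatedly computing E on labeled subgraph samples and propagating its gradient back through layers (5)–(13).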
And S31, identifying risk behaviors in the demonstration process.
According to step 2, the teacher switches to a specific key step and enters the demonstration process for that key step.
In the demonstration process, the control software collects videos from each experimental area and analyzes and identifies all the videos frame by frame. The identification process is as follows.
And A1, after the demonstration process is started, acquiring one frame of the video, and performing sub-graph traversal on the image of the one frame.
A2, the traversal method is as follows: intercepting subgraphs of different displacements and scalings as the input of the neural network model NN and obtaining the output γ(z). When a certain output dimension γ(z) takes the maximum value, the subgraph represents an experimental raw-material image of the corresponding kind.
And A3, obtaining the positions of the different experimental materials in the image, namely the positions and the sizes of the subgraphs according to the traversal result.
A4, if the experimental materials of all kinds can not be obtained in the first frame image, continuing to collect the next frame of the video, and repeating A2 and A3 until the positions of the experimental materials of all kinds in the image are obtained.
And A5, after A1-A4 is finished, continuously acquiring the next frame of the video, and inputting the subgraphs of the corresponding positions into a neural network model NN for detection according to the positions of various experimental raw materials in the image.
A6, if the output of the model NN is not found to be a subgraph of the corresponding category, the experimental material is considered to be moved or used in the demonstration stage, and a risk warning comprising a laboratory bench number and a material type is sent to the control software for the teacher to view.
As an improvement, the risk warning is sent only after several frames have been continuously collected and the output of model NN has failed to indicate a subgraph of the corresponding category in all of them; this reduces the false-alarm rate.
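Steps A5–A6, together with the multi-frame improvement, can be sketched as follows; `detect` is a hypothetical stand-in for the NN check of a stored reagent position, returning True while the reagent is still seen there.

```python
def monitor_demo(frames, detect, positions, n_confirm=3):
    """Sketch of steps A5-A6 with multi-frame confirmation: a reagent
    position is flagged only after `detect` fails to see the reagent
    there in n_confirm consecutive frames, lowering the false-alarm
    rate. (n_confirm and the `detect` interface are assumptions.)"""
    misses = {pos: 0 for pos in positions}
    alarms = []
    for frame in frames:
        for pos in positions:
            if detect(frame, pos):
                misses[pos] = 0          # reagent seen: reset the counter
            else:
                misses[pos] += 1
                if misses[pos] == n_confirm:
                    alarms.append(pos)   # moved/used: raise risk warning
    return alarms
```

In the real system the warning would carry the laboratory bench number and raw-material kind to the control software instead of being collected in a list.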
And S32, identifying risk behaviors in the experimental process.
According to the step 2, after the demonstration process of the key step is finished, the teacher sets an experimental process for entering the key step.
During the experiment, the control software continues to collect video from each experimental area and performs monitoring according to the positions of various experimental raw materials obtained in the process A4.
B1, a frame of the video is acquired; the allowed experimental raw materials are removed according to the kinds of raw material required by the key step, and the subgraphs at the positions corresponding to the various unnecessary raw materials are input into the neural network model NN for detection.
B2, if the output of model NN no longer indicates a subgraph of the corresponding category, the raw material is considered to have been moved or used during the experiment stage, and a risk warning including the laboratory bench number and raw-material kind is sent to the control software for the teacher to view.
Similarly, as an improvement, the risk warning is sent only after several frames have been continuously collected and the output of model NN has failed to indicate a subgraph of the corresponding category in all of them; this reduces the false-alarm rate.
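Steps B1–B2 for a single frame can be sketched as follows; `detect` again stands in for the NN check of a stored reagent position, and the names are illustrative only.

```python
def monitor_experiment(frame, positions, allowed, detect):
    """Sketch of steps B1-B2: drop the reagents allowed in the current
    key step, then check each remaining (unnecessary) reagent's stored
    position with the NN stand-in `detect`; reagents no longer seen at
    their position are returned as risk warnings."""
    # B1: keep only the reagents that must NOT be used in this step
    watched = {r: p for r, p in positions.items() if r not in allowed}
    # B2: a missing reagent means it was moved or used -> warn
    return [r for r, p in watched.items() if not detect(frame, p)]
```

Combining this per-frame check with the multi-frame confirmation above yields the complete experiment-stage supervisor.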
Therefore, the risk of the experiment is evaluated according to the alarm information, and supervision of the experiment process is completed.
The invention provides a demonstration and supervision method for organic polymer material synthesis experiment teaching, which integrates the demonstration and supervision of the experiment teaching, demonstrates the key steps of the experiment, and supervises the standardization of the demonstration process and the experiment process on the use of experiment raw materials in the key steps. The test result of the method shows that the method can intelligently identify unnecessary use of different types of experimental raw materials, has high identification accuracy and high identification speed, can assist in supervising the synthesis experiment process of the organic polymer material, and improves the normalization and the safety of the synthesis experiment of the organic polymer material.
It will be appreciated by those skilled in the art that while a number of exemplary embodiments of the invention have been shown and described in detail herein, many other variations or modifications can be made, which are consistent with the principles of this invention, and which are directly determined or derived from the disclosure herein, without departing from the spirit and scope of the invention. Accordingly, the scope of the invention should be understood and interpreted to cover all such other variations or modifications.
Claims (10)
1. An evaluation method for an organic polymer material synthesis experiment is characterized in that:
dividing a complete experiment into a plurality of key steps; setting an experimental reagent to be used in each key step; playing a corresponding demonstration video to an operator in each key step, and carrying out an operation experiment by the operator according to the demonstration video; in the demonstration and experiment processes, the experiment process is supervised by a computer automatic intelligent algorithm, and risk behaviors using unnecessary reagents are identified and reported;
in the demonstration and experiment processes, a video camera collects the video of an operator and preprocesses the video;
decomposing the preprocessed video into multiple frames of images, and processing each frame of image by using a scale factor and a displacement factor;
inputting each processed frame into a neural network model for identification, thereby identifying risk behaviors during the demonstration process and during the experiment process, specifically comprising the following steps:
A1, after the demonstration process starts, collecting one frame of the video and performing a sub-image traversal on that frame;
A2, the traversal method is as follows: sub-images with different displacements and scales are cropped and used as the input of the neural network model, obtaining the output γ(z); when a given output dimension of γ(z) attains its maximum value, the sub-image is identified as an image of the corresponding type of experimental reagent;
A3, obtaining from the traversal result the positions of the different reagents in the image, i.e. the position and size of each sub-image;
A4, if not all reagent types are found in the first frame, continuing to collect subsequent frames of the video and repeating A2 and A3 until the positions of all reagent types in the image are obtained;
A5, after A1-A4 are completed, continuing to collect subsequent frames of the video and, according to the positions of the various reagents in the image, feeding the sub-images at the corresponding positions into the neural network model for detection;
A6, if the output of the neural network model no longer matches the corresponding reagent category for a sub-image, the reagent is considered to have been moved or used during the demonstration stage, and a risk warning including the laboratory bench number and raw material type is sent to the control software;
B1, after the experiment process starts, collecting one frame of the video, excluding the reagents permitted by the reagent types required in the key step, and feeding the sub-images at the positions of the reagents that should not be used into the neural network model for detection;
B2, if the output of the neural network model no longer matches the corresponding reagent category for a sub-image, the reagent is considered to have been moved or used during the experiment stage, and a risk warning including the laboratory bench number and raw material type is sent to the control software;
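The traversal in steps A1-A4 amounts to a multi-scale sliding-window search over the frame. A minimal sketch in Python, assuming a hypothetical `classify` callable standing in for the neural network output γ(z); the window size, grid step, and scale set are illustrative, not taken from the patent:

```python
def traverse_frame(frame_w, frame_h, classify, scales=(0.5, 1.0), step=32, win=64):
    """Multi-scale sliding-window search over one frame (sketch of steps A2-A3).

    classify(x, y, w, h) stands in for cropping the sub-image at (x, y) of size
    (w, h), feeding it to the network, and reading its output vector gamma(z);
    it returns (class_id, score) or (None, 0.0) when no reagent is recognised.
    Returns a dict mapping class_id -> (x, y, w, h) of the best-scoring sub-image.
    """
    found = {}
    best = {}
    for s in scales:                                   # scale factors alpha_x = alpha_y = s
        w = h = int(win * s)
        for y in range(0, frame_h - h + 1, step):      # displacement factor beta_y
            for x in range(0, frame_w - w + 1, step):  # displacement factor beta_x
                cls, score = classify(x, y, w, h)
                if cls is not None and score > best.get(cls, 0.0):
                    best[cls] = score
                    found[cls] = (x, y, w, h)
    return found
```

Step A4 then repeats this over subsequent frames until `found` contains every required reagent type.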
wherein the neural network model is composed of 1 input layer, 5 hidden layers and 1 output layer:
the input layer of the neural network model is a sub-image of one frame of the preprocessed experiment-area video, represented as follows:

$X_{NN}(x, y) = V'(n, m, w_0), \quad n = \alpha_x x + \beta_x, \quad m = \alpha_y y + \beta_y$

in the formula, $V'$ denotes the preprocessed experiment-area video; $n, m$ denote the pixel spatial coordinates within a frame and $w_0$ denotes the frame number of the video; $x, y$ denote the coordinates within the sub-image, i.e. the input-layer $X_{NN}$ coordinates; $\alpha_x, \alpha_y$ denote the scale factors of the sub-image, i.e. its scaling relative to the original video frame; $\beta_x, \beta_y$ denote the displacement factors of the sub-image, i.e. its relative position within the original frame;
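The sub-image extraction can be sketched directly: each input-layer coordinate (x, y) reads the video pixel selected by the scale and displacement factors. A minimal pure-Python illustration, with a frame represented as a nested list of pixel values (the function name and integer rounding are assumptions for illustration):

```python
def extract_subgraph(frame, alpha_x, alpha_y, beta_x, beta_y, out_w, out_h):
    """Crop/scale a sub-image from one video frame.

    Implements X_NN(x, y) = V'(alpha_x*x + beta_x, alpha_y*y + beta_y, w0),
    where `frame` is one frame V'(., ., w0) stored as rows of pixels.
    """
    return [
        [frame[int(alpha_y * y + beta_y)][int(alpha_x * x + beta_x)]
         for x in range(out_w)]
        for y in range(out_h)
    ]
```

With α > 1 the sub-image samples a sparser grid (zoomed out); β shifts the crop window across the frame.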
the 1st hidden layer of the neural network model is the following mapping of the input layer:

$\tilde{X}_1(x, y, j) = \sigma\Big(\sum_{u,v} \omega_1(u, v, j)\, X_{NN}(x+u,\, y+v) + \vartheta_1\Big)$

where $\omega_1(u, v, j)$ denotes a matrix convolution window; $u, v$ denote the spatial coordinates within the convolution window, whose size is 11 × 11; $j$ denotes the index of the convolution window, with $1 \le j \le 32$, i.e. there are 32 convolution windows; $\vartheta_1$ denotes a linear bias; $\tilde{X}_1$ denotes the 1st hidden layer, and $x, y, j$ are the node coordinates of that layer;
the σ function is defined as follows:
wherein the parameter τ controls the convergence speed of the excitation function;
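The excitation function is a sigmoid whose steepness is set by τ: larger τ drives the output toward 0 or 1 more quickly. A sketch, assuming the parameterisation σ(s) = 1/(1 + e^(−τs)), which is consistent with the description but not explicit in the text:

```python
import math

def sigma(s, tau=1.0):
    """Excitation function: a sigmoid whose slope is controlled by tau."""
    return 1.0 / (1.0 + math.exp(-tau * s))
```

At s = 0 the output is 0.5 regardless of τ; increasing τ sharpens the transition around zero.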
the 2nd hidden layer of the neural network model is the following mapping of the 1st hidden layer:

$\tilde{X}_2(x, y, j) = \max_{0 \le u, v \le 3} \tilde{X}_1(4x+u,\, 4y+v,\, j) + \vartheta_2$

where max denotes the maximum value within the window $0 \le u, v \le 3$ and $\vartheta_2$ denotes a linear bias; the formula maps the maximum of each 4 × 4 block of nodes in $\tilde{X}_1$ to one node in $\tilde{X}_2$; the spatial size of $\tilde{X}_2$ is thus reduced to 1/16 of that of $\tilde{X}_1$;
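The 4 × 4 max-pooling used here (and again in the 4th hidden layer) can be sketched in pure Python for a single channel; each output node takes the maximum of a 4 × 4 block, so the number of spatial nodes drops to 1/16. The linear bias is omitted for clarity:

```python
def max_pool_4x4(grid):
    """Map each 4x4 block of nodes to its maximum value (one channel, no bias)."""
    out_h, out_w = len(grid) // 4, len(grid[0]) // 4
    return [
        [max(grid[4 * y + u][4 * x + v] for u in range(4) for v in range(4))
         for x in range(out_w)]
        for y in range(out_h)
    ]
```

An 8 × 8 input (64 nodes) pools down to 2 × 2 (4 nodes), i.e. 1/16 of the original count.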
the 3rd hidden layer of the neural network model is the following mapping of the 2nd hidden layer:

$\tilde{X}_3(x, y, j) = \sigma\Big(\sum_{u,v} \omega_3(u, v, j)\, \tilde{X}_2(x+u,\, y+v,\, j) + \vartheta_3\Big)$

where $\omega_3(u, v, j)$ denotes a matrix convolution window; $u, v$ denote the spatial coordinates within the convolution window, whose size is 7 × 7; $j$ denotes the index of the convolution window; $\vartheta_3$ denotes a linear bias; $\tilde{X}_3$ denotes the 3rd hidden layer, and $x, y, j$ are the node coordinates of that layer;
the 4th hidden layer of the neural network model is the following mapping of the 3rd hidden layer:

$\tilde{X}_4(x, y, j) = \max_{0 \le u, v \le 3} \tilde{X}_3(4x+u,\, 4y+v,\, j) + \vartheta_4$

where max denotes the maximum value within the window $0 \le u, v \le 3$ and $\vartheta_4$ denotes a linear bias; the formula maps the maximum of each 4 × 4 block of nodes in $\tilde{X}_3$ to one node in $\tilde{X}_4$; the spatial size of $\tilde{X}_4$ is thus reduced to 1/16 of that of $\tilde{X}_3$;
the 5th hidden layer of the neural network model is the following mapping of the 4th hidden layer:

$\tilde{X}_5(t) = \sigma\Big(\sum_{j,k} \omega_5(j, k, t)\, \sum_{c=1}^{32} \tilde{X}_4(j, k, c) + \vartheta_5\Big)$

where $\tilde{X}_5$ denotes a vector and $1 \le t \le 256$ denotes the coordinate within the vector, i.e. the vector dimension is 256; $\omega_5(j, k, t)$ denotes the connection, i.e. linear relation, between the sum of the 32 nodes at the same spatial coordinate $(j, k)$ of the 4th hidden layer and the t-th element of $\tilde{X}_5$; $\vartheta_5$ denotes a linear bias;
the output layer of the neural network model is computed as:

$\pi(z) = \sum_{j} \omega_6(j, z)\, \tilde{X}_5(j) + \vartheta_6, \qquad \gamma(z) = \sigma(\pi(z))$

where γ(z) denotes the output layer of the neural network model, which is a vector, and z denotes an element of that vector; $\omega_6(j, z)$ denotes the connection of element j of the 5th hidden layer with output-layer element z; $\vartheta_6$ denotes a linear bias; and π(z) is the process vector.
2. The method of claim 1, wherein identifying risk behaviors during the demonstration process and during the experiment process comprises: collecting several consecutive frames, and sending the risk warning only when the output of the neural network model fails to match the corresponding category in each of those frames, thereby reducing the false-alarm rate.
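The multi-frame confirmation in claim 2 is a simple debounce: the warning fires only when the mismatch persists across N consecutive frames, so a single misclassified frame does not trigger an alarm. A sketch under that reading; the class name and frame count are illustrative:

```python
class MismatchDebouncer:
    """Raise a risk warning only after n_frames consecutive mismatches."""

    def __init__(self, n_frames=5):
        self.n_frames = n_frames
        self.streak = 0  # current run of consecutive mismatched frames

    def update(self, matched):
        """Feed one per-frame detection result; return True when a warning should fire."""
        if matched:
            self.streak = 0      # reagent still detected in place: reset the run
            return False
        self.streak += 1
        return self.streak >= self.n_frames
```

One instance would be kept per tracked reagent position, updated once per collected frame.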
3. The method of claim 1, wherein: after the demonstration process of a key step ends, entry into the experiment process of that key step is controlled manually.
4. The method of claim 1, wherein: the evaluation method is carried out in multiple experiment areas simultaneously, each experiment area comprising a camera, a display, a laboratory bench and reagents.
5. The method of claim 4, wherein: a display is arranged in each experiment area for playing the demonstration video of the course experiment.
6. The method of claim 4, wherein: the camera is used for collecting behavior videos of an operator in the experiment process.
7. The method of claim 4, further comprising a control computer for implementing 1) switching of key steps, 2) switching between the demonstration process and the experiment process within a key step, and 3) monitoring of risk behaviors.
8. The method of claim 7, wherein: when a key step is switched to, the demonstration process of that key step is entered; after the demonstration ends, control switches to the experiment process of that key step.
9. A supervision system for an organic polymer material synthesis experiment is characterized in that: comprising a camera, a display and a control computer for implementing the evaluation method according to any one of claims 1 to 8.
10. The system of claim 9, wherein: the control computer stores a pre-trained neural network model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210789293.0A CN114863734B (en) | 2022-07-06 | 2022-07-06 | Evaluation method for organic polymer material synthesis experiment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114863734A CN114863734A (en) | 2022-08-05 |
CN114863734B true CN114863734B (en) | 2022-09-30 |
Family
ID=82626201
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210789293.0A Active CN114863734B (en) | 2022-07-06 | 2022-07-06 | Evaluation method for organic polymer material synthesis experiment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114863734B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107507114A (en) * | 2017-09-08 | 2017-12-22 | 赵宇航 | A kind of Internet of Things teaching platform method of controlling security and device |
CN111915460A (en) * | 2020-05-07 | 2020-11-10 | 同济大学 | AI vision-based intelligent scoring system for experimental examination |
CN114640752A (en) * | 2022-03-28 | 2022-06-17 | 杭州海康威视系统技术有限公司 | Auxiliary method and device for experimental learning |
CN114663834A (en) * | 2022-03-22 | 2022-06-24 | 天目爱视(北京)科技有限公司 | Express storage site monitoring method |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3923183A1 (en) * | 2020-06-11 | 2021-12-15 | Tata Consultancy Services Limited | Method and system for video analysis |
CN114005054A (en) * | 2021-10-09 | 2022-02-01 | 上海锡鼎智能科技有限公司 | AI intelligence system of grading |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113537106B (en) | Fish ingestion behavior identification method based on YOLOv5 | |
US11468538B2 (en) | Segmentation and prediction of low-level temporal plume patterns | |
CN109886130B (en) | Target object determination method and device, storage medium and processor | |
KR102189262B1 (en) | Apparatus and method for collecting traffic information using edge computing | |
CN111881730A (en) | Wearing detection method for on-site safety helmet of thermal power plant | |
KR102199094B1 (en) | Method and Apparatus for Learning Region of Interest for Detecting Object of Interest | |
CN106034202A (en) | Adjusting method and adjusting device for video splicing camera | |
CN113435282A (en) | Unmanned aerial vehicle image ear recognition method based on deep learning | |
WO2021041422A1 (en) | Ai-powered autonomous 3d printer | |
CA3201908A1 (en) | Plant detection and display system | |
CN114140745A (en) | Method, system, device and medium for detecting personnel attributes of construction site | |
CN116229560A (en) | Abnormal behavior recognition method and system based on human body posture | |
CN114863734B (en) | Evaluation method for organic polymer material synthesis experiment | |
JP2009123150A (en) | Object detection apparatus and method, object detection system and program | |
CN116403162B (en) | Airport scene target behavior recognition method and system and electronic equipment | |
Awalgaonkar et al. | DEEVA: a deep learning and IoT based computer vision system to address safety and security of production sites in energy industry | |
CN110111332A (en) | Collagent casing for sausages defects detection model, detection method and system based on depth convolutional neural networks | |
CN108388845A (en) | Method for checking object and system | |
EP3819817A1 (en) | A method and system of evaluating the valid analysis region of a specific scene | |
US20210142481A1 (en) | Method and System of Evaluating the Valid Analysis Region of a Specific Scene | |
CN112651330B (en) | Target object behavior detection method and device and computer equipment | |
CN113420724B (en) | Unmanned aerial vehicle remote sensing river inlet and outlet recognition method and device | |
CN114842554B (en) | Group monkey action recognition method based on local and global space-time characteristics | |
WO2023007535A1 (en) | Sewage pipe interior abnormality diagnosis assistance system, client machine and server machine for sewage pipe interior abnormality diagnosis assistance system, and related method | |
CN113139984B (en) | Long-time unmanned aerial vehicle target tracking method and system integrating detection and tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||