CN116532046A - Microfluidic automatic feeding device and method for spirofluorene xanthene


Info

Publication number
CN116532046A
CN116532046A
Authority
CN (China)
Prior art keywords
module
chemical
medicine
image
layer
Prior art date
Legal status
Granted
Application number
CN202310815250.XA
Other languages
Chinese (zh)
Other versions
CN116532046B (en)
Inventor
朱世远 (Zhu Shiyuan)
成孝刚 (Cheng Xiaogang)
宋丽敏 (Song Limin)
王晨昕 (Wang Chenxin)
解令海 (Xie Linghai)
Current Assignee
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN202310815250.XA
Publication of CN116532046A
Application granted
Publication of CN116532046B
Legal status: Active
Anticipated expiration

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
      • B01: PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL
        • B01J: CHEMICAL OR PHYSICAL PROCESSES, e.g. CATALYSIS OR COLLOID CHEMISTRY; THEIR RELEVANT APPARATUS
          • B01J4/00: Feed or outlet devices; feed or outlet control devices
            • B01J4/001: Feed or outlet devices as such, e.g. feeding tubes
              • B01J4/007: Feed or outlet devices as such, e.g. feeding tubes, provided with moving parts
          • B01J2204/00: Aspects relating to feed or outlet devices; regulating devices for feed or outlet devices
            • B01J2204/002: Aspects where the feeding side is of particular interest
        • B01L: CHEMICAL OR PHYSICAL LABORATORY APPARATUS FOR GENERAL USE
          • B01L9/00: Supporting devices; holding devices
            • B01L9/50: Clamping means, tongs
    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00: Computing arrangements based on biological models
            • G06N3/02: Neural networks
              • G06N3/04: Architecture, e.g. interconnection topology
                • G06N3/0464: Convolutional networks [CNN, ConvNet]
              • G06N3/08: Learning methods
        • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V10/00: Arrangements for image or video recognition or understanding
            • G06V10/70: Arrangements using pattern recognition or machine learning
              • G06V10/766: Arrangements using regression, e.g. by projecting features on hyperplanes
              • G06V10/82: Arrangements using neural networks
          • G06V20/00: Scenes; scene-specific elements
            • G06V20/60: Type of objects
              • G06V20/64: Three-dimensional objects
          • G06V30/00: Character recognition; recognising digital ink; document-oriented image-based pattern recognition
            • G06V30/10: Character recognition
              • G06V30/14: Image acquisition
                • G06V30/148: Segmentation of character regions
                  • G06V30/153: Segmentation of character regions using recognition of characters or words
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
          • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
            • Y02A40/80: Adaptation technologies in fisheries management
              • Y02A40/81: Aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Chemical & Material Sciences (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Chemical Kinetics & Catalysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Organic Chemistry (AREA)
  • Clinical Laboratory Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a microfluidic automatic feeding device and method for spirofluorene xanthene, belonging to the intersecting fields of computer vision, robotics and organic synthesis. The device comprises a micro-reactor platform, a robot module, a vision auxiliary system module and a control module. The vision auxiliary system module comprises a target detection module, a chemical tag identification module and a material type identification module in the chemical vessel. The target detection module detects the target reaction bottle and the target medicine; the chemical label identification module recognizes the medicine label image of the target medicine and confirms whether it is the required target medicine; the material type identification module in the chemical vessel monitors the material changes in the reaction bottle and determines whether feeding is complete. The invention realizes automatic feeding for the continuous flow synthesis of spirofluorene xanthene.

Description

Microfluidic automatic feeding device and method for spirofluorene xanthene
Technical Field
The invention relates to the fields of computer vision, robotics and organic synthesis, and in particular to a microfluidic automatic feeding device and method for spirofluorene xanthene (SFX).
Background
At present, the one-pot method is the main laboratory route to spirofluorene xanthene, but it still has defects that limit the application of spirofluorene xanthene, such as long reaction times, high reaction temperatures and uneven mixing. Over the past twenty years, continuous flow technology has developed into a safe, green, economical and efficient means of synthesis, and its application to the synthesis and preparation of spirofluorene xanthene has made the preparation more efficient and greener.
However, basic laboratory work such as chemical synthesis is still mainly manual. Among the unit operations of a chemical experiment, the first is the dosing of medicines and reagents, which involves a great deal of repetitive and tedious labor, such as the repeated actions of fetching a medicine or container and adding a reagent or solid medicine.
Disclosure of Invention
The invention aims to provide a microfluidic automatic feeding device and method for spirofluorene xanthene, to address the problem that the feeding of medicine reagents in spirofluorene xanthene synthesis experiments is still performed mainly by hand and demands a great deal of repetitive and tedious labor.
In order to achieve the above purpose, the invention adopts the following technical scheme:
in a first aspect, the invention provides a microfluidic automatic feeding device for spirofluorene xanthene, comprising: a micro-reactor platform, a robot module, a vision auxiliary system module and a control module,
the micro-reactor platform comprises a medicine area, a chemical vessel area and a reaction area, wherein the reaction area is used for placing a reaction bottle in which the chemical reaction is carried out, and a second high-definition camera is arranged at a fixed position opposite the reaction area and used for collecting images of the reaction bottle in the reaction area;
the robot module is arranged on the micro-reactor platform and comprises a mechanical arm, and a first high-definition camera is arranged at the tail end of the mechanical arm and used for collecting images of a chemical vessel area, images of a medicine area and medicine label images of target medicines in the medicine area;
the vision auxiliary system module comprises a target detection module, a chemical tag identification module and a material type identification module in the chemical vessel, wherein the target detection module is used for detecting the image of the chemical vessel area acquired by the first high-definition camera to identify the target reaction bottle, and for detecting the image of the medicine area acquired by the first high-definition camera to identify the target medicine; the chemical label identification module is used for recognizing the text information of the medicine label image of the target medicine acquired by the first high-definition camera and confirming whether it is the required target medicine; the material type identification module in the chemical vessel is used for recognizing the image of the reaction bottle in the reaction area acquired by the second high-definition camera, monitoring the material changes in the reaction bottle, and determining whether feeding is complete;
the control module is used for controlling the mechanical arm, the first high-definition camera, the second high-definition camera and the vision auxiliary system module to execute preset actions according to a preset automatic feeding program so as to complete automatic feeding.
Further, the target detection module adopts an SSL-YOLO network model, where the SSL-YOLO network model comprises a Backbone network, an SSL-Neck network and a Head network; the SSL-YOLO network model is obtained by replacing the original Neck layer of YOLOv5s with the SSL-Neck network, and the SSL-Neck network connects to four detection heads of different scales in the Head network through four channels, specifically:
two groups of feature channels are fused in the first channel: in one group, the feature map output by the fourth C3 module in the Backbone network is fed to the first SDConv module in the SSL-Neck network, and the first SDConv module is sequentially connected with a first upsampling module, a first Concat module, a first SDCSP module, a second SDConv module, a second upsampling module, a second Concat module and a second SDCSP module; in the other group, the output of the second C3 module in the Backbone network is fed to the second SDCSP module through the second Concat module; the two groups of feature channels are then connected to Head-1 in the Head network sequentially through a third SDConv module, a third Concat module and a third SDCSP module;
two groups of feature channels are fused in the second channel: one group feeds the output of the third SDCSP module in the first channel directly to a fourth SDConv module; the other group feeds the output of the third C3 module of the Backbone network to the first Concat module in the first channel, which is sequentially connected with the first SDCSP module and the second SDConv module; the two groups of feature channels are connected to Head-2 in the Head network sequentially through a fourth Concat module and a fourth SDCSP module;
two groups of feature channels are fused in the third channel: one group feeds the output of the fourth SDCSP module in the second channel directly to a fifth SDConv module; the other group feeds the output of the fourth C3 module of the Backbone network to the first SDConv module in the first channel; the two groups of feature channels are connected to Head-3 in the Head network sequentially through a fifth Concat module and a fifth SDCSP module;
two groups of feature channels are fused in the fourth channel: in one group, the output of the third SDConv module of the first channel is further connected to a third upsampling module, which is connected to a sixth SDCSP module through a sixth Concat module; in the other group, the output of the first C3 module of the Backbone network is connected directly to the sixth SDCSP module through the sixth Concat module; the sixth SDCSP module is connected to Head-4 in the Head network.
Further, each SDCSP module operates as follows: the feature map is divided into two parts; one part is input to a first convolution layer for convolution, and the output of the first convolution layer is fed sequentially through a sixth SDConv module and a seventh SDConv module to extract bottom-level and mid-level features respectively; the other part is input to a third convolution layer for convolution and then fed to the first Concat layer; the output of the first convolution layer, after convolution by a fourth convolution layer, is added to the feature map output by the seventh SDConv module and fed to the first Concat layer for feature fusion with the feature map supplied by the third convolution layer; the fused result passes through a second convolution layer to produce the output feature map;
each SDConv module comprises a channel shuffle layer, a fifth convolution layer, a DSConv layer and a second Concat layer; the channel shuffle layer groups the input feature maps and rearranges channels across groups before they enter the fifth convolution layer for convolution, which leaves the channel count and size of each group unchanged; each group of feature maps output by the fifth convolution layer enters the DSConv layer, which performs depthwise convolution with 3×3 kernels on each channel of each group and then pointwise convolution with 1×1 kernels to map the channel count of each group to the required output channel count; the second Concat layer splices the input feature maps with the output feature maps of the DSConv layer.
Further, the target detection module is obtained by training the SSL-YOLO network model with a pre-constructed chemical article identification dataset, which is constructed as follows:
collecting images of environmental samples, various types of chemical vessels, chemicals and chemical labels in a chemical laboratory, together with web images of chemical vessels;
classifying the collected image data into 7 classes, namely Erlenmeyer flasks, beakers, flasks, chemical labels, sample bottles, medicines and other containers, and labeling each image;
and performing data expansion on the labeled image data to obtain the chemical article identification dataset.
Further, the chemical tag identification module is composed of a CTPN model and an improved CRNN network model, wherein the improved CRNN network model is obtained by replacing the original convolutional layer structure of CRNN with a MobileNetV3 module.
Further, the material type identification module in the chemical vessel adopts an FCN-GES Net model, which consists of an FCN model, a first GES Net model and a second GES Net model; the FCN model performs semantic segmentation of all chemical vessel regions in an input image; the first GES Net model divides the chemical vessel regions found by the FCN model into separate chemical vessel instance images; the second GES Net model performs instance segmentation of chemical material types on the chemical vessel instance images output by the first GES Net model to obtain the specific material phases.
Further, the first GES Net model comprises a plurality of generators, each of which segments the chemical vessel regions found by the FCN model to generate a chemical vessel image segmentation result; each generator is connected to an evaluator that assesses the quality of the segmentation result the generator produces, and all evaluators are connected to a selector that selects the optimal segmentation result according to the evaluators' assessments.
Further, the material type identification module in the chemical vessel is obtained by training the FCN-GES Net model with a pre-constructed dataset; the training dataset comprises chemical vessel images containing solid and liquid materials selected from the chemical article identification dataset, together with the Vector-LabPics dataset, whose images are drawn from chemistry websites and biochemical laboratories and contain containers and the material types within them.
In a second aspect, the present invention provides a microfluidic automatic feeding method for spirofluorene xanthene, implemented by using the microfluidic automatic feeding device of the first aspect, the method comprising:
weighing 9-fluorenone, phenol, methanesulfonic acid and 1,2-dichlorobenzene in the reaction proportions, placing the weighed 9-fluorenone, phenol and methanesulfonic acid in matched containers, dividing the 1,2-dichlorobenzene into two equal-volume portions placed in two matched containers, attaching a label to each container, and placing the containers in the medicine area;
Acquiring an image of the chemical vessel area through the first high-definition camera on the mechanical arm; detecting the acquired image of the chemical vessel area through the target detection module to identify target reaction bottles A and B; and controlling the mechanical arm to grab reaction bottle A and reaction bottle B in turn and place each in the reaction area;
acquiring an image of the medicine area through the first high-definition camera on the mechanical arm; detecting the acquired image of the medicine area through the target detection module to identify a target medicine; controlling the mechanical arm to move close to the identified target medicine and acquiring a medicine label image of the target medicine through the first high-definition camera; performing text recognition on the acquired medicine label image through the chemical label identification module to confirm whether it is the required target medicine; and, following the set confirmation flow for each target medicine, controlling the mechanical arm to sequentially grasp the target medicines 9-fluorenone, phenol and 1,2-dichlorobenzene and put them into reaction bottle A, and to sequentially grasp the target medicines methanesulfonic acid and 1,2-dichlorobenzene and put them into reaction bottle B;
in the feeding process, acquiring images of reaction bottle A and reaction bottle B in real time with the second high-definition camera; recognizing the acquired images through the material type identification module in the chemical vessel, monitoring the material changes in the reaction bottles, and determining whether feeding is complete.
Further, monitoring the material changes in the reaction bottle to determine whether feeding is complete includes:
determining the distribution of the various material types in the reaction bottle from the image segmentation results of the material types, and monitoring whether the reaction bottle is being filled with chemicals;
monitoring whether the filling amount of material in the reaction bottle exceeds a preset safety detection line, and issuing an alarm if it does;
and monitoring whether the filling amount of a given batch of medicine exceeds its preset feeding completion line, and if so, judging that feeding of that batch is complete.
Compared with the prior art, the invention has the following beneficial technical effects:
according to the invention, high-precision identification of chemical elements is realized through the target detection module, identification of chemical tags and handwriting tags is realized through the chemical tag identification module, and identification of chemical vessels and material types in the chemical vessels is realized through the material type identification module in the chemical vessels, so that whether a feeding process is safe and successful is detected in an auxiliary manner, thereby completing automatic feeding of continuous flow synthesis of spirofluorene xanthene, and solving the problems that a great deal of repetition and complicated labor are required for feeding of a chemical synthesis experiment of spirofluorene xanthene; the invention provides an automatic feeding visual auxiliary scheme for the continuous flow synthesis experiment of spirofluorene xanthene, is beneficial to further realizing the automation of a chemical laboratory, and provides a research foundation for carrying out the robot chemical experiment by utilizing intelligent vision.
Drawings
FIG. 1 is a schematic view of a microreactor platform and robotic module of the present invention;
FIG. 2 is a schematic diagram of SSL-YOLO network model structure;
FIG. 3 is a schematic diagram of the SDConv module structure;
FIG. 4 is a schematic diagram of a SDCSP module architecture;
FIG. 5 is an exemplary diagram of a chemical identification dataset;
FIG. 6 is an exemplary diagram of a data expansion scheme;
FIG. 7 is a graph of chemical type identification effect;
FIG. 8 is a schematic diagram of a modified CRNN network architecture;
FIG. 9 is a graph of chemical label text recognition effects;
FIG. 10 is a schematic diagram of the FCN model;
FIG. 11 is a schematic diagram of a first GES Net model and a second GES Net model;
FIG. 12 is a schematic diagram of a generator;
FIG. 13 is a flow chart for material type identification within a chemical vessel;
FIG. 14 is a graph of image segmentation effects for different material types within different chemical vessels;
FIG. 15 is a flow chart of the microfluidic automatic feeding method for spirofluorene xanthene according to the present invention;
FIG. 16 is a graph showing the segmentation effect on the dynamically changing material images in the chemical vessel during feeding.
Reference numerals: 1, microreactor platform; 2, mechanical arm; 3, first high-definition camera; 4, second high-definition camera; 5, chemical vessel rack; 6, replaceable actuator.
Description of the embodiments
The invention is further described below in connection with specific embodiments. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
Example 1
Referring to fig. 1, a microfluidic automatic feeding device for spirofluorene xanthene comprises: a microreactor platform 1, a robot module, a vision assistance system module, and a control module.
The micro-reactor platform 1 uses a Corning microreactor as the experimental platform equipment base, and the platform design must suit automated operation. The microreactor platform 1 comprises a medicine area, a chemical vessel area and a reaction area. The medicine area holds the chemicals to be used; the chemical vessel area is provided with a chemical vessel rack 5 holding various types of chemical vessels, such as Erlenmeyer flasks, reaction bottles and beakers; the reaction area holds the reaction bottles in which the chemical reactions are carried out.
The robot module is arranged on the micro-reactor platform 1 and comprises a mechanical arm 2 whose end carries a replaceable actuator 6, such as a clamping jaw, for grasping chemical vessels, sample bottles filled with medicine, and the like. Different jaws can be fitted for different purposes, so that grasping actions are completed accurately.
The mechanical arm 2 may be a six-axis mechanical arm.
The end of the mechanical arm 2 is further provided with a first high-definition camera 3 (e.g., a monocular camera) for capturing images of the chemical vessel area, images of the medicine area, and medicine label images of target medicines in the medicine area.
A second high-definition camera 4 is arranged on the micro-reactor platform 1 at a fixed position opposite the reaction area and collects images of the reaction bottles in the reaction area.
The vision assistance system module comprises a target detection module, a chemical label identification module and a material type identification module in the chemical vessel.
The target detection module is used for detecting the image of the chemical vessel area acquired by the first high-definition camera, identifying the target reaction bottle, detecting the image of the medicine area acquired by the first high-definition camera and identifying the target medicine.
The chemical label recognition module is used for recognizing text information of the medicine label image of the target medicine acquired by the first high-definition camera and confirming whether the medicine label image is the required target medicine.
The material type identification module in the chemical vessel is used for identifying the image of the reaction bottle in the reaction zone acquired by the second high-definition camera, monitoring the material change in the reaction bottle and determining whether the feeding is finished.
And the control module is used for controlling the mechanical arm, the first high-definition camera, the second high-definition camera and the vision auxiliary system module to execute preset actions according to a pre-designed automatic feeding program so as to complete automatic feeding.
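Before detailing each module, the overall feeding program can be summarized as a short control sketch. The following Python pseudocode is a minimal, hypothetical rendering of the sequence the control module executes; every interface name (arm, cam1, cam2, detector, label_reader, monitor, recipe) is an assumed placeholder, not the patent's actual software API.

    def auto_feed(arm, cam1, cam2, detector, label_reader, monitor, recipe):
        """Hypothetical orchestration of the pre-designed feeding program.
        All interfaces used here are assumed placeholders."""
        # Stage the two reaction bottles found in the chemical vessel area.
        bottles = detector.find(cam1.capture("vessel_area"), ["bottle_A", "bottle_B"])
        for bottle in bottles:
            arm.pick_and_place(bottle, zone="reaction")
        # Fetch each medicine in recipe order, verifying its label first.
        for medicine, bottle in recipe:
            target = detector.find(cam1.capture("medicine_area"), [medicine])[0]
            arm.approach(target)
            if label_reader.read(cam1.capture("label")) != medicine:
                raise RuntimeError(f"label mismatch: expected {medicine}")
            arm.transfer(target, bottle)
            # Watch the bottle with the fixed camera until this charge is done.
            while not monitor.charge_done(cam2.capture(bottle)):
                monitor.check_safety(cam2.capture(bottle))  # alarm above the safety line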
The target detection module is built on an SSL-YOLO (Small-Slim-YOLO) network model and is obtained by training it. As shown in fig. 2, the SSL-YOLO network model consists of a Backbone network, an SSL-Neck network and a Head network, and is obtained by replacing the original Neck layer of YOLOv5s with the SSL-Neck network.
The Backbone network uses convolution layer separation (channel-wise and spatial-wise) and residual connections, improving running speed while preserving accuracy. As shown in fig. 2, the original 608×608×3 image is input to the Focus module, where slicing first turns it into a 304×304×32 feature map. The Focus module performs preliminary processing of the input image; it comprises two convolution layers and a pooling layer, reduces the resolution of the input image while enhancing the expressiveness of the image features, and has 3 input channels and 32 output channels. The first convolution operation then reduces the resolution of the Focus output to 152×152 and raises the channel count to 64. The feature map after this first convolution is input to the first C3 module (C3_1); a C3 module consists of a convolution layer, a batch-normalization layer and a LeakyReLU activation, which improves the expressiveness of the network while reducing its parameter count. The first C3 module's input and output sizes are both 152×152. A second convolution follows, giving an output of size 76×76 with 128 channels, which enters the second C3 module (C3_2) with size and channel count unchanged. After the third convolution the size becomes 38×38, and the map enters the third C3 module (C3_3), still 38×38; after the fourth convolution the size becomes 19×19, which connects to the SPP (Spatial Pyramid Pooling) structure. SPP extracts features at different scales: it pools the input feature map at several sizes and merges the pooling results into feature vectors of a fixed dimension, retaining multi-scale information and thereby improving detection performance. Finally, the 19×19 feature map is passed to the fourth C3 module (C3_4), whose output size is unchanged.
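For reference, a minimal PyTorch sketch of the slicing step assumed here follows; it shows the standard YOLOv5-style Focus operation (four-phase pixel sampling stacked along channels, then a convolution to 32 channels). The patent's Focus variant additionally includes pooling, which is omitted here, and the 3×3 kernel is an assumption.

    import torch
    import torch.nn as nn

    class Focus(nn.Module):
        """Sketch of YOLOv5-style Focus slicing: a (3, 608, 608) input becomes
        (12, 304, 304) by four-phase sampling, then is projected to 32 channels."""
        def __init__(self, c_in=3, c_out=32):
            super().__init__()
            self.conv = nn.Conv2d(4 * c_in, c_out, kernel_size=3, padding=1)

        def forward(self, x):
            sliced = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                                x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
            return self.conv(sliced)

    # Focus()(torch.randn(1, 3, 608, 608)).shape -> torch.Size([1, 32, 304, 304])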
The SSL-Neck network mainly fuses the feature maps of different scales extracted by the Backbone network into feature maps rich in semantic information for the target detection and classification tasks.
To detect small-target chemical vessels such as sample bottles, a small-target detection layer is added on the basis of the 3 detection heads output by the Neck layer of the original YOLOv5 framework, giving the model small-target detection capability.
As shown in fig. 2, the SSL-Neck network connects to four detection heads (Heads) of different scales in the Head network through four channels, specifically:
the method comprises the steps that two groups of characteristic channels are fused in a first channel, firstly, a characteristic map with the channel number 512 and the size 19 x 19 output by a fourth C3 module in a back network is introduced into a first SDConv (separable channel depth convolution, split-channel Depthwise Convolution) module (SDConv 1) in an SSL-Neck network, the size of the output characteristic map is changed to 38 x 38 through a first upsampling module (Upsample1), the channel number is changed to 256, the output characteristic map is connected to a first SDCSP module (SDCSP 1) through a first Concat module (Concat 1), the size of the input and output characteristic map is 38 x 38, the input and output characteristic map is continuously input to a second SDConv module (SDConv 2), the characteristic map size is unchanged, the channel number is changed to 128, the characteristic map size is changed to 76 x 76 through a second upsampling module (Uppsample 2), the output characteristic map is connected to a second SDCSP module (SDCSP 2) through a second Concat 2), and the input characteristic map size is still 76. Secondly, a feature map with the channel number of 128 and the size of 76 x 76 output by a second C3 module (C3_2) of the backhaul network is accessed into a second SDCSP module of the SSL-Neck network by a second Concat module, and at the moment, the size of the input and output feature map is 76 x 76; both feature channels are connected with the Head-1 in the Head network through a third SDConv module (SDConv 3), a third Concat module (Concat 3) and a third SDCSP module (SDCSP 3) in sequence.
Two groups of feature channels are fused in the second channel. In one group, the third SDCSP module in the first channel connects directly to the fourth SDConv module (SDConv4), and the feature map size becomes 38×38. In the other group, the third C3 module (C3_3) of the Backbone network connects through the first Concat module in the first channel to the first SDCSP module, where the feature map size is likewise 38×38; the output feature map keeps its size through the second SDConv module while the channel count becomes 128. The two groups of feature channels connect to Head-2 in the Head network sequentially through the fourth Concat module (Concat4) and the fourth SDCSP module (SDCSP4).
Two groups of feature channels are fused in the third channel. In one group, the fourth SDCSP module in the second channel connects directly to the fifth SDConv module (SDConv5), and the feature map size changes from 38×38 to 19×19. In the other group, the fourth C3 module of the Backbone network connects to the first SDConv module in the first channel, with a feature map size of 19×19. The two groups of feature channels connect to Head-3 in the Head network sequentially through the fifth Concat module (Concat5) and the fifth SDCSP module (SDCSP5).
The fourth channel is the feature channel added for small-target detection, and two groups of feature channels are fused in it. In one group, a new third upsampling module (Upsample3) follows the third SDConv module in the first channel, changing the feature map size to 152×152, and then connects to the sixth SDCSP module (SDCSP6) through the sixth Concat module (Concat6). In the other group, the first C3 module (C3_1) of the Backbone network connects directly to the sixth SDCSP module through the sixth Concat module; the result connects to Head-4 in the Head network.
The small-target detection layer captures smaller targets by adding a convolution layer with a small convolution kernel. This structure fuses the low-level features of the Backbone after the Focus module and the first convolution with the high-level features of the SSL-Neck layer to strengthen small-target features; depthwise convolution further enlarges the receptive field of the framework, while the added small-kernel convolution layer captures smaller targets, finally generating a fourth detection head responsible for feature extraction and detection on the smallest-size feature matrix.
The Head network connected to the SSL-Neck network has four detection heads of different scales, each detecting targets of a different size range: the output size of Head-1 is 75×76×76, Head-2 is 75×38×38, Head-3 is 75×19×19, and Head-4 is 75×152×152.
As shown in fig. 3, the SDConv module comprises a channel shuffle layer (Channel Shuffle), a fifth convolution layer, a DSConv (depthwise separable convolution) layer and a second Concat layer. The channel shuffle layer groups the input feature maps and rearranges channels across groups before they enter the fifth convolution layer for convolution; the channel count and size of each group output by the fifth convolution layer are unchanged. Each group of feature maps output by the fifth convolution layer enters the DSConv layer for depthwise convolution and channel mapping, and finally the input feature maps and the DSConv output feature maps are spliced by the second Concat layer to retain richer feature representations.
The channel shuffle layer introduces inter-channel information exchange and cross-channel feature recombination. Following ShuffleNet, the input feature maps are grouped: when the channel count is large, the input can be divided into 4 or more groups; when it is small, into 2 groups. The channels between the different groups are then rearranged, promoting information transfer and feature diversity.
More specifically, the input to the SDConv module is a three-dimensional tensor of size (C_in, H_in, W_in), where C_in is the channel count of the input feature map, H_in its height and W_in its width. The input feature maps are first grouped by the channel shuffle layer, the channels between different groups are rearranged, and the result enters the fifth convolution layer for convolution, leaving channel count and feature map size unchanged. The DSConv layer then performs depthwise convolution on each channel of each group, processing each channel independently with a 3×3 kernel; the channel count of the output feature map changes while its size does not, yielding a feature map of size (C_out, H_in, W_in), where C_out is the output channel count of the depthwise separable convolution. The DSConv layer next applies pointwise convolution with 1×1 kernels to map the channel count C_1 of each group of feature maps to the required output channel count C_2. Finally, the second Concat layer splices each group's input feature map with the DSConv output, producing feature map data of size (C_in + C_out, H_in, W_in).
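A minimal PyTorch sketch of the SDConv block as described above follows. The grouping factor and the kernel size of the fifth convolution layer are not specified in the patent and are assumptions here.

    import torch
    import torch.nn as nn

    def channel_shuffle(x, groups):
        # Rearrange channels across groups: the shape (N, C, H, W) is kept,
        # but channels from different groups are interleaved.
        n, c, h, w = x.shape
        x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
        return x.view(n, c, h, w)

    class SDConv(nn.Module):
        """Sketch of the SDConv block: channel shuffle -> convolution
        (channels/size unchanged) -> 3x3 depthwise + 1x1 pointwise
        convolution -> concat with the block input (C_in + C_out channels)."""
        def __init__(self, c_in, c_out, groups=2):
            super().__init__()
            self.groups = groups
            self.conv = nn.Conv2d(c_in, c_in, 3, padding=1)                    # size/channels unchanged
            self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)  # 3x3 per channel
            self.pointwise = nn.Conv2d(c_in, c_out, 1)                         # 1x1 channel mapping

        def forward(self, x):
            y = channel_shuffle(x, self.groups)
            y = self.conv(y)
            y = self.pointwise(self.depthwise(y))
            return torch.cat([x, y], dim=1)   # (N, C_in + C_out, H, W)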
Typically, feature maps undergo a conversion in the Backbone in which spatial information is gradually transferred into the channels, and each spatial (width and height) compression with channel expansion causes a partial loss of semantic information. Through dense convolution computation, the SDConv module helps to offset this partial loss of semantic information caused by spatial compression and channel expansion.
The SDCSP module is built on the SDConv module. As shown in fig. 4, in the SDCSP module the feature map is divided into two parts: one part is input to the first convolution layer for convolution, and its output passes sequentially through the sixth and seventh SDConv modules to extract bottom-level and mid-level features respectively; the other part is input to the third convolution layer for convolution and then fed to the first Concat layer. Meanwhile, the output of the first convolution layer, after convolution by the fourth convolution layer, is added to the feature map output by the seventh SDConv module and fed to the first Concat layer for feature fusion with the feature map supplied by the third convolution layer; the fused result passes through the second convolution layer to give the output feature map.
The SDCSP module splits the feature map into two parts. Two SDConv modules are aggregated in a trunk path responsible for extracting bottom- and mid-level feature representations, while a cross-stage connection introduces high-level semantic information; the cross-stage connection uses a 1×1 convolution layer to reduce the channel count of the feature map and thus the computation, and is the key part linking the trunk path and the skip connection. Fusing low- and high-level features improves the expressiveness of the model. The SDCSP module reduces both computational and structural complexity while maintaining sufficient accuracy.
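The following PyTorch sketch renders the SDCSP structure described above, reusing the SDConv class from the previous sketch; the channel widths of the internal convolution layers are assumptions chosen so that the tensor shapes match.

    import torch
    import torch.nn as nn
    # (uses the SDConv class from the previous sketch)

    class SDCSP(nn.Module):
        """Sketch of the SDCSP block: the input is split in two; a trunk path
        (conv1 -> two SDConv modules, plus a conv4 residual) extracts low/mid
        features, a cross-stage 1x1 conv carries the other half, and the two
        are concatenated and projected by conv2."""
        def __init__(self, c_in, c_out):
            super().__init__()
            c = c_in // 2
            self.conv1 = nn.Conv2d(c, c, 1)            # trunk entry
            self.sd6 = SDConv(c, c)                    # -> 2c, bottom-level features
            self.sd7 = SDConv(2 * c, 2 * c)            # -> 4c, mid-level features
            self.conv4 = nn.Conv2d(c, 4 * c, 1)        # residual branch matched to sd7 output
            self.conv3 = nn.Conv2d(c, c, 1)            # cross-stage (skip) path
            self.conv2 = nn.Conv2d(5 * c, c_out, 1)    # fuse and project

        def forward(self, x):
            a, b = x.chunk(2, dim=1)                   # split the feature map in two
            t = self.conv1(a)
            trunk = self.sd7(self.sd6(t)) + self.conv4(t)   # add the residual branch
            return self.conv2(torch.cat([trunk, self.conv3(b)], dim=1))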
After the SSL-YOLO network model is built, it needs to be trained using the dataset.
Chemical objects come in a wide variety, including beakers, sample bottles, flasks and so on. Since no dataset for identifying chemical object types has been published to date, the invention constructs a chemical article identification dataset (CViG) for model training; part of its picture data is shown in fig. 5.
The dataset consists of two parts: the main part was photographed with a mobile phone in a synthesis laboratory and covers environmental samples, various types of chemical vessels, chemicals, chemical labels and other elements of a chemical laboratory; the remaining image data comes from web images of chemical vessels.
After image acquisition, the dataset was annotated with the LabelImg toolkit and the chemical objects required by the experiments were classified; the objects to be detected were given 7 class labels, namely container (vessel), Erlenmeyer flask, beaker, flask, chemical label (label), sample bottle and medicine (drug). Here, container (vessel) refers to chemical vessels other than Erlenmeyer flasks, beakers, flasks and sample bottles; chemical label (label) refers to a label attached to a chemical vessel carrying text such as the medicine name; sample bottle refers to a chemical vessel marked with a chemical label and holding a medicine required for the chemical reaction.
After annotation, to enlarge the dataset, data enhancement and expansion were applied, including random cropping, random rotation, horizontal mirroring, vertical mirroring and noise addition, as shown in fig. 6, finally yielding a CViG dataset of 4972 pictures.
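As an illustration, a torchvision pipeline covering the five listed expansion operations might look as follows; the crop scale, rotation angle and noise amplitude are assumptions, since the patent does not disclose them, and for detection data the bounding boxes would need to be transformed alongside the images.

    import torch
    from torchvision import transforms

    # Plausible image-level expansion pipeline for the CViG pictures.
    augment = transforms.Compose([
        transforms.RandomResizedCrop(608, scale=(0.7, 1.0)),   # random cropping
        transforms.RandomRotation(degrees=15),                 # random rotation
        transforms.RandomHorizontalFlip(p=0.5),                # horizontal mirroring
        transforms.RandomVerticalFlip(p=0.5),                  # vertical mirroring
        transforms.ToTensor(),
        transforms.Lambda(lambda t: (t + 0.02 * torch.randn_like(t)).clamp(0.0, 1.0)),  # noise addition
    ])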
The object detection module can be obtained by training the SSL-YOLO network model with the CViG dataset; its effect on identifying chemical object types is shown in fig. 7.
To achieve automated feeding, the text on the chemical label must also be recognized to confirm whether a container holds the target medicine.
The invention uses a CTPN (Connectionist Text Proposal Network) model and an improved CRNN network model for text detection and recognition. Text detection is first performed by the CTPN model, trained by deep learning to locate the text, after which the text region is passed to the text recognition network module. The text recognition module is built on a CRNN (Convolutional Recurrent Neural Network); the invention makes the CRNN lightweight by introducing a MobileNetV3 module to replace the original convolutional layer structure of CRNN, and the improved CRNN structure is shown in fig. 8.
In the improved CRNN, a MobileNetV3 network module replaces the convolution layers as the feature extraction layer, a Bi-LSTM obtains the spatial sequence features of the text in the image, and after feature extraction and model training, transcription is performed with CTC Loss, realizing text recognition for chemical labels; whether a container holds the target medicine is determined by recognizing text such as the medicine name, CAS number and volume. The text recognition effect on chemical labels is shown in fig. 9.
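A compact PyTorch sketch of such a lightweight CRNN follows; the hidden size, the choice of mobilenet_v3_small, and the pooling scheme that turns the feature map into a horizontal sequence are assumptions, not the patent's exact configuration.

    import torch
    import torch.nn as nn
    from torchvision.models import mobilenet_v3_small

    class LiteCRNN(nn.Module):
        """Sketch of the lightweight CRNN: MobileNetV3 features replace the CNN
        stage, a Bi-LSTM models the horizontal character sequence, and a linear
        head emits per-timestep logits for CTC."""
        def __init__(self, num_classes, hidden=128):
            super().__init__()
            self.backbone = mobilenet_v3_small(weights=None).features  # conv feature extractor
            self.pool = nn.AdaptiveAvgPool2d((1, None))   # collapse height; width = time steps
            self.rnn = nn.LSTM(576, hidden, num_layers=2,
                               bidirectional=True, batch_first=True)
            self.fc = nn.Linear(2 * hidden, num_classes)  # num_classes includes the CTC blank

        def forward(self, x):                      # x: (N, 3, H, W) label crop
            f = self.pool(self.backbone(x))        # (N, 576, 1, W')
            seq = f.squeeze(2).permute(0, 2, 1)    # (N, W', 576)
            out, _ = self.rnn(seq)
            return self.fc(out)                    # (N, W', num_classes)

    # Training sketch: logits.log_softmax(2).permute(1, 0, 2) feeds nn.CTCLoss.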
For identifying the material types within a chemical vessel, image segmentation is the key technique: a trained model extracts image features and segments the materials in the vessel, so that the presence and distribution of different material phases can be identified and quantified. The invention realizes identification of the material types in the chemical vessel, covering chemical container, filler, liquid phase and solid phase, by constructing an FCN-GES Net model that segments the image regions of chemical vessels and material phases and determines their spatial distribution.
The FCN-GES Net model consists of an FCN (Fully Convolutional Network) model, a first GES Net (Generator-Evaluator-Selector Net) model (GES Net 1) and a second GES Net model (GES Net 2). With reference to fig. 13, the FCN model semantically segments all chemical vessel regions in an input image; the first GES Net model divides the chemical vessel regions found by the FCN model into separate chemical vessel instance images; the second GES Net model performs instance segmentation of chemical material types on the chemical vessel instance images output by the first GES Net model, finally dividing each chemical vessel region into specific material phases, such as solid and liquid.
As shown in fig. 10, the FCN model comprises an encoder and a decoder. The encoder consists of convolution and pooling layers that extract features from the original image; the decoder consists of deconvolution or upsampling layers that restore the feature map to the size of the input image and generate the segmentation result. After the original H×W image undergoes two convolutions it shrinks to 1/4 size; a third convolution reduces it to 1/8 of the original, and this feature map is retained; a fourth convolution reduces it to 1/16, also retained; finally a fifth convolution reduces it to 1/32, after which all fully connected layers of the CNN are converted into convolution layers, changing the number of feature maps while the size remains 1/32 of the original, and the 1/32 feature map is retained. The 1/32, 1/16 and 1/8 feature maps are then upsampled, and the three prediction results of different depths are fused and restored to the size of the input image.
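A minimal PyTorch sketch of this encoder-decoder with three-level score fusion follows; the channel widths and the use of bilinear upsampling are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleFCN(nn.Module):
        """FCN-8s-style sketch: the encoder downsamples to 1/32, 1x1 'score'
        convs replace the fully connected layers, and predictions at 1/32,
        1/16 and 1/8 are upsampled and fused."""
        def __init__(self, num_classes):
            super().__init__()
            def stage(cin, cout):   # conv + ReLU + pool: halves the resolution
                return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                     nn.ReLU(inplace=True), nn.MaxPool2d(2))
            self.s1, self.s2 = stage(3, 32), stage(32, 64)                              # 1/2, 1/4
            self.s3, self.s4, self.s5 = stage(64, 128), stage(128, 256), stage(256, 512)  # 1/8, 1/16, 1/32
            self.score8 = nn.Conv2d(128, num_classes, 1)    # heads on the retained maps
            self.score16 = nn.Conv2d(256, num_classes, 1)
            self.score32 = nn.Conv2d(512, num_classes, 1)

        def forward(self, x):
            f8 = self.s3(self.s2(self.s1(x)))    # 1/8 feature map (retained)
            f16 = self.s4(f8)                    # 1/16 (retained)
            f32 = self.s5(f16)                   # 1/32 (retained)
            up = lambda t, ref: F.interpolate(t, size=ref.shape[2:], mode="bilinear",
                                              align_corners=False)
            s = up(self.score32(f32), f16) + self.score16(f16)   # fuse 1/32 into 1/16
            s = up(s, f8) + self.score8(f8)                      # fuse into 1/8
            return F.interpolate(s, size=x.shape[2:], mode="bilinear",
                                 align_corners=False)            # back to input size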
The basic principle of the GES Net model is to decompose the image segmentation task across three roles: a generator, an evaluator and a selector, each realized by a neural network module for its specific task; the three are finally integrated to give the segmentation result. The generator aims to produce image segmentation results accurate enough to pass the screening of the subsequent evaluator and selector.
As shown in fig. 11, the first GES Net model comprises three generators, each of which segments the chemical vessel regions found by the FCN model to generate a chemical vessel image segmentation result; each generator is connected to an evaluator that assesses the quality of the segmentation result the generator produces, and all evaluators are connected to a selector that picks the best segmentation result according to the evaluators' assessments. The structure of the second GES Net model is identical to that of the first.
Fig. 12 shows the structure of the generator. The generator takes N×100 random noise as input, forms N×1024 noise data (among other sizes) through a fully connected layer, and reshapes it into 128×7×7 feature channels via a Resize operation. After an upsampling layer the channel count is 128 and the feature map size 14×14; a convolution layer then changes the channel count to 64 with the size unchanged; another upsampling layer keeps the channel count and changes the size to 56×56; a final convolution layer changes the channel count to 1 with the size unchanged, generating a 1×56×56 feature image.
The generator produces image feature segments, i.e., segments corresponding to the container, material and other regions in the image. The generator's results are fed to the evaluator, whose main role is to assess the quality of the segmentation results the generator produces; it consists of convolution and pooling layers and computes the difference between each segmentation result and the ground-truth segmentation, with the aim of improving segmentation quality and accuracy. The selector consists of a fully connected layer and selects the optimal segmentation result.
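The generate-evaluate-select flow can be sketched as follows; the calling conventions of the generator and evaluator modules are assumptions, and a simple argmax stands in for the patent's learned fully connected selector.

    import torch

    def ges_segment(image, generators, evaluators):
        """Sketch of the Generator-Evaluator-Selector flow: each generator
        proposes a segmentation, its paired evaluator scores the proposal,
        and the best-scoring mask is returned."""
        proposals = [g(image) for g in generators]                        # candidate masks
        scores = torch.stack([e(image, m) for e, m in zip(evaluators, proposals)])
        best = int(scores.argmax())                                       # selector step
        return proposals[best]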
After the FCN-GES Net model is constructed, it must be trained to obtain the material type identification module in the chemical vessel.
The dataset for training the FCN-GES Net model has two sources: one is the CViG dataset, from which 542 pictures of chemical vessels holding solid or liquid materials were selected; the other is the Vector-LabPics dataset, drawn mainly from chemistry websites and biochemical laboratories and containing image elements such as containers, liquids and solids, 2187 pictures in total. Together the two parts amount to 2729 pictures, forming the training dataset for the FCN-GES Net model, which was annotated with Labelme into JSON files. The image segmentation effect for different material types in different chemical vessels is shown in fig. 14.
Example 2
This embodiment provides a microfluidic automatic feeding method for spirofluorene xanthene, shown in fig. 15, which includes:
weighing 9-fluorenone, phenol, methanesulfonic acid and 1,2-dichlorobenzene in the reaction proportions, placing the weighed 9-fluorenone, phenol and methanesulfonic acid in matched containers, dividing the 1,2-dichlorobenzene into two equal-volume portions placed in two matched containers, attaching a label to each container, and placing the containers in the medicine area;
according to a pre-designed automatic feeding program, the control module controls the mechanical arm, the first high-definition camera, the second high-definition camera and the vision auxiliary system module to execute the following operations:
acquiring an image of the chemical vessel area through the first high-definition camera on the mechanical arm; detecting the acquired image of the chemical vessel area through the target detection module to identify target reaction bottles A and B; and controlling the mechanical arm to grab reaction bottle A and reaction bottle B in turn and place each in the reaction area;
acquiring an image of the medicine area through the first high-definition camera on the mechanical arm; detecting the acquired image of the medicine area through the target detection module to identify a target medicine; controlling the mechanical arm to move close to the identified target medicine and acquiring a medicine label image of the target medicine through the first high-definition camera; performing text recognition on the acquired medicine label image through the chemical label identification module to confirm whether it is the required target medicine; and, following the set confirmation flow for each target medicine, controlling the mechanical arm to sequentially grasp the target medicines 9-fluorenone, phenol and 1,2-dichlorobenzene and put them into reaction bottle A, and to sequentially grasp the target medicines methanesulfonic acid and 1,2-dichlorobenzene and put them into reaction bottle B;
In the feeding process, acquiring images of reaction bottle A and reaction bottle B in real time with the second high-definition camera; recognizing the acquired images through the material type identification module in the chemical vessel, monitoring the material changes in the reaction bottles, and determining whether feeding is complete.
Wherein monitoring the material changes in the reaction bottles to determine whether feeding is complete includes:
determining the distribution of the various material types in the reaction bottle from the image segmentation results of the material types, and monitoring whether the reaction bottle is being filled with chemicals;
monitoring whether the filling amount of material in the reaction bottle exceeds a preset safety detection line, and issuing an alarm if it does;
and monitoring whether the filling amount of a given batch of medicine exceeds its preset feeding completion line, and if so, judging that feeding of that batch is complete.
Specific embodiments are given below:
automatic feeding for a single run of preparing spirofluorene xanthene from 9-fluorenone and phenol as raw materials:
the SFX synthesis reaction: 9-fluorenone condenses with 2 equivalents of phenol under methanesulfonic acid catalysis to give spiro[fluorene-9,9'-xanthene] (SFX) and water:
9-fluorenone + 2 phenol → SFX + 2 H2O
in this embodiment, the material ratios, concentrations and other aspects of the materials required for synthesizing spirofluorene xanthene in the microreactor are standardized, and the materials for each robot experiment are quantified, which facilitates handling and workflow use by the robot module. The reagents required for the synthesis are 9-fluorenone, phenol, methanesulfonic acid and 1,2-dichlorobenzene.
The experiment initially sets the feeding molar ratio of 9-fluorenone, phenol and methanesulfonic acid to 1:6:4, the concentration of 9-fluorenone to 1 mol/L, and the amount of 1,2-dichlorobenzene solvent to 100 mL; at this molar ratio the final weights are 18.0 g of 9-fluorenone, 56.4 g of phenol and 38.2 g of methanesulfonic acid. To meet the requirements of an automated process, and given the different states of the materials at room temperature (9-fluorenone is a yellow particulate solid, while phenol, 1,2-dichlorobenzene and methanesulfonic acid are handled as liquids), the materials are pretreated first: the 9-fluorenone, phenol and methanesulfonic acid are each placed in quantitative sample bottles, and the 1,2-dichlorobenzene solvent is divided into two equal portions, each placed in a quantitative sample bottle. All medicines are placed in the medicine area in fixed positions, and text labels are attached to all chemicals. In addition, the required reaction vessels, including Erlenmeyer flasks, reaction bottles and beakers, are placed in the chemical vessel area.
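As a sanity check, the stated masses can be reproduced from the molar ratio; the molar masses below are standard literature values, not taken from the patent.

    # Charged masses implied by the 1:6:4 molar ratio.
    M = {"9-fluorenone": 180.20, "phenol": 94.11, "methanesulfonic acid": 96.11}  # g/mol
    n_fluorenone = 1.0 * 0.100   # 1 mol/L in 100 mL of solvent -> 0.1 mol
    moles = {"9-fluorenone": n_fluorenone,
             "phenol": 6 * n_fluorenone,
             "methanesulfonic acid": 4 * n_fluorenone}
    for name, n in moles.items():
        print(f"{name}: {n * M[name]:.1f} g")
    # -> 18.0 g, 56.5 g and 38.4 g, consistent with the ~18.0/56.4/38.2 g charged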
The robot module is programmed according to the designed experimental flow to complete automatic feeding. As shown in fig. 15, an image of the chemical vessel area is first acquired in real time by the first high-definition camera at the end of the mechanical arm; the acquired image is detected by the SSL-YOLO network model to identify the two targets, reaction bottle A and reaction bottle B; and after confirmation, the mechanical arm is controlled to grasp reaction bottle A and reaction bottle B in turn and place each at a position defined as the chemical reaction area.
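For illustration, the detection step can be sketched as follows. This is a minimal stand-in, not the disclosed system: the SSL-YOLO weights are not published, so the stock YOLOv5s model loaded through torch.hub is used in its place, and the class names come from whatever model is loaded rather than the 7-class chemical dataset described in this disclosure.

```python
import torch

# Minimal detection sketch. Stock YOLOv5s stands in for the custom
# SSL-YOLO model, whose weights are not published with this disclosure.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

def detect_objects(frame):
    """Return a list of (label, confidence, (x1, y1, x2, y2)) for one image."""
    results = model(frame)  # frame may be a file path, URL or ndarray
    detections = []
    for *xyxy, conf, cls in results.xyxy[0].tolist():
        detections.append((model.names[int(cls)], conf, tuple(xyxy)))
    return detections

# e.g. detect_objects('vessel_area.jpg') on the first camera's frame,
# then keep the boxes whose labels match reaction bottle A and bottle B.
```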
Then, images of the medicine area are acquired in real time by the first high-definition camera at the end of the mechanical arm, and target medicines are identified continuously by the SSL-YOLO network model. After a target medicine is detected on the platform, the mechanical arm approaches it, and the first high-definition camera on the arm, positioned facing the medicine label, acquires a medicine label image. The chemical label identification module performs text recognition on the acquired label image to confirm that it is the required target medicine; after confirmation, the mechanical arm is controlled to grasp the target medicine and place it into the target reaction bottle. The program is set according to the confirmation flow for each target medicine, and the quantitative 9-fluorenone, phenol, methanesulfonic acid and 1,2-dichlorobenzene are placed into the target reaction bottles in order: 9-fluorenone, phenol and 1,2-dichlorobenzene (50 mL) are added in sequence to reaction bottle A, and methanesulfonic acid and 1,2-dichlorobenzene (50 mL) are added in sequence to reaction bottle B.
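The confirm-then-dispense loop just described can be sketched as follows. Every interface here (the arm object's methods, detect, read_label) is a hypothetical placeholder for the robot SDK, the SSL-YOLO detector and the CTPN+CRNN label reader; only the control flow mirrors the description above.

```python
# Hypothetical orchestration sketch of the confirm-then-dispense loop.
FEED_PLAN = [
    ("reaction_bottle_A", ["9-fluorenone", "phenol", "1,2-dichlorobenzene"]),
    ("reaction_bottle_B", ["methanesulfonic acid", "1,2-dichlorobenzene"]),
]

def feed(arm, detect, read_label):
    for bottle, targets in FEED_PLAN:
        for target in targets:
            while True:
                box = detect("medicine_area")     # candidate medicine bottle
                arm.approach(box)                 # bring camera to the label
                text = read_label(arm.capture())  # CTPN + CRNN text reading
                if target in text:                # label confirms the target
                    break
            arm.grasp(box)
            arm.pour_into(bottle)                 # add the medicine
            arm.return_to_origin(box)             # put the bottle back
```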
Throughout the feeding process, the second high-definition camera, fixed facing reaction bottle A and reaction bottle B, acquires their images in real time, and the material changes inside reaction bottle A and reaction bottle B are monitored through the FCN-GES Net model to determine whether feeding is completed successfully and safely.
Monitoring the material changes inside reaction bottle A and reaction bottle B and determining whether feeding is completed successfully and safely specifically includes:
determining the distribution of each type of material (solid and liquid) in the reaction bottle by image segmentation of the material types, and monitoring whether charging of the chemicals and reagents is complete;
setting a safety detection line for the filling level of material in the reaction bottle during feeding, generally at the position where the material amount reaches 2/3 of the total capacity of the chemical reaction bottle, and issuing an alarm when the segmented image of the charged reagent exceeds the safety detection line;
and setting a feeding completion line according to the total amount of a batch of medicine, and judging that feeding of the batch is complete when the segmented image of the charged reagent reaches the preset feeding completion line, whereby feeding is accomplished successfully.
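A minimal monitoring sketch follows, assuming the segmentation stage yields 2-D boolean masks for the vessel region and for the material inside it; the mask names and the row-based level estimate are assumptions for illustration, not the disclosed implementation.

```python
import numpy as np

SAFETY_LINE = 2 / 3   # safety detection line at 2/3 of vessel capacity

def fill_level(material_mask: np.ndarray, vessel_mask: np.ndarray) -> float:
    """Estimate the fill level (0 = empty, 1 = full) from 2-D boolean masks."""
    vessel_rows = np.where(vessel_mask.any(axis=1))[0]     # rows of the vessel
    material_rows = np.where(material_mask.any(axis=1))[0] # rows with material
    if material_rows.size == 0:
        return 0.0
    top, bottom = vessel_rows.min(), vessel_rows.max()
    return (bottom - material_rows.min()) / max(bottom - top, 1)

def check_feed(material_mask, vessel_mask, completion_line):
    level = fill_level(material_mask, vessel_mask)
    if level > SAFETY_LINE:
        return level, "ALARM: safety detection line exceeded"
    if level >= completion_line:
        return level, "feeding of this batch is complete"
    return level, "feeding in progress"
```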
The image segmentation results for the dynamic change of materials in the chemical vessel during feeding are shown in fig. 16.
In addition, during the feeding process, the mechanical arm is controlled to return each medicine to its original position after each addition.
The present invention has been disclosed in terms of preferred embodiments, but it is not limited thereto; technical solutions obtained by equivalent substitution or equivalent transformation also fall within the protection scope of the present invention.

Claims (10)

1. A microfluidic automatic feeding device for spirofluorene xanthene, characterized by comprising: a micro-reactor platform, a robot module, a vision auxiliary system module and a control module, wherein
the micro-reactor platform comprises a medicine area, a chemical vessel area and a reaction area, the reaction area being used for placing reaction bottles for carrying out chemical reactions, and a second high-definition camera is arranged at a fixed position relative to the reaction area and is used for collecting images of the reaction bottles in the reaction area;
the robot module is arranged on the micro-reactor platform and comprises a mechanical arm, and a first high-definition camera is arranged at the tail end of the mechanical arm and used for collecting images of a chemical vessel area, images of a medicine area and medicine label images of target medicines in the medicine area;
the vision auxiliary system module comprises a target detection module, a chemical tag identification module and a chemical vessel material type identification module, wherein the target detection module is used for detecting an image of a chemical vessel area acquired by a first high-definition camera, identifying a target reaction bottle, detecting an image of a medicine area acquired by the first high-definition camera and identifying a target medicine; the chemical label identification module is used for identifying text information of a medicine label image of the target medicine acquired by the first high-definition camera and confirming whether the medicine label image is the required target medicine; the material type identification module in the chemical vessel is used for identifying the image of the reaction bottle in the reaction zone acquired by the second high-definition camera, monitoring the material change in the reaction bottle and determining whether the material feeding is finished;
The control module is used for controlling the mechanical arm, the first high-definition camera, the second high-definition camera and the vision auxiliary system module to execute preset actions according to a preset automatic feeding program so as to complete automatic feeding.
2. The microfluidic automatic feeding device for spirofluorene xanthene according to claim 1, wherein the target detection module adopts an SSL-YOLO network model, the SSL-YOLO network model comprises a Backbone network, an SSL-Neck network and a Head network, the SSL-YOLO network model is obtained by replacing the original Neck layer of YOLOv5s with the SSL-Neck network, and the SSL-Neck network is connected through four channels to four detection heads of different scales in the Head network, specifically:
two groups of feature channels are fused in the first channel: in one group, the feature map output by the fourth C3 module in the Backbone network is fed to the first SDConv module in the SSL-Neck network, and the first SDConv module is connected in sequence with the first upsampling module, the first Concat module, the first SDCSP module, the second SDConv module, the second upsampling module, the second Concat module and the second SDCSP module; in the other group, the output of the second C3 module in the Backbone network is fed into the second SDCSP module through the second Concat module; and the two groups of feature channels are connected in sequence through the third SDConv module, the third Concat module and the third SDCSP module to Head-1 in the Head network;
two groups of feature channels are fused in the second channel: one group directly feeds the output of the third SDCSP module in the first channel to the fourth SDConv module; the other group feeds the output of the third C3 module of the Backbone network to the first Concat module in the first channel, which is connected in sequence with the first SDCSP module and the second SDConv module; and the two groups of feature channels are connected in sequence through the fourth Concat module and the fourth SDCSP module to Head-2 in the Head network;
two groups of feature channels are fused in the third channel: one group directly feeds the output of the fourth SDCSP module in the second channel to the fifth SDConv module; the other group feeds the output of the fourth C3 module of the Backbone network to the first SDConv module in the first channel; and the two groups of feature channels are connected in sequence through the fifth Concat module and the fifth SDCSP module to Head-3 in the Head network;
two groups of feature channels are fused in the fourth channel: in one group, the output of the third SDConv module of the first channel is further connected to the third upsampling module, which is connected through the sixth Concat module to the sixth SDCSP module; in the other group, the output of the first C3 module of the Backbone network is fed directly into the sixth SDCSP module through the sixth Concat module; and the sixth SDCSP module is connected to Head-4 in the Head network.
3. The microfluidic automatic feeding device for spirofluorene xanthene according to claim 2, wherein each SDCSP module operates as follows: the input feature map is divided into two parts; one part is input to the first convolution layer for a convolution operation, and the output of the first convolution layer is input in sequence to the sixth SDConv module and the seventh SDConv module to extract bottom-layer and middle-layer features respectively, while the other part is input to the third convolution layer for a convolution operation and then passed to the first Concat layer; the output of the first convolution layer, after a convolution operation by the fourth convolution layer, is added to the feature map output by the seventh SDConv module and input to the first Concat layer for feature fusion with the feature map supplied by the third convolution layer; and the fused result is output as the result feature map after a convolution operation by the second convolution layer;
each SDConv module comprises a channel shuffling layer, a fifth convolution layer, a DSConv layer and a second Concat layer; the channel shuffling layer groups the input feature maps and rearranges the channels among the groups before inputting them to the fifth convolution layer for a convolution operation, the number of channels and the size of each output group of feature maps being unchanged; each group of feature maps output by the fifth convolution layer is input to the DSConv layer, which performs a depthwise convolution on each channel of each group with a 3×3 kernel and then a pointwise convolution on the result with a 1×1 kernel to map the channel count of each group to the required number of output channels; and the second Concat layer splices the input feature maps with the output feature maps of the DSConv layer (an illustrative sketch of this module is given after the claims).
4. The microfluidic automatic feeding device for spirofluorene xanthene according to claim 2, wherein the target detection module is obtained by training the SSL-YOLO network model with a pre-constructed chemical article identification dataset, the dataset being constructed as follows:
collecting images of environmental samples, various types of chemical vessels, chemicals and chemical labels in a chemical laboratory, together with images of chemical vessels from web pages;
classifying the collected image data into 7 categories, namely conical flask, beaker, flask, chemical label, sample bottle, medicine and other containers, and annotating each image;
and performing data augmentation on the annotated image data to obtain the chemical article identification dataset.
5. The microfluidic automatic feeding device for spirofluorene xanthene according to claim 1, wherein the chemical label identification module is composed of a CTPN model and an improved CRNN network model, the improved CRNN network model being obtained by replacing the original convolution layer structure of CRNN with a MobileNetV3 module.
6. The microfluidic automatic feeding device for spirofluorene xanthene according to claim 1, wherein the material type identification module in the chemical vessel adopts an FCN-GES Net model composed of an FCN model, a first GES Net model and a second GES Net model; the FCN model semantically segments all chemical vessel regions in the input image; the first GES Net model divides the chemical vessel regions found by the FCN model into separate chemical vessel instance images; and the second GES Net model performs instance segmentation of the chemical material types on the chemical vessel instance images output by the first GES Net model to obtain the specific material phases.
7. The microfluidic automatic feeding device for spirofluorene xanthene according to claim 6, wherein the first GES Net model comprises a plurality of generators, each generator segmenting a chemical vessel region found by the FCN model to generate a chemical vessel image segmentation result; each generator is connected to an evaluator that assesses the quality of the segmentation result generated by its generator; and all evaluators are connected to a selector that selects the optimal segmentation result according to the evaluators' assessments.
8. The microfluidic automatic feeding device for spirofluorene xanthene according to claim 6, wherein the material type identification module in the chemical vessel is obtained by training the FCN-GES Net model with a pre-constructed dataset comprising chemical vessel images with solid and liquid materials selected from the chemical article identification dataset, together with the Vector-LabPics dataset, which contains image data from chemistry websites and biochemical laboratories, the images including containers and the material types inside them.
9. A microfluidic automatic feeding method for spirofluorene xanthene, characterized in that it is implemented with the microfluidic automatic feeding device according to any one of claims 1-8, the method comprising:
weighing 9-fluorenone, phenol, methanesulfonic acid and 1,2-dichlorobenzene according to the reaction proportions, placing the weighed 9-fluorenone, phenol and methanesulfonic acid respectively into matched containers, dividing the 1,2-dichlorobenzene into two portions of equal volume and placing them respectively into two matched containers, attaching a label to each container, and placing the containers in the medicine area;
acquiring an image of the chemical vessel area through the first high-definition camera on the mechanical arm; detecting the acquired chemical vessel area image through the target detection module to identify the target reaction bottle A and reaction bottle B; and controlling the mechanical arm to sequentially grasp reaction bottle A and reaction bottle B and place them respectively in the reaction area;
acquiring an image of the medicine area through the first high-definition camera on the mechanical arm; detecting the acquired medicine area image through the target detection module to identify a target medicine; controlling the mechanical arm to approach the identified target medicine and acquiring a medicine label image of the target medicine through the first high-definition camera; performing text recognition on the acquired medicine label image through the chemical label identification module to confirm that it is the required target medicine; and, according to the set confirmation flow for each target medicine, controlling the mechanical arm to sequentially grasp the target medicines 9-fluorenone, phenol and 1,2-dichlorobenzene and place them into reaction bottle A, and to sequentially grasp the target medicines methanesulfonic acid and 1,2-dichlorobenzene and place them into reaction bottle B;
during the feeding process, acquiring images of reaction bottle A and reaction bottle B in real time with the second high-definition camera; identifying the acquired images through the material type identification module in the chemical vessel, monitoring the material change in the reaction bottles, and determining whether feeding is complete.
10. The microfluidic automatic feeding method for spirofluorene xanthene according to claim 9, wherein monitoring the material change in the reaction bottle and determining whether feeding is complete comprises:
determining the distribution of each type of material in the reaction bottle from the image segmentation result of the material types, and monitoring whether the chemicals have been charged into the reaction bottle;
monitoring whether the filling level of material in the reaction bottle exceeds a preset safety detection line, and issuing an alarm if it does;
and monitoring whether the filling level of a given batch of medicine exceeds a preset feeding completion line, and judging that feeding of that batch is complete if it does.
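For illustration, the SDConv module recited in claim 3 admits a minimal PyTorch sketch along the following lines. The 3×3 depthwise and 1×1 pointwise kernels follow the claim wording; the group count and the channel split at the final concatenation are assumptions, since no reference code is published with this disclosure.

```python
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Rearrange channels across groups (the claim's channel shuffling layer)."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class SDConv(nn.Module):
    """Sketch of SDConv: shuffle -> grouped conv -> depthwise + pointwise -> concat."""
    def __init__(self, c_in: int, c_out: int, groups: int = 2):
        super().__init__()
        assert c_out > c_in, "assumes the concat adds c_out - c_in new channels"
        self.groups = groups
        self.group_conv = nn.Conv2d(c_in, c_in, 1, groups=groups)          # channels/size kept
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)  # 3x3 per channel
        self.pointwise = nn.Conv2d(c_in, c_out - c_in, 1)                  # 1x1 channel mapping

    def forward(self, x):
        y = self.group_conv(channel_shuffle(x, self.groups))
        y = self.pointwise(self.depthwise(y))
        return torch.cat([x, y], dim=1)  # splice the input with the DSConv output

# e.g. SDConv(64, 128)(torch.randn(1, 64, 40, 40)).shape == (1, 128, 40, 40)
```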
CN202310815250.XA 2023-07-05 2023-07-05 Microfluidic automatic feeding device and method for spirofluorene xanthene Active CN116532046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310815250.XA CN116532046B (en) 2023-07-05 2023-07-05 Microfluidic automatic feeding device and method for spirofluorene xanthene

Publications (2)

Publication Number Publication Date
CN116532046A (en) 2023-08-04
CN116532046B (en) 2023-10-10

Family

ID=87452809




Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205698602U (en) * 2016-04-29 2016-11-23 中华人民共和国黄岛出入境检验检疫局 Automatic medicine adding apparatus
KR20180105417A (en) * 2017-03-15 2018-09-28 주식회사 엔티로봇 Medicine compounding apparatus recognizing material information by RFID and medicine compounding system comprising the same
CN112784641A (en) * 2019-11-08 2021-05-11 张玮 Food material feeding method and device and cooking machine
CN112101711A (en) * 2020-06-17 2020-12-18 浙江中控技术股份有限公司 Flow control method for preventing manual feeding from being mistaken in intermittent production
CN213966441U (en) * 2020-11-11 2021-08-17 淮阴工学院 Unmanned liquid proportioning laboratory
CN113591866A (en) * 2021-07-29 2021-11-02 云南大学 Special job certificate detection method and system based on DB and CRNN
US20230134651A1 (en) * 2021-10-28 2023-05-04 Akporefe Agbamu Synchronized Identity, Document, and Transaction Management
CN114565753A (en) * 2022-02-22 2022-05-31 电子科技大学长三角研究院(湖州) Unmanned aerial vehicle small target identification method based on improved YOLOv4 network
CN115646857A (en) * 2022-09-07 2023-01-31 苏州华兴源创科技股份有限公司 Automatic blanking system
CN116051953A (en) * 2022-11-23 2023-05-02 中国铁塔股份有限公司重庆市分公司 Small target detection method based on selectable convolution kernel network and weighted bidirectional feature pyramid

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZUO, ZONGYAN, et al.: "Spiro-substitution effect of terfluorenes on amplified spontaneous emission and lasing behaviors", Journal of Materials Chemistry C, vol. 6, no. 16, pages 4501-4507 *
ZHANG, WEI, et al.: "DS-YOLO: a real-time small-object detection algorithm deployed on UAV terminals", Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition), vol. 41, no. 1, pages 86-98 *
WANG, HANPU, et al.: "Research on image semantic segmentation algorithm based on FCN", Journal of Chengdu Technological University, vol. 25, no. 1, pages 36-41 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117583345A (en) * 2024-01-19 2024-02-23 上海奔曜科技有限公司 Vessel cleaning device, vessel cleaning module, liquid distribution platform and liquid distribution system
CN117583345B (en) * 2024-01-19 2024-04-26 上海奔曜科技有限公司 Vessel cleaning device, vessel cleaning module, liquid distribution platform and liquid distribution system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant