CN112364687A - Improved Faster R-CNN gas station electrostatic sign identification method and system - Google Patents

Improved Faster R-CNN gas station electrostatic sign identification method and system

Info

Publication number
CN112364687A
CN112364687A (application CN202011052340.0A)
Authority
CN
China
Prior art keywords
electrostatic
cnn
faster
target
neural network
Prior art date
Legal status
Pending
Application number
CN202011052340.0A
Other languages
Chinese (zh)
Inventor
陈志军
关超华
周斯加
Current Assignee
Shangshan Zhicheng Suzhou Information Technology Co ltd
Original Assignee
Shangshan Zhicheng Suzhou Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shangshan Zhicheng Suzhou Information Technology Co., Ltd.
Priority to CN202011052340.0A
Publication of CN112364687A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Abstract

The invention provides an improved Faster R-CNN based method for identifying electrostatic signs at a gas station. The method comprises: collecting, at an oil unloading operation site, a vehicle picture containing a vehicle body and an electrostatic sign arranged on the vehicle body, and preprocessing the vehicle picture into a sample data set; constructing a Faster R-CNN neural network model, and, when the sample data set is imported into the Faster R-CNN neural network model for end-to-end joint training of the RPN network, extracting a feature map through the ResNet structure in the Faster R-CNN neural network to obtain the target region and target category of the electrostatic sign, and further integrating the target region and target category of the electrostatic sign into a preset Fast R-CNN neural network model to obtain a detection model of the electrostatic sign; and acquiring a real-time vehicle picture of the oil unloading operation site, identifying the real-time vehicle picture with the detection model of the electrostatic sign, and determining the category and position of the electrostatic sign on the real-time vehicle picture. The invention can realize real-time detection, with both a fast detection speed and a high detection accuracy.

Description

Improved Faster R-CNN gas station electrostatic sign identification method and system
Technical Field
The invention relates to the technical field of computer vision recognition, in particular to a method and a system for recognizing electrostatic marks of a gas station based on improved Faster R-CNN.
Background
Computer identification of an electrostatic sign means that, for a given picture, a certain method and strategy are adopted to determine whether the picture contains an electrostatic sign, and if so, the position and size of the sign in the picture are returned. Recognition of the electrostatic sign by a computer is essentially pattern recognition, which refers to the process of analyzing and processing information about different forms of things or phenomena in order to describe, distinguish and classify them.
At present, computer identification of electrostatic signs adopts the Faster R-CNN object detection algorithm. Faster R-CNN was developed on the basis of Fast R-CNN, and its detection speed and accuracy are greatly improved, but certain shortcomings remain, mainly the following: Faster R-CNN cannot perform real-time detection, and its detection speed and precision still need to be improved.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide an improved Faster R-CNN based method and system for identifying electrostatic signs at a gas station, which can realize real-time detection and have not only a fast detection speed but also a high detection accuracy.
In order to solve the above technical problem, an embodiment of the present invention provides an improved Faster R-CNN based electrostatic sign identification method for a gas station, where the method includes the following steps:
s1, collecting a vehicle picture containing a vehicle body and a static mark on the vehicle body in an oil unloading operation site, and preprocessing the vehicle picture into a sample data set;
s2, constructing a Faster R-CNN neural network model, extracting a characteristic diagram by combining a ResNet structure in the FasterR-CNN neural network when the sample data set is imported into the FasterR-CNN neural network model to carry out end-to-end joint training on an RPN, obtaining a target region and a target type of the electrostatic marker, and further integrating the obtained target region and the target type of the electrostatic marker into a preset Fast R-CNN neural network model to obtain a detection model of the electrostatic marker;
and S3, acquiring a real-time vehicle picture of the oil unloading operation site, importing the acquired real-time vehicle picture into the detection model of the electrostatic sign for identification, and determining the type and the position of the electrostatic sign on the acquired real-time vehicle picture.
Wherein the electrostatic mark is triangular in shape.
In step S2, when the sample data set is imported into the Faster R-CNN neural network model to perform end-to-end joint training on the RPN network, the step of extracting a feature map through the ResNet structure in the Faster R-CNN neural network to obtain the target region and target category of the electrostatic sign specifically includes:
in the training of the upper half branch of the RPN network, the feature map extracted by the ResNet structure is divided into a plurality of anchor boxes (anchors), and the target region proposals whose labels correspond to the electrostatic sign are selected from these anchors;
in the training of the lower half branch of the RPN network, the positions of the target region proposals from the upper half branch are obtained, giving the target category of the electrostatic sign.
The step of acquiring the positions of the target region proposals specifically comprises:
selecting the N target region proposals with the highest probability, and performing non-maximum suppression on the selected N target region proposals to obtain the M target region proposals with the highest probability.
When the RPN network is trained, the loss function generated by joint training of the two branches is as follows:
L({pi}, {ti}) = (1/Ncls) Σi Lcls(pi, pi*) + λ (1/Nreg) Σi pi* Lreg(ti, ti*)
Lcls(pi, pi*) = -log[pi* pi + (1 - pi*)(1 - pi)]
Lreg(ti, ti*) = R(ti - ti*)
wherein i represents the serial number of an anchor in each mini-batch; pi represents the predicted target probability of anchor i; pi* represents the label of anchor i, taking the value 1 for a positive anchor and 0 for a negative anchor; ti represents the 4 parameters of the prediction box; ti* represents the parameters of the calibration box; Lcls represents the classification loss function; Lreg represents the regression loss function; the term pi* Lreg means that regression is performed only on positive samples, since pi* = 0 for negative samples; Ncls and Nreg are normalization terms and λ is a balancing weight; the cls branch and the reg branch output pi and ti, respectively.
The embodiment of the invention also provides an improved Faster R-CNN based electrostatic sign identification system for a gas station, which comprises:
the data acquisition unit is used for acquiring a vehicle picture comprising a vehicle body and a static mark arranged on the vehicle body on an oil unloading operation site, and preprocessing the vehicle picture into a sample data set;
the detection model construction unit is used for constructing a Faster R-CNN neural network model, extracting a feature map through the ResNet structure in the Faster R-CNN neural network when the sample data set is imported into the Faster R-CNN neural network model for end-to-end joint training of the RPN network, obtaining the target region and target category of the electrostatic sign, and further integrating the obtained target region and target category of the electrostatic sign into a preset Fast R-CNN neural network model to obtain a detection model of the electrostatic sign;
and the detection result output unit is used for acquiring a real-time vehicle picture of an oil unloading operation site, importing the acquired real-time vehicle picture into the detection model of the electrostatic sign for identification, and determining the type and the position of the electrostatic sign on the acquired real-time vehicle picture.
Wherein the electrostatic mark is triangular in shape.
The embodiment of the invention has the following beneficial effects:
compared with the prior art, the invention modifies Faster R-CNN while retaining its advantages: the RPN and the detection network share the convolutional layers, and the target region and target category are integrated in one neural network model for prediction, so that high-speed, real-time detection is achieved, the detection performance can be optimized end to end, and the electrostatic sign can be identified efficiently.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is within the scope of the present invention for those skilled in the art to obtain other drawings based on the drawings without inventive exercise.
FIG. 1 is a flow chart of an improved static tag identification method for a Faster R-CNN gas station according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an improved static tag identification system for Faster R-CNN gas stations according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, in an embodiment of the present invention, an improved Faster R-CNN based method for identifying electrostatic signs at a gas station is provided, where the method includes the following steps:
s1, collecting a vehicle picture containing a vehicle body and a static mark on the vehicle body in an oil unloading operation site, and preprocessing the vehicle picture into a sample data set;
the specific process is that the camera is used for shooting the electrostatic mark at multiple angles, distances and heights to obtain a video material containing the electrostatic mark; if the shooting is carried out by a mobile camera, the shooting and video recording are carried out on the electrostatic connection clamp around a circle by taking 5 m, 8 m, 10 m, 15 m and 20 m as radiuses. In the shooting process, the height of the camera is 2-4 meters. And secondly, decomposing and processing the video material containing the vehicle body and the static marks contained in the vehicle body into a sample data set by writing a PYTHON script and calling OPENCV. The electrostatic sign cannot use a reflective material, and a triangular shape with stability is used in the shape.
It should be noted that the sample data set is a data set in the VOC2007 format produced with the labelImg software, and a training data set, a test data set and a validation data set are generated by a PYTHON script.
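A possible form of such a split script, assuming the VOC2007 directory layout produced by labelImg (the split ratios and directory names are illustrative assumptions):

```python
import os
import random

def split_voc_dataset(annotation_dir, imagesets_dir, train_ratio=0.7, val_ratio=0.15, seed=0):
    """Write train/val/test image-id lists in the VOC2007 ImageSets/Main style."""
    ids = [os.path.splitext(f)[0] for f in os.listdir(annotation_dir) if f.endswith(".xml")]
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * train_ratio)
    n_val = int(len(ids) * val_ratio)
    splits = {
        "train": ids[:n_train],
        "val": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
    }
    os.makedirs(imagesets_dir, exist_ok=True)
    for name, subset in splits.items():
        with open(os.path.join(imagesets_dir, name + ".txt"), "w") as f:
            f.write("\n".join(subset))

# split_voc_dataset("dataset/Annotations", "dataset/ImageSets/Main")
```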
S2, constructing a Faster R-CNN neural network model, extracting a feature map through the ResNet structure in the Faster R-CNN neural network when the sample data set is imported into the Faster R-CNN neural network model for end-to-end joint training of the RPN network, obtaining the target region and target category of the electrostatic sign, and further integrating the obtained target region and target category of the electrostatic sign into a preset Fast R-CNN neural network model to obtain a detection model of the electrostatic sign;
the specific process is that firstly, a Faster R-CNN neural network model is constructed, when the sample data set of the step S1 is imported into the fast R-CNN neural network model to carry out end-to-end joint training on the RPN, a feature map is extracted through a ResNet structure in the fast R-CNN neural network model, the structure makes a reference on the input of each layer to form a residual function, the residual function is easier to optimize, the number of network layers can be greatly deepened, and the depth of the network which can be converged is also greatly improved.
Meanwhile, the training of the RPN network is mainly divided into two parts. In the training of the upper half branch of the RPN network, the feature map extracted by the ResNet structure is divided into a plurality of anchor boxes (3 aspect ratios multiplied by 3 scales, i.e. 9 anchors per feature-map location), and the target region proposals whose labels correspond to the electrostatic sign are selected from these anchors as the target regions of the electrostatic sign; in the training of the lower half branch of the RPN network, the positions of the target region proposals from the upper half branch are obtained, giving the target category of the electrostatic sign.
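A sketch of the 3-aspect-ratio by 3-scale anchor generation at a single feature-map location (the base size, scales and ratios are illustrative assumptions):

```python
import numpy as np

def generate_anchors(base_size=16, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """Return 9 anchor boxes (x1, y1, x2, y2) centered on one feature-map cell."""
    anchors = []
    for scale in scales:
        for ratio in ratios:
            area = (base_size * scale) ** 2
            w = np.sqrt(area / ratio)   # ratio is interpreted as height / width
            h = w * ratio
            anchors.append([-w / 2, -h / 2, w / 2, h / 2])
    return np.array(anchors)

# 9 anchors per location; in the RPN they are shifted by the feature stride over the image
print(generate_anchors().shape)  # (9, 4)
```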
The positions of the target region proposals are obtained as follows: the N target region proposals with the highest probability are selected, and non-maximum suppression is performed on the selected N proposals to obtain the M target region proposals with the highest probability.
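A minimal non-maximum-suppression sketch for this top-N to top-M filtering step (the IoU threshold and box coordinate convention are illustrative assumptions):

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.7, top_m=300):
    """Keep at most top_m boxes, suppressing boxes that overlap a higher-scoring box."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0 and len(keep) < top_m:
        i = order[0]
        keep.append(i)
        # Intersection of the current box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_rest = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_rest - inter)
        order = order[1:][iou <= iou_threshold]
    return keep
```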
When the RPN network is trained, the loss function generated by joint training of the two branches is as follows:
L({pi}, {ti}) = (1/Ncls) Σi Lcls(pi, pi*) + λ (1/Nreg) Σi pi* Lreg(ti, ti*)
Lcls(pi, pi*) = -log[pi* pi + (1 - pi*)(1 - pi)]
Lreg(ti, ti*) = R(ti - ti*)
wherein i represents the serial number of an anchor in each mini-batch; pi represents the predicted target probability of anchor i; pi* represents the label of anchor i, taking the value 1 for a positive anchor and 0 for a negative anchor; ti represents the 4 parameters of the prediction box; ti* represents the parameters of the calibration box; Lcls represents the classification loss function; Lreg represents the regression loss function; the term pi* Lreg means that regression is performed only on positive samples, since pi* = 0 for negative samples; Ncls and Nreg are normalization terms and λ is a balancing weight; the cls branch and the reg branch output pi and ti, respectively.
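A sketch of this joint loss in PyTorch; the use of smooth L1 for R and the normalization of the regression term by the number of positive anchors are assumptions consistent with the original Faster R-CNN formulation, not details stated in the patent:

```python
import torch
import torch.nn.functional as F

def rpn_loss(p, p_star, t, t_star, lam=10.0):
    """Joint RPN loss for one mini-batch of sampled anchors.

    p:      (N,)  predicted object probabilities (output of the cls branch)
    p_star: (N,)  labels, 1 for positive anchors, 0 for negative anchors
    t:      (N,4) predicted box parameters (output of the reg branch)
    t_star: (N,4) calibration (ground-truth) box parameters
    """
    p_star = p_star.float()
    n_cls = p.numel()
    # Simplification: normalize the regression term by the number of positive anchors
    n_reg = p_star.sum().clamp(min=1.0)
    l_cls = F.binary_cross_entropy(p, p_star, reduction="sum") / n_cls
    # pi* * Lreg: regression only contributes for positive anchors
    l_reg = (p_star.unsqueeze(1) * F.smooth_l1_loss(t, t_star, reduction="none")).sum() / n_reg
    return l_cls + lam * l_reg
```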
Secondly, the target region and the target category are integrated into the preset Fast R-CNN neural network model to form the detection model of the electrostatic sign. The detection model of the electrostatic sign can directly extract bounding boxes and class probabilities from the image as a single problem, converting detection into directly obtaining the specific category and position of the target, so the detection speed is very high and the purpose of real-time detection is achieved.
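As an illustration of the kind of detection head such an integration implies, a Fast R-CNN style head is sketched below with PyTorch and torchvision's RoI pooling (the layer sizes, pooling size and feature stride are illustrative assumptions; only the overall structure, shared features in, class probabilities and box refinements out, follows the text):

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_pool

class DetectionHead(nn.Module):
    """Fast R-CNN style head: pools each proposal and predicts class scores and box offsets."""
    def __init__(self, in_channels=256, num_classes=2, pool_size=7):
        super().__init__()
        self.pool_size = pool_size
        self.fc = nn.Sequential(
            nn.Linear(in_channels * pool_size * pool_size, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
        )
        self.cls_score = nn.Linear(1024, num_classes)        # background + electrostatic sign
        self.bbox_pred = nn.Linear(1024, num_classes * 4)    # per-class box refinement

    def forward(self, feature_map, proposals, spatial_scale=1.0 / 16):
        # proposals: list with one (K, 4) box tensor per image, in image coordinates
        pooled = roi_pool(feature_map, proposals, (self.pool_size, self.pool_size), spatial_scale)
        x = self.fc(pooled.flatten(1))
        return self.cls_score(x).softmax(dim=1), self.bbox_pred(x)
```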
And S3, acquiring a real-time vehicle picture of the oil unloading operation site, importing the acquired real-time vehicle picture into the detection model of the electrostatic sign for identification, and determining the type and the position of the electrostatic sign on the acquired real-time vehicle picture.
The specific process is as follows: a real-time vehicle picture of the oil unloading operation site is acquired and identified with the detection model of the electrostatic sign, and the category and position of the electrostatic sign on the real-time vehicle picture are obtained quickly.
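As an illustration of this inference step, the sketch below uses the off-the-shelf torchvision Faster R-CNN with a ResNet backbone as a stand-in for the trained detection model of the electrostatic sign (the weight file, image path and score threshold are hypothetical):

```python
import cv2
import torch
import torchvision

# Stand-in for the trained detection model (num_classes=2: background + electrostatic sign)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
# model.load_state_dict(torch.load("electrostatic_sign_detector.pth"))  # hypothetical weights
model.eval()

frame = cv2.imread("realtime_vehicle.jpg")             # real-time picture of the unloading site
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0

with torch.no_grad():
    detections = model([tensor])[0]

# Report category and position of each detected electrostatic-sign candidate
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.5:
        print(int(label), [round(v, 1) for v in box.tolist()], float(score))
```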
As shown in fig. 2, in an embodiment of the present invention, an improved Faster R-CNN based system for identifying electrostatic signs at a gas station is provided, which includes:
the data acquisition unit 110 is used for acquiring a vehicle picture including a vehicle body and a static mark on the vehicle body in an oil unloading operation field, and preprocessing the vehicle picture into a sample data set;
a detection model construction unit 120, configured to construct a Faster R-CNN neural network model, extract a feature map through the ResNet structure in the Faster R-CNN neural network when the sample data set is imported into the Faster R-CNN neural network model for end-to-end joint training of the RPN network, obtain the target region and target category of the electrostatic sign, and further integrate the obtained target region and target category of the electrostatic sign into a preset Fast R-CNN neural network model to obtain a detection model of the electrostatic sign;
and the detection result output unit 130 is used for acquiring a real-time vehicle picture of the oil unloading operation site, importing the acquired real-time vehicle picture into the detection model of the electrostatic sign for identification, and determining the type and the position of the electrostatic sign on the acquired real-time vehicle picture.
Wherein the electrostatic mark is triangular in shape.
The embodiment of the invention has the following beneficial effects:
compared with the prior art, the invention modifies Faster R-CNN while retaining its advantages: the RPN and the detection network share the convolutional layers, and the target region and target category are integrated in one neural network model for prediction, so that high-speed, real-time detection is achieved, the detection performance can be optimized end to end, and the electrostatic sign can be identified efficiently.
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by relevant hardware instructed by a program, and the program may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc.
The above disclosure describes only preferred embodiments of the present invention and is not intended to limit the scope of the invention, which is defined by the appended claims.

Claims (7)

1. An improved Faster R-CNN based electrostatic sign identification method for a gas station, characterized in that the method comprises the following steps:
s1, collecting a vehicle picture containing a vehicle body and a static mark on the vehicle body in an oil unloading operation site, and preprocessing the vehicle picture into a sample data set;
s2, constructing a Faster R-CNN neural network model, extracting a characteristic diagram by combining a ResNet structure in the Faster R-CNN neural network when the sample data set is imported into the Fast R-CNN neural network model to carry out end-to-end joint training on an RPN, obtaining a target region and a target type of the electrostatic marker, and further integrating the obtained target region and the target type of the electrostatic marker into a preset Fast R-CNN neural network model to obtain a detection model of the electrostatic marker;
and S3, acquiring a real-time vehicle picture of the oil unloading operation site, importing the acquired real-time vehicle picture into the detection model of the electrostatic sign for identification, and determining the type and the position of the electrostatic sign on the acquired real-time vehicle picture.
2. The improved Faster R-CNN based electrostatic sign identification method for a gas station as claimed in claim 1, wherein the electrostatic sign is triangular in shape.
3. The improved Faster R-CNN based electrostatic sign identification method for a gas station as claimed in claim 1, wherein in said step S2, when the sample data set is imported into the Faster R-CNN neural network model to perform end-to-end joint training on the RPN network, the step of extracting a feature map through the ResNet structure in the Faster R-CNN neural network to obtain the target region and target category of the electrostatic sign specifically comprises:
in the training of the upper half branch of the RPN network, the feature map extracted by the ResNet structure is divided into a plurality of anchor boxes (anchors), and the target region proposals whose labels correspond to the electrostatic sign are selected from these anchors;
in the training of the lower half branch of the RPN network, the positions of the target region proposals from the upper half branch are obtained, giving the target category of the electrostatic sign.
4. The improved Faster R-CNN based electrostatic sign identification method for a gas station as claimed in claim 3, wherein the step of obtaining the positions of the target region proposals comprises:
selecting the N target region proposals with the highest probability, and performing non-maximum suppression on the selected N target region proposals to obtain the M target region proposals with the highest probability.
5. The improved Faster R-CNN based electrostatic sign identification method for a gas station as claimed in claim 3, wherein the loss function generated by the joint training of the two branches when the RPN network is trained is:
L({pi}, {ti}) = (1/Ncls) Σi Lcls(pi, pi*) + λ (1/Nreg) Σi pi* Lreg(ti, ti*)
Lcls(pi, pi*) = -log[pi* pi + (1 - pi*)(1 - pi)]
Lreg(ti, ti*) = R(ti - ti*)
wherein i represents the serial number of an anchor in each mini-batch; pi represents the predicted target probability of anchor i; pi* represents the label of anchor i, taking the value 1 for a positive anchor and 0 for a negative anchor; ti represents the 4 parameters of the prediction box; ti* represents the parameters of the calibration box; Lcls represents the classification loss function; Lreg represents the regression loss function; the term pi* Lreg means that regression is performed only on positive samples, since pi* = 0 for negative samples; Ncls and Nreg are normalization terms and λ is a balancing weight; the cls branch and the reg branch output pi and ti, respectively.
6. An improved Faster R-CNN based electrostatic sign identification system for a gas station, comprising:
the data acquisition unit is used for acquiring a vehicle picture comprising a vehicle body and a static mark arranged on the vehicle body on an oil unloading operation site, and preprocessing the vehicle picture into a sample data set;
the detection model construction unit is used for constructing a Faster R-CNN neural network model, extracting a feature map through the ResNet structure in the Faster R-CNN neural network when the sample data set is imported into the Faster R-CNN neural network model for end-to-end joint training of the RPN network, obtaining the target region and target category of the electrostatic sign, and further integrating the obtained target region and target category of the electrostatic sign into a preset Fast R-CNN neural network model to obtain a detection model of the electrostatic sign;
and the detection result output unit is used for acquiring a real-time vehicle picture of an oil unloading operation site, importing the acquired real-time vehicle picture into the detection model of the electrostatic sign for identification, and determining the type and the position of the electrostatic sign on the acquired real-time vehicle picture.
7. The improved Faster R-CNN based electrostatic sign identification system for a gas station as claimed in claim 6, wherein the electrostatic sign is triangular in shape.
CN202011052340.0A 2020-09-29 2020-09-29 Improved Faster R-CNN gas station electrostatic sign identification method and system Pending CN112364687A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011052340.0A CN112364687A (en) 2020-09-29 2020-09-29 Improved Faster R-CNN gas station electrostatic sign identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011052340.0A CN112364687A (en) 2020-09-29 2020-09-29 Improved Faster R-CNN gas station electrostatic sign identification method and system

Publications (1)

Publication Number Publication Date
CN112364687A true CN112364687A (en) 2021-02-12

Family

ID=74506482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011052340.0A Pending CN112364687A (en) 2020-09-29 2020-09-29 Improved Faster R-CNN gas station electrostatic sign identification method and system

Country Status (1)

Country Link
CN (1) CN112364687A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451602A (en) * 2017-07-06 2017-12-08 浙江工业大学 A kind of fruits and vegetables detection method based on deep learning
CN108009509A (en) * 2017-12-12 2018-05-08 河南工业大学 Vehicle target detection method
CN108509839A (en) * 2018-02-02 2018-09-07 东华大学 One kind being based on the efficient gestures detection recognition methods of region convolutional neural networks
WO2020020472A1 (en) * 2018-07-24 2020-01-30 Fundación Centro Tecnoloxico De Telecomunicacións De Galicia A computer-implemented method and system for detecting small objects on an image using convolutional neural networks
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
CN110031690A (en) * 2019-04-24 2019-07-19 南京迈品防静电设备有限公司 A kind of electrostatic detection gate inhibition for gas station
CN110211097A (en) * 2019-05-14 2019-09-06 河海大学 A kind of crack image detecting method based on the migration of Faster R-CNN parameter
CN110456723A (en) * 2019-08-15 2019-11-15 成都睿晓科技有限公司 A kind of emptying area, gas station security management and control system based on deep learning
CN110555842A (en) * 2019-09-10 2019-12-10 太原科技大学 Silicon wafer image defect detection method based on anchor point set optimization
CN111582344A (en) * 2020-04-29 2020-08-25 上善智城(苏州)信息科技有限公司 Method for identifying state of oil discharge port cover of gas station

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHAOQING REN et al.: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pages 1137-1149, XP055705510, DOI: 10.1109/TPAMI.2016.2577031 *
LI Jiuhao et al.: "Detection of bitter gourd leaf diseases in the field based on improved Faster R-CNN", Transactions of the Chinese Society of Agricultural Engineering, vol. 36, no. 12, pages 179-185 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011514A (en) * 2021-03-29 2021-06-22 吉林大学 Intracranial hemorrhage sub-type classification algorithm applied to CT image based on bilinear pooling
CN113011514B (en) * 2021-03-29 2022-01-14 吉林大学 Intracranial hemorrhage sub-type classification algorithm applied to CT image based on bilinear pooling
CN114545102A (en) * 2022-01-14 2022-05-27 深圳市中明科技股份有限公司 Online monitoring system

Similar Documents

Publication Publication Date Title
CN108090906B (en) Cervical image processing method and device based on region nomination
CN111461209B (en) Model training device and method
CN105574550A (en) Vehicle identification method and device
CN108509950B (en) Railway contact net support number plate detection and identification method based on probability feature weighted fusion
CN110555420A (en) fusion model network and method based on pedestrian regional feature extraction and re-identification
CN115272204A (en) Bearing surface scratch detection method based on machine vision
CN111008576A (en) Pedestrian detection and model training and updating method, device and readable storage medium thereof
CN114140665A (en) Dense small target detection method based on improved YOLOv5
CN112364687A (en) Improved Faster R-CNN gas station electrostatic sign identification method and system
CN111027538A (en) Container detection method based on instance segmentation model
CN115439458A (en) Industrial image defect target detection algorithm based on depth map attention
CN113850799A (en) YOLOv 5-based trace DNA extraction workstation workpiece detection method
CN116385374A (en) Cell counting method based on convolutional neural network
CN117437647B (en) Oracle character detection method based on deep learning and computer vision
CN114882204A (en) Automatic ship name recognition method
CN113989604A (en) Tire DOT information identification method based on end-to-end deep learning
CN111767919B (en) Multilayer bidirectional feature extraction and fusion target detection method
CN110889418A (en) Gas contour identification method
CN116740758A (en) Bird image recognition method and system for preventing misjudgment
CN114927236A (en) Detection method and system for multiple target images
CN115953744A (en) Vehicle identification tracking method based on deep learning
Jain et al. Number plate detection using drone surveillance
CN112347967B (en) Pedestrian detection method fusing motion information in complex scene
CN111680691B (en) Text detection method, text detection device, electronic equipment and computer readable storage medium
CN114782983A (en) Road scene pedestrian detection method based on improved feature pyramid and boundary loss

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination