CN111968114B - Orthopedics consumable detection method and system based on cascade deep learning method - Google Patents

Orthopedics consumable detection method and system based on cascade deep learning method

Info

Publication number
CN111968114B
CN111968114B (application CN202010940962.0A)
Authority
CN
China
Prior art keywords
consumable
orthopedic
deep learning
training
orthopedics
Prior art date
Legal status
Active
Application number
CN202010940962.0A
Other languages
Chinese (zh)
Other versions
CN111968114A (en
Inventor
宋尚玲
杨阳
Current Assignee
Second Hospital of Shandong University
Original Assignee
Second Hospital of Shandong University
Priority date
Filing date
Publication date
Application filed by Second Hospital of Shandong University filed Critical Second Hospital of Shandong University
Priority to CN202010940962.0A priority Critical patent/CN111968114B/en
Publication of CN111968114A publication Critical patent/CN111968114A/en
Application granted granted Critical
Publication of CN111968114B publication Critical patent/CN111968114B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/60Rotation of whole images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/92Dynamic range modification of images or parts thereof based on global image properties
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an orthopedic consumable detection method and system based on a cascade deep learning method, comprising the following steps: acquiring orthopedic consumable image data for training, and manually marking the position of the orthopedic consumable object to be detected in each training image; detecting the manually marked training images with at least two detectors, the detection being divided into two stages, namely detecting the orthopedic consumable outer box with one detector and detecting the orthopedic consumable itself with another; inputting the data set into an orthopedic consumable outer box deep learning model and training it to obtain a trained deep learning model; and, based on the trained orthopedic consumable outer box deep learning model and the orthopedic consumable deep learning model, detecting a new orthopedic consumable image to obtain the detection result of the new orthopedic consumable image to be detected.

Description

Orthopedics consumable detection method and system based on cascade deep learning method
Technical Field
The invention belongs to the technical field of artificial intelligence, and relates to an orthopedic consumable detection method and system based on a deep learning method.
Background
Object detection aims to find the objects of interest in a picture and to determine their location and category. Target detection technology has developed into two main families of methods: traditional classical target detection methods and deep learning methods related to artificial intelligence.
The traditional target detection method mainly comprises three steps: region selection, feature extraction, and classification. In implementing the present disclosure, the inventors found that the following technical problems exist in the classical technique:
In the region selection process, since the target may appear anywhere in the image and its size and aspect ratio cannot be determined in advance, a sliding window strategy was initially adopted to traverse the entire image. Its disadvantages include excessive time complexity and the creation of many redundant, unnecessary windows, which severely affects the speed and performance of subsequent feature extraction and classification.
For feature extraction, SIFT and HOG features are generally used, but the detection effect is not ideal owing to the diversity of target forms, illumination variations, and backgrounds. The extracted features struggle to describe the picture accurately, so a specific feature extraction method must be sought to improve detection accuracy.
Among the classifiers, SVM, Adaboost, and the like are mainly used. However, these methods require the selection of specific parameters for specific scenarios and have no uniform selection criteria. In summary, the main problem of the prior art is how to handle position calibration and classification detection of orthopedic consumables in images so that fast and accurate target detection is achieved.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides an orthopedic consumable detection method and system based on a cascade deep learning method; even with little data, high similarity between target and background, and a single application scene, the neural network can still achieve a good detection effect owing to its robustness.
In order to achieve the purpose, the invention is realized by adopting the following technical scheme:
an orthopedic consumable detection method based on a cascade deep learning method comprises the following steps:
step 1: data preparation
Acquiring orthopedic consumable image data for training, and manually marking the position of the orthopedic consumable object to be detected in each training image. The marked data set is then expanded using data enhancement methods such as blurring, denoising, and translation.
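For illustration only, a minimal augmentation sketch in Python is given below, assuming OpenCV and NumPy are available and the input is an 8-bit BGR image; the specific operations mirror those named in this step and in Example 1 (blur, denoise, translation, 45-degree rotation, saturation/exposure/hue adjustment), but all parameter values are assumptions, not values taken from the patent.

```python
# Illustrative augmentation utilities; parameter values are examples only.
import cv2
import numpy as np

def augment(image):
    """Return several augmented copies of one 8-bit BGR training image."""
    h, w = image.shape[:2]
    out = []

    # Blur and denoise variants.
    out.append(cv2.GaussianBlur(image, (5, 5), 0))
    out.append(cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 21))

    # Translation (here by a quarter of the image width).
    m = np.float32([[1, 0, w // 4], [0, 1, 0]])
    out.append(cv2.warpAffine(image, m, (w, h)))

    # 45-degree rotation about the image centre.
    r = cv2.getRotationMatrix2D((w / 2, h / 2), 45, 1.0)
    out.append(cv2.warpAffine(image, r, (w, h)))

    # Saturation / exposure / hue adjustment in HSV space.
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] *= 1.3                       # saturation
    hsv[..., 2] *= 1.1                       # exposure (value channel)
    hsv[..., 0] = (hsv[..., 0] + 10) % 180   # hue shift (OpenCV hue range 0-179)
    out.append(cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR))
    return out
```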
Step 2: bone nail box class classifier training
The trained outer box detector (K-nearest neighbor) is used to detect the outer box of the orthopedic consumable. For this, the data set of consumable outer boxes is input into the orthopedic consumable outer box deep learning model, and the model is trained to obtain a trained outer box detection model.
And step 3: yolov3 model training
The manually labeled data set prepared in step 1 is input into the orthopedic consumable content deep learning model YOLOv3, and the model is trained to obtain a trained fine detection model for the orthopedic consumable contents, which outputs the position, quantity, and type of the bone nails in the bone nail box.
And 4, step 4: model use
Based on the trained orthopedic consumable outer box deep learning model and the orthopedic consumable deep learning model, a new orthopedic consumable image is detected. The picture of the bone nail box to be identified is input into the network, and the class of the bone nail box is obtained through the bone nail box classification. The trained YOLOv3 model weights are then loaded, and the positions, number, and types of the bone nails in the bone nail box are determined, giving the detection result for the new orthopedic consumable image to be detected.
Compared with the prior art, the invention has the advantages and positive effects that:
1. The disclosed method and system for detecting orthopedic consumables place no strict requirements on the quantity or quality of the original image data; at the same time, detecting the orthopedic consumable outer box and then the orthopedic consumable with a deep learning method greatly improves detection accuracy.
2. The present disclosure first detects the type of the orthopedic consumable outer box and then performs the corresponding orthopedic consumable detection according to the outer box type; compared with detecting the orthopedic consumable directly, the accuracy improves by 50%-60% while the workload increases by less than 2%.
3. The invention provides a way to apply deep learning when only a small amount of data is available: when the amount of accurately labeled data is insufficient, first detecting the orthopedic consumable outer box and then detecting the orthopedic consumable is superior to detecting the orthopedic consumable alone.
4. The deep learning training method can effectively solve the problem of few data sets and improve the detection accuracy.
Drawings
Fig. 1 is a schematic diagram of data enhancement.
FIG. 2 shows the image annotation result and its XML file.
FIG. 3 is a graph showing the effect of the detection of YOLO V3.
FIG. 4 is a diagram showing the detection results.
FIG. 5 is a schematic flow chart of the operation of the present invention.
FIG. 6 is a schematic diagram of the cascade model.
FIG. 7 is a flow chart of model training.
FIG. 8 is a flow chart of model usage.
FIG. 9 is a diagram of the YOLOv3 network architecture.
Detailed Description
In order that the above objects, features and advantages of the present invention may be more clearly understood, the present invention will be further described with reference to specific embodiments. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and thus the present invention is not limited to the specific embodiments of the present disclosure.
1. Orthopedic consumables: materials frequently used by hospitals in the course of providing medical services. They come in many types and are used in large quantities, forming a material basis for medical and nursing work; some orthopedic consumables are expensive, so their detection and management have great practical application value.
FastRCNN (Fast Region-based Convolutional Neural Network) is a deep neural network applied in the field of target detection and a two-stage algorithm: candidate regions are generated first and then classified by a CNN. It was one of the pioneering applications of deep learning to target detection and can achieve high detection precision, but it is somewhat slow. Its improved version is FasterRCNN, which can achieve higher detection accuracy.
YOLO (You Only Look Once) is a deep neural network for target detection published at CVPR in 2016 and is a one-stage algorithm. It is fast enough for real-time detection, while its detection effect also reaches a state-of-the-art level. YOLOv3 is the third version, with significant improvements in both accuracy and speed.
The K-nearest neighbor classification algorithm is theoretically mature and one of the simplest machine learning algorithms. Its idea is: in the feature space, if most of the K nearest samples (i.e., the nearest neighbors in the feature space) of a sample belong to a certain class, then the sample also belongs to this class.
Example 1
The orthopedics consumable detection method based on the cascade deep learning method comprises the following steps of:
and S1, acquiring medical consumable image data for training, performing data enhancement, and enlarging a picture data set. The expanding method is that the original image is rotated by 45 degrees, the saturation is adjusted, the exposure is adjusted, and the hue is adjusted. The expansion method is shown in fig. 1.
S2: the acquired image is the original dataset without any label calibration, and the calibration tool used is labelImg. The annotations are saved as XML files in the PASCAL VOC format (the format used by ImageNet), as shown in fig. 2. During labeling, the labeling frame of the labeled object is translated by 1/4 units in four directions, namely up, down, left and right, so that the training set is expanded to 4 times of the original training set.
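As an illustration of the 1/4-unit box translation described above, here is a minimal sketch; it assumes boxes parsed from the labelImg XML are stored as (x, y, w, h) pixel tuples with (x, y) the top-left corner, and it interprets "1/4 unit" as a quarter of the box dimension, following the fuller description in the comparative example below.

```python
# Illustrative expansion of one annotation box into four shifted copies
# (up, down, left, right), giving the 4x training-set expansion mentioned above.
def expand_annotation(box):
    x, y, w, h = box
    dx, dy = w // 4, h // 4   # "1/4 unit" taken as a quarter of the box size (assumption)
    return [
        (x, y - dy, w, h),    # shifted up
        (x, y + dy, w, h),    # shifted down
        (x - dx, y, w, h),    # shifted left
        (x + dx, y, w, h),    # shifted right
    ]
```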
S3: and performing class estimation by using a K nearest neighbor algorithm. In the classification module, the simplest Euclidean distance is used for measuring the distance between the picture and the template, so that the classification of the picture is determined. The formula used is as follows:
L = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}
where L is the Euclidean distance between the picture and the template, n is the number of pixels in the picture, x_i is the pixel value on the picture, and y_i is the pixel value on the template.
Using the calculated Euclidean distances, and based on multiple experimental results, K is set to 3, which gives good results; the type of bone nail box to which the picture belongs is determined from its K nearest templates, completing the classification of the bone nail box.
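A minimal K-nearest-neighbor sketch matching the formula above (K = 3, pixel-wise Euclidean distance) is given below; the template images and their labels are assumed inputs, and this is an illustration rather than the patented implementation.

```python
# Illustrative KNN classification of a bone nail box picture against templates.
import numpy as np
from collections import Counter

def knn_classify(picture, templates, labels, k=3):
    """picture: HxW(xC) array; templates: list of same-shape arrays; labels: box classes."""
    x = picture.astype(np.float32).ravel()
    dists = [np.sqrt(np.sum((x - t.astype(np.float32).ravel()) ** 2)) for t in templates]
    nearest = np.argsort(dists)[:k]                  # indices of the K closest templates
    votes = Counter(labels[i] for i in nearest)      # majority vote among the neighbours
    return votes.most_common(1)[0][0]
```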
S4: The data set is input into the orthopedic consumable deep learning model YOLOv3 (the YOLOv3 network model is shown in fig. 9), and the model is trained to obtain the trained deep learning model. The positions of the bone nails in the bone nail box can then be detected and marked, as shown in fig. 3. The model training process is shown in fig. 7.
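For orientation only, the following sketch prepares the darknet-style configuration files that standard YOLOv3 training consumes; all file paths and class names here are assumptions for illustration, not values disclosed in the patent.

```python
# Illustrative preparation of darknet-style YOLOv3 training configuration.
from pathlib import Path

classes = ["bone_nail_type_a", "bone_nail_type_b"]          # hypothetical class names
Path("consumable.names").write_text("\n".join(classes) + "\n")

Path("consumable.data").write_text(
    "classes = {}\n"
    "train   = train.txt\n"      # list of training image paths
    "valid   = valid.txt\n"      # list of validation image paths
    "names   = consumable.names\n"
    "backup  = backup/\n".format(len(classes))
)

# Training is then launched with the standard darknet command line, e.g.:
#   ./darknet detector train consumable.data yolov3.cfg darknet53.conv.74
# (yolov3.cfg must have its class count and the preceding filters adjusted to the data set.)
```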
Example 2
The orthopedics consumable detection method based on the cascade deep learning method comprises the following steps of:
s1: data output
After model training is complete, the trained weights are available. The picture of the bone nail box to be detected is input into the model; the picture to be detected is shown in fig. 4.
S2: bone nail magazine sorting
The picture of the bone nail box to be detected is input into the model, and the class of the bone nail box is obtained through the linear classifier, completing the first-stage determination of the bone nail box type.
S3: YOLOv3 model weight loading
The corresponding trained weights are loaded according to the classification result of step S2, completing the loading of the trained YOLOv3 weights.
S4: result output
The loaded model is applied to obtain the detection result of the new orthopedic consumable image to be detected, as shown in fig. 5. The model usage flow is shown in fig. 8.
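A structural sketch of the two-stage inference flow of Example 2 is given below; the classifier object, the weight table, and the loader function are placeholders passed in by the caller, so this only illustrates the cascade order (classify the box first, then detect with the weights for that box type).

```python
# Illustrative two-stage inference: box classification, then class-specific YOLOv3 detection.
def detect_consumables(image, box_classifier, yolo_weights_by_class, load_yolo):
    # Stage 1: determine the bone nail box type (e.g. with the KNN classifier sketched above).
    box_type = box_classifier(image)

    # Stage 2: load the YOLOv3 weights trained for that box type and run detection.
    detector = load_yolo(yolo_weights_by_class[box_type])
    detections = detector(image)          # e.g. a list of (class, confidence, bbox) tuples
    return box_type, detections
```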
Comparative example 1
In order to verify that cascading two detectors gives better labeling results on a weakly labeled data set than a single detector used alone, the following experiments are carried out:
(1) Acquiring labeled data and unlabeled data. In the experiment, in order to simulate a real scene, the labeled data is a selected part of the data set, and the unlabeled data is the remaining data with its labels manually removed. In actual operation, all of the acquired data is unlabeled; part of it must be labeled manually and the rest of the original data remains unlabeled. The acquired labeled data is set to comprise a training set and a test set, where a label refers to the coordinates of the object in the picture and the orthopedic consumable category; the acquired unlabeled data is image data without labels.
(2) Labeling using different labeling tools: different detectors n1 and n2 are used, corresponding to target detectors based on the FasterRCNN and YOLOv3 networks, respectively.
(3) First, the manually labeled data is used to train YOLOv3 for direct orthopedic consumable target detection, giving accuracy a1; then the manually labeled data is used to train YOLOv3 to first detect the orthopedic consumable outer box and then detect the orthopedic consumable, giving accuracy a2.
(4) First, the manually labeled data is used to train FasterRCNN for direct orthopedic consumable target detection, giving accuracy a3; then the manually labeled data is used to train FasterRCNN to first detect the orthopedic consumable outer box and then detect the orthopedic consumable, giving accuracy a4.
(5) And comparing the four accuracy rates to obtain the optimal labeling method.
Fig. 6 is a schematic diagram of the cascade model of the present disclosure. Part of the data is labeled first; the labeled data set is then fed into a deep learning target detector (YOLOv3/FasterRCNN) for direct orthopedic consumable target detection, giving accuracies a1 and a3. The labeled data set is also fed into a deep learning target detector (YOLOv3/FasterRCNN) that first detects and classifies the orthopedic consumable outer box and then detects the orthopedic consumable, giving accuracies a2 and a4. Finally, the schemes are compared and the better one is selected, namely performing outer box detection of the orthopedic consumable first and then detecting the orthopedic consumable.
Two methods of calculating the detection accuracy are as follows.
Calculation according to the overlap rate:
overlap = area(B_det ∩ B_gt) / area(B_det ∪ B_gt)
where B_det is the image position marked by the detector and B_gt is the real position of the image in the verification set. If the overlap between the two exceeds M1, the detection is judged to be successful; M1 is chosen here as 60%.
Calculation according to the center point distance:
d = \sqrt{(x_{det} - x_{gt})^2 + (y_{det} - y_{gt})^2}
where (x_{det}, y_{det}) is the center point of the image marked by the detector and (x_{gt}, y_{gt}) is the real center point of the image in the verification set. If the distance d between the two is less than M2, the detection is judged to be successful; M2 is chosen here as 20 pixels.
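The two success criteria above can be sketched as follows; the thresholds are those stated in the text (M1 = 0.6, M2 = 20 px), boxes are assumed to be (x, y, w, h) tuples with (x, y) the top-left corner, and each criterion is applied separately, as in Tables 1 and 2.

```python
# Illustrative implementation of the two detection-success criteria.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def centre_distance(a, b):
    acx, acy = a[0] + a[2] / 2, a[1] + a[3] / 2
    bcx, bcy = b[0] + b[2] / 2, b[1] + b[3] / 2
    return ((acx - bcx) ** 2 + (acy - bcy) ** 2) ** 0.5

def success_by_overlap(pred, truth, m1=0.6):
    return iou(pred, truth) > m1            # criterion used for Table 1

def success_by_centre(pred, truth, m2=20):
    return centre_distance(pred, truth) < m2  # criterion used for Table 2
```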
(1) First, a training set VOT and a test set OTB are selected, and the parts repeated between the VOT and OTB data sets are removed. In the training set VOT, in order to simulate a real scene, the orthopedic consumable images are accurately manually labeled and serve as the reference standard.
(2) In actual work, the collected data may also be labeled inaccurately. To address this, the present disclosure proposes a way to augment the data set. To simulate inaccurate labeling, noise is first added to the accurately labeled data set: each labeling box is randomly translated up, down, left, or right by 1/4 unit (each direction with probability 25%; a left or right translation shifts the box by 1/4 of the rectangle's length with the width unchanged, and an up or down translation shifts the box by 1/4 of the rectangle's width with the length unchanged);
(3) First, the manually labeled data is used to train YOLOv3 for direct orthopedic consumable target detection, giving accuracy a1; then the manually labeled data is used to train YOLOv3 to first detect the orthopedic consumable outer box and then detect the orthopedic consumable, giving accuracy a2.
(4) First, the manually labeled data is used to train FasterRCNN for direct orthopedic consumable target detection, giving accuracy a3; then the manually labeled data is used to train FasterRCNN to first detect the orthopedic consumable outer box and then detect the orthopedic consumable, giving accuracy a4.
(5) The final recognition rates of the four different schemes are compared to determine the optimal solution. The tracking success rate calculated from the overlap rate is shown in Table 1, and the tracking success rate calculated from the center point distance is shown in Table 2.
TABLE 1 tracking success rate calculated from overlap ratio
(The data of Table 1 is provided as an image in the original document and is not reproduced here.)
Table 1 compares, using the tracking success rate calculated from the overlap rate, the effect of directly feeding orthopedic consumable detection into deep learning model training with the effect of first performing orthopedic consumable outer box detection and then performing orthopedic consumable detection.
TABLE 2 tracking success rate calculated from center point distance
(The data of Table 2 is provided as an image in the original document and is not reproduced here.)
Table 2, based on the tracking success rate calculated from the center point offset, reaches the same conclusion as Table 1 when comparing directly feeding orthopedic consumable detection into deep learning model training with first performing orthopedic consumable outer box detection and then performing orthopedic consumable detection.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention in other forms. Any person skilled in the art may use the disclosed technical content to make equivalent embodiments with equivalent changes; any simple modification or equivalent change made to the above embodiments according to the technical spirit of the present invention, without departing from it, still falls within the protection scope of the present invention.

Claims (5)

1. An orthopedic consumable detection method based on a cascade deep learning method is characterized by comprising the following steps:
step 1: data preparation
Acquiring image data of orthopedic consumables for training, manually marking the position of an orthopedic consumable object to be detected, and expanding a marked data set;
step 2: bone nail box class classifier training
Inputting a data set of the consumable outer box into the orthopedics consumable outer box deep learning model, and training the deep learning model to obtain a trained outer box detection model;
and step 3: yolov3 model training
Inputting the manual labeling data set prepared in the step 1 into an orthopedics consumable content deep learning model YOLOv3, training the deep learning model to obtain a trained orthopedics consumable content fine detection model, and simultaneously outputting the positions, the quantity and the types of bone nails in a bone nail box;
and 4, step 4: model use
Based on the trained orthopedics consumable outer box deep learning model and the orthopedics consumable deep learning model, the new orthopedics consumable image is detected.
2. The method for detecting the orthopedic consumables based on the cascade deep learning method according to claim 1, wherein the step 2 utilizes a K-nearest neighbor algorithm to perform class estimation, and the formula is
L = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}
where L is the Euclidean distance between the picture and the template, n is the number of pixels in the picture, x_i is the pixel value on the picture, and y_i is the pixel value on the template.
3. The method for detecting the orthopedic consumables based on the cascade deep learning method according to claim 1, wherein the specific operation of the data expansion is as follows: translating the marking frame of the orthopedic consumable object to be detected by 1/4 unit in the up, down, left and right directions, so that the training set is expanded to 4 times the original training set.
4. The method for detecting the orthopedic consumables according to claim 3, wherein the data expansion step further comprises blurring and rotating the image.
5. Orthopedics consumable detection system based on cascade deep learning method, its characterized in that includes:
a pre-processing module configured to: acquiring orthopedic consumable image data for training, and manually marking the position of an orthopedic consumable object to be detected in the orthopedic consumable image for training; inputting an original image of the orthopedic consumable, and outputting an expanded label labeling image subjected to data enhancement and image transformation;
a training module configured to: detecting the training images after the manual marking by using at least two detectors, wherein the detection process is divided into two processes, namely detecting the orthopedic consumable outer box by using the detectors, and detecting the orthopedic consumable by using the detectors; inputting the data set into an orthopedics consumable outer box deep learning model, and training the deep learning model to obtain a trained deep learning model; inputting the data set into an orthopedics consumable deep learning model, and training the deep learning model to obtain a trained deep learning model; inputting a labeled image; outputting a trained deep network model YOLOv 3;
an image detection module configured to: detect a new orthopedic consumable image based on the trained orthopedic consumable outer box deep learning model and the trained orthopedic consumable deep learning model to obtain the detection result of the new orthopedic consumable image to be detected; its input is the unlabeled new orthopedic consumable picture together with the trained deep models, and its output is the labeling and judgment result for the new picture.
CN202010940962.0A 2020-09-09 2020-09-09 Orthopedics consumable detection method and system based on cascade deep learning method Active CN111968114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010940962.0A CN111968114B (en) 2020-09-09 2020-09-09 Orthopedics consumable detection method and system based on cascade deep learning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010940962.0A CN111968114B (en) 2020-09-09 2020-09-09 Orthopedics consumable detection method and system based on cascade deep learning method

Publications (2)

Publication Number Publication Date
CN111968114A CN111968114A (en) 2020-11-20
CN111968114B true CN111968114B (en) 2021-04-09

Family

ID=73392713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010940962.0A Active CN111968114B (en) 2020-09-09 2020-09-09 Orthopedics consumable detection method and system based on cascade deep learning method

Country Status (1)

Country Link
CN (1) CN111968114B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011486A (en) * 2021-03-12 2021-06-22 重庆理工大学 Chicken claw classification and positioning model construction method and system and chicken claw sorting method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109979576A (en) * 2019-03-04 2019-07-05 上海零库存医疗供应链管理有限公司 A kind of information-based efficient management of screw box
CN111080700A (en) * 2019-12-11 2020-04-28 中国科学院自动化研究所 Medical instrument image detection method and device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108171269A (en) * 2018-01-04 2018-06-15 吴勤旻 A kind of medical instrument pattern recognition device
CN108198613A (en) * 2018-01-31 2018-06-22 九州通医疗信息科技(武汉)有限公司 Information registering method and device
CN110969451A (en) * 2018-09-28 2020-04-07 快创科技(大连)有限公司 Medical instrument display classification system based on intelligent product picture album
CN110414551A (en) * 2019-06-14 2019-11-05 田洪涛 A kind of method and system classified automatically based on RCNN network to medical instrument
CN110782005B (en) * 2019-09-27 2023-02-17 山东大学 Image annotation method and system for tracking based on weak annotation data
CN110765886B (en) * 2019-09-29 2022-05-03 深圳大学 Road target detection method and device based on convolutional neural network
CN110991444B (en) * 2019-11-19 2023-08-29 复旦大学 License plate recognition method and device for complex scene
CN111291799A (en) * 2020-01-21 2020-06-16 青梧桐有限责任公司 Room window classification model construction method, room window classification method and room window classification system
CN111626276B (en) * 2020-07-30 2020-10-30 之江实验室 Two-stage neural network-based work shoe wearing detection method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109979576A (en) * 2019-03-04 2019-07-05 上海零库存医疗供应链管理有限公司 A kind of information-based efficient management of screw box
CN111080700A (en) * 2019-12-11 2020-04-28 中国科学院自动化研究所 Medical instrument image detection method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
C-Arm Image-Based Surgical Path Planning Method for Distal Locking of Intramedullary Nails; Wei-En Hsu et al.; https://doi.org/10.1155/2018/4530386; 2018-05-23; 1-10 *
Research and discussion on the role of refined management in reducing the proportion of medical consumables; Song Shangling et al.; Hospital Digital Management; 2018-02-10; Vol. 33, No. 2; 168-170 *

Also Published As

Publication number Publication date
CN111968114A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
CN110414462B (en) Unsupervised cross-domain pedestrian re-identification method and system
JP6843086B2 (en) Image processing systems, methods for performing multi-label semantic edge detection in images, and non-temporary computer-readable storage media
EP2806374B1 (en) Method and system for automatic selection of one or more image processing algorithm
JP6397986B2 (en) Image object region recognition method and apparatus
CN105512683A (en) Target positioning method and device based on convolution neural network
CN107330027B (en) Weak supervision depth station caption detection method
US11823453B2 (en) Semi supervised target recognition in video
CN112949408B (en) Real-time identification method and system for target fish passing through fish channel
CN111552837A (en) Animal video tag automatic generation method based on deep learning, terminal and medium
EP4018358A1 (en) Negative sampling algorithm for enhanced image classification
JPWO2019111550A1 (en) Person matching device, method, and program
WO2022134580A1 (en) Method and apparatus for acquiring certificate information, and storage medium and computer device
CN111968114B (en) Orthopedics consumable detection method and system based on cascade deep learning method
CN108549915B (en) Image hash code training model algorithm based on binary weight and classification learning method
WO2019045101A1 (en) Image processing device and program
WO2024051427A1 (en) Coin identification method and system, and storage medium
CN110442736B (en) Semantic enhancer spatial cross-media retrieval method based on secondary discriminant analysis
CN111144469B (en) End-to-end multi-sequence text recognition method based on multi-dimensional associated time sequence classification neural network
CN117036904A (en) Attention-guided semi-supervised corn hyperspectral image data expansion method
Fragkiadakis et al. Towards a User-Friendly Tool for Automated Sign Annotation: Identification and Annotation of Time Slots, Number of Hands, and Handshape.
CN115203408A (en) Intelligent labeling method for multi-modal test data
Behera et al. Rotation axis focused attention network (rafa-net) for estimating head pose
CN115457620A (en) User expression recognition method and device, computer equipment and storage medium
CN110674342B (en) Method and device for inquiring target image
CN112232288A (en) Satellite map target identification method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant