CN106485268A - Image recognition method and device - Google Patents

Image recognition method and device

Info

Publication number
CN106485268A
Authority
CN
China
Prior art keywords
image
scan image
neural network
deep convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610854506.8A
Other languages
Chinese (zh)
Other versions
CN106485268B (en)
Inventor
邹博
刘玉洁
齐智峰
李锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp
Priority to CN201610854506.8A
Publication of CN106485268A
Application granted
Publication of CN106485268B
Active legal status (Current)
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G06F 18/254 Fusion techniques of classification results, e.g. of results related to same input data
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/56 Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

The present application relates to an image recognition method and device. The method includes: obtaining a scan image; performing feature extraction on the scan image to obtain extracted image features; performing target detection with the extracted image features based on a deep convolutional multilayer neural network target detection model to obtain candidate targets; and identifying the candidate targets with the extracted image features based on a deep convolutional multilayer neural network target classification model to obtain an image recognition result. Performing feature extraction on the scan image specifically comprises: obtaining the image features of each level of the deep convolutional multilayer neural network, fusing the image features of the levels, and taking the fused image features as the extracted image features. The application can improve the accuracy and efficiency of target detection.

Description

Image recognition method and device
Technical field
The present application relates to the technical field of image processing, and more particularly to an image recognition method and device.
Background art
In customs supervision areas such as trade ports, railway stations and airports, the articles carried by passengers often need to be inspected with security inspection equipment to determine whether the articles are dangerous goods or smuggled goods. How to detect dangerous goods or smuggled goods quickly and accurately during security inspection has become a problem that urgently needs to be solved.
In the prior art, when a passenger's belongings pass through the security inspection equipment, the image produced by X-ray scanning is displayed on a screen connected to the equipment, and security staff visually inspect the image on the screen to identify whether the articles in it are dangerous goods or contraband. This method of manually identifying dangerous goods or contraband suffers from heavy workload, low efficiency and limited accuracy.
Summary of the invention
To solve the existing technical problems, the present application is intended to provide an image recognition method and device that can identify images automatically, thereby improving the accuracy and efficiency of detection.
According to a first aspect of the embodiments of the present application, an image recognition method is provided. The method includes: obtaining a scan image; performing feature extraction on the scan image to obtain extracted image features; performing target detection with the extracted image features based on a deep convolutional multilayer neural network target detection model to obtain candidate targets; and identifying the candidate targets with the extracted image features based on a deep convolutional multilayer neural network target classification model to obtain an image recognition result. Performing feature extraction on the scan image specifically comprises: obtaining the image features of each level of the deep convolutional multilayer neural network, fusing the image features of the levels, and taking the fused image features as the extracted image features.
Optionally, before feature extraction is performed on the scan image, the method further includes: preprocessing the scan image, and setting different colors for images of different categories of scanned articles based on a scanned-article classification result.
Optionally, preprocessing the scan image and setting different colors for images of different categories of scanned articles based on the scanned-article classification result includes: obtaining the atomic number of a scanned article, and obtaining the density of the scanned article based on the atomic number; determining the category of the scanned article according to the density of the scanned article to obtain a scanned-article classification result; and setting different colors for images of different categories of scanned articles based on the scanned-article classification result.
Optionally, performing feature extraction on the scan image to obtain extracted image features includes: determining a target candidate region based on the color features of the scanned articles; and performing feature extraction in the target candidate region to obtain the extracted image features.
Optionally, performing target detection with the extracted image features based on the deep convolutional multilayer neural network target detection model to obtain candidate targets includes: performing target detection with the extracted image features based on a plurality of deep convolutional multilayer neural network target detection models to obtain a plurality of detection results; and fusing the plurality of detection results to obtain a final detection result as the candidate targets.
Optionally, fusing the plurality of detection results to obtain the final detection result as the candidate targets includes: fusing the plurality of detection results based on confidence calculation results to obtain the final detection result.
Optionally, the method further includes: judging, based on the image recognition result, whether dangerous goods or smuggled goods are present; and outputting prompt information if dangerous goods or smuggled goods are judged to be present.
Optionally, the method further includes: comparing the image recognition result with an article list to obtain a comparison result; and outputting the comparison result.
According to a second aspect of the embodiments of the present application, an image recognition device is provided. The device includes: an image obtaining module, configured to obtain a scan image; a feature extraction module, configured to perform feature extraction on the scan image to obtain extracted image features, wherein performing feature extraction on the scan image specifically comprises obtaining the image features of each level of a deep convolutional multilayer neural network, fusing the image features of the levels, and taking the fused image features as the extracted image features; a target detection module, configured to perform target detection with the extracted image features based on a deep convolutional multilayer neural network target detection model to obtain candidate targets; and a target classification module, configured to identify the candidate targets with the extracted image features based on a deep convolutional multilayer neural network target classification model to obtain an image recognition result.
Optionally, the device further includes: a preprocessing module, configured to preprocess the scan image and to set different colors for images of different categories of scanned articles based on a scanned-article classification result.
Optionally, the preprocessing module specifically includes: a density obtaining unit, configured to obtain the atomic number of a scanned article and to obtain the density of the scanned article based on the atomic number; a classification unit, configured to determine the category of the scanned article according to its density and to obtain a scanned-article classification result; and a color setting unit, configured to set different colors for images of different categories of scanned articles based on the scanned-article classification result.
Optionally, the feature extraction module is specifically configured to determine a target candidate region based on the color features of the scanned articles, and to perform feature extraction in the target candidate region to obtain the extracted image features.
Optionally, the target detection module specifically includes: a multi-model detection unit, configured to perform target detection with the extracted image features based on a plurality of deep convolutional multilayer neural network target detection models to obtain a plurality of detection results; and a result fusion unit, configured to fuse the plurality of detection results to obtain a final detection result as the candidate targets.
Optionally, the result fusion unit is specifically configured to fuse the plurality of detection results based on confidence calculation results to obtain the final detection result.
Optionally, the device further includes: a judging module, configured to judge, based on the image recognition result, whether dangerous goods or smuggled goods are present; and a first output module, configured to output prompt information if dangerous goods or smuggled goods are judged to be present.
Optionally, the device further includes: a comparison module, configured to compare the image recognition result with an article list to obtain a comparison result; and a second output module, configured to output the comparison result.
According to a third aspect of the embodiments of the present application, a device for image recognition is provided, which includes a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, and the one or more programs contain instructions for performing the following operations:
obtaining a scan image; performing feature extraction on the scan image to obtain extracted image features; performing target detection with the extracted image features based on a deep convolutional multilayer neural network target detection model to obtain candidate targets; and identifying the candidate targets with the extracted image features based on a deep convolutional multilayer neural network target classification model to obtain an image recognition result; wherein performing feature extraction on the scan image to obtain the extracted image features specifically comprises: obtaining the image features of each level of the deep convolutional multilayer neural network, fusing the image features of the levels, and taking the fused image features as the extracted image features.
Optionally, the processor is further configured to execute the one or more programs containing instructions for performing the following operations: preprocessing the scan image, and setting different colors for images of different categories of scanned articles based on a scanned-article classification result.
Optionally, the processor is further configured to execute the one or more programs containing instructions for performing the following operations: obtaining the atomic number of a scanned article, and obtaining the density of the scanned article based on the atomic number; determining the category of the scanned article according to the density of the scanned article to obtain a scanned-article classification result; and setting different colors for images of different categories of scanned articles based on the scanned-article classification result.
Optionally, the processor is further configured to execute the one or more programs containing instructions for performing the following operations: determining a target candidate region based on the color features of the scanned articles; and performing feature extraction in the target candidate region to obtain the extracted image features.
Optionally, the processor is further configured to execute the one or more programs containing instructions for performing the following operations: performing target detection with the extracted image features based on a plurality of deep convolutional multilayer neural network target detection models to obtain a plurality of detection results; and fusing the plurality of detection results to obtain a final detection result as the candidate targets.
Optionally, the processor is further configured to execute the one or more programs containing instructions for performing the following operations: fusing the plurality of detection results based on confidence calculation results to obtain the final detection result.
Optionally, the processor is further configured to execute the one or more programs containing instructions for performing the following operations: judging, based on the image recognition result, whether dangerous goods or smuggled goods are present; and outputting prompt information if dangerous goods or smuggled goods are judged to be present.
Optionally, the processor is further configured to execute the one or more programs containing instructions for performing the following operations: comparing the image recognition result with an article list to obtain a comparison result; and outputting the comparison result.
With the image recognition method and device provided by the embodiments of the present application, features can be extracted from a scan image, and target detection and classification can be performed with the extracted image features based on a deep convolutional multilayer neural network target detection model and a deep convolutional multilayer neural network target classification model, so that an image recognition result is obtained automatically and the efficiency of detection is improved. Furthermore, during feature extraction the image features of each level of the deep convolutional multilayer neural network are obtained separately and fused, and the fused image features are used as the extracted image features, which makes the image features more accurate and effectively improves the accuracy and efficiency of image detection and classification.
Brief description of the drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
Fig. 1 is a flow chart of an image recognition method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of the image feature fusion process provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of the multi-model fusion process provided by an embodiment of the present application;
Fig. 4 is a flow chart of an image recognition method provided by another embodiment of the present application;
Fig. 5 is a schematic diagram of an image recognition device provided by an embodiment of the present application;
Fig. 6 is a block diagram of an image recognition device provided by another embodiment of the present application.
Detailed description of the embodiments
The purpose of the present application is to provide an image recognition method and device that can automatically identify images and thereby improve the accuracy and efficiency of detection.
To make the purposes, features and advantages of the present application clearer and easier to understand, the technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
As shown in Fig. 1, which is a flow chart of an image recognition method according to an embodiment of the present application, the method may specifically include:
S101: obtain a scan image.
The scan image may specifically be an X-ray image collected by an X-ray security inspection apparatus.
S102: perform feature extraction on the scan image to obtain extracted image features.
In a specific implementation, before feature extraction is performed on the scan image, the scan image may also be preprocessed. The preprocessing may include: setting different colors for images of different categories of scanned articles based on a scanned-article classification result. Specifically, the atomic number of a scanned article may be obtained, and the density of the scanned article may be obtained based on the atomic number; the category of the scanned article is then determined according to its density to obtain a scanned-article classification result; and different colors are set for images of different categories of scanned articles based on the classification result. For example, through image preprocessing the scanned articles can be divided into two classes, organic and inorganic, which provides prior information for target detection in the image and improves target detection accuracy. For instance, two materials, mild steel (corresponding to inorganic matter) and plexiglass (corresponding to organic matter), may be used to calibrate the densities of materials with effective atomic numbers of 7 and 25, and a lookup table is built by linear interpolation. When the scan image is acquired, the article is irradiated with high-energy and low-energy X-rays to obtain the atomic numbers of the different articles, and the density of an object is obtained by looking up the table according to its atomic number. The object can then be determined to be inorganic or organic according to its density, and different colors are set for inorganic and organic matter; for example, inorganic matter may be shown in blue and organic matter in orange. In this way the scan image is given color information. Of course, different colors may also be set for different articles according to different density values, which is not limited here.
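The patent itself contains no source code; purely as an illustration, the following Python sketch shows one way the dual-energy preprocessing described above could look. The calibration densities, the organic/inorganic density threshold and all function names are assumptions made for this example, not values disclosed in the patent.

    import numpy as np

    # Assumed calibration points: effective atomic number and density (g/cm^3)
    # for plexiglass (organic, Z_eff about 7) and mild steel (inorganic, Z_eff about 25).
    CAL_Z = np.array([7.0, 25.0])
    CAL_DENSITY = np.array([1.18, 7.85])     # example values, not from the patent

    ORGANIC_COLOR = (255, 165, 0)            # orange
    INORGANIC_COLOR = (0, 0, 255)            # blue
    DENSITY_SPLIT = 3.0                      # assumed organic/inorganic threshold

    def density_from_atomic_number(z_eff: np.ndarray) -> np.ndarray:
        """Look up density by linear interpolation between the calibration points."""
        flat = np.interp(z_eff.ravel(), CAL_Z, CAL_DENSITY)
        return flat.reshape(z_eff.shape)

    def colorize_scan(z_eff_map: np.ndarray) -> np.ndarray:
        """Assign a color per pixel: organic regions orange, inorganic regions blue."""
        density = density_from_atomic_number(z_eff_map)
        colored = np.zeros((*z_eff_map.shape, 3), dtype=np.uint8)
        colored[density < DENSITY_SPLIT] = ORGANIC_COLOR
        colored[density >= DENSITY_SPLIT] = INORGANIC_COLOR
        return colored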
In some embodiments, when feature extraction is performed on the scan image to obtain the extracted image features, a target candidate region may be determined based on the color features of the scanned articles, and feature extraction may then be performed in the target candidate region to obtain the extracted image features. For example, suppose the dangerous goods to be detected are inorganic articles such as knives and guns, and blue has been set in advance as the color for the inorganic category; then during feature extraction the blue regions are taken as target candidate regions, and feature extraction can be performed only on the images of those candidate regions, which improves the efficiency and accuracy of image processing.
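Continuing the illustration above, a minimal sketch of picking a candidate region from the colorized image, assuming the blue-for-inorganic convention from the previous example; the channel thresholds are arbitrary example values.

    import numpy as np

    def blue_candidate_region(colored: np.ndarray):
        """Return the bounding box (x0, y0, x1, y1) of the blue (inorganic) pixels,
        or None if no blue pixel is present."""
        mask = (colored[..., 2] > 200) & (colored[..., 0] < 100)  # crude blue test
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None
        return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())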
In some embodiments, performing feature extraction on the scan image to obtain the extracted image features is specifically: obtaining the image features of each level of the deep convolutional multilayer neural network, fusing the image features of the levels, and taking the fused image features as the extracted image features. It should be noted that, to improve the accuracy of image recognition, the present application obtains the image features by multi-level feature fusion when extracting image features. Specifically, the shallow-layer and deep-layer image features of the neural network are fused and used as the final image features. The fused image features can better improve detection accuracy and have a clear advantage in recognizing and classifying small objects. Fig. 2 is a schematic diagram of the image fusion process provided by the present application, in which conv1 denotes the first layer of the neural network, conv2 the second layer, conv3 the third layer, conv4 the fourth layer and conv5 the fifth layer. In a specific process, assuming the deep convolutional multilayer neural network has 5 layers, the image features of these 5 layers are extracted separately and then fused, and the fused image features are taken as the final extracted image features.
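The patent does not specify the fusion operator. As one hedged illustration, the PyTorch sketch below fuses feature maps from several convolutional levels by resizing them to a common resolution and concatenating them along the channel dimension; the layer and channel sizes are invented for the example.

    import torch
    import torch.nn.functional as F

    def fuse_multilevel_features(feature_maps, out_size=(38, 50)):
        """Fuse feature maps from several network levels (e.g. conv1..conv5).

        feature_maps: list of tensors shaped (N, C_i, H_i, W_i). Each map is
        resized to a common spatial size and all maps are concatenated along
        the channel dimension."""
        resized = [
            F.interpolate(fm, size=out_size, mode="bilinear", align_corners=False)
            for fm in feature_maps
        ]
        return torch.cat(resized, dim=1)   # (N, sum(C_i), H, W)

    # Usage with dummy feature maps standing in for conv1..conv5 outputs:
    maps = [torch.randn(1, c, s, s) for c, s in [(64, 300), (128, 150),
                                                 (256, 75), (512, 38), (512, 19)]]
    fused = fuse_multilevel_features(maps)
    print(fused.shape)   # torch.Size([1, 1472, 38, 50])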
S103: perform target detection with the extracted image features based on the deep convolutional multilayer neural network target detection model to obtain candidate targets.
In a specific implementation, the present application builds the deep convolutional multilayer neural network target detection model and the target classification network model in advance; for example, they may be obtained by training with sample pictures. For instance, training may start by initializing from a well pre-trained deep convolutional neural network model, which is then fine-tuned with X-ray sample images collected in advance, generating the target detection network model and the target classification network model respectively. The pre-trained initial network model may be a ZF network model or a VGG (visual geometry group) network model, both of which are deep-learning neural network models. By training the initial model with the X-ray sample images collected in advance, the deep convolutional multilayer neural network target detection model and the target classification network model can be obtained. In some embodiments, to improve algorithm performance, the deep convolutional multilayer neural network target detection model and the deep convolutional multilayer neural network target classification model share convolutional features. That is, in the present application feature extraction may be performed only once, and the extracted features are used in both the target detection model and the target classification model, which improves the processing efficiency of the algorithm.
In some embodiments, performing target detection with the extracted image features based on the deep convolutional multilayer neural network target detection model to obtain candidate targets includes: performing target detection with the extracted image features based on a plurality of deep convolutional multilayer neural network target detection models to obtain a plurality of detection results; and fusing the plurality of detection results to obtain a final detection result as the candidate targets. For example, when the target detection models are trained, a plurality of different deep convolutional multilayer neural network target detection models, for example three, can be obtained based on different training samples. The target is then detected with the three trained deep convolutional multilayer neural network target detection models, giving three detection results. Fig. 3 is a schematic diagram of the multi-model fusion process provided by an embodiment of the present application. For the same scan image, a first detection result can be obtained with target detection model 1 (Model 1), a second detection result with target detection model 2 (Model 2), and a third detection result with target detection model 3 (Model 3). The three detection results are then fused, and the fused result is taken as the final output. Fusing the plurality of detection results to obtain the final detection result as the candidate targets includes: fusing the plurality of detection results based on confidence calculation results to obtain the final detection result. For example, each detection result has a corresponding confidence calculation result; suppose the confidence of the first detection result is 0.9, the confidence of the second detection result is 0.8 and the confidence of the third detection result is 0.7, then the detection result with the highest confidence is taken as the final detection result. Of course, the fused detection result may also be obtained in other ways, which is not limited here.
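A minimal sketch of the highest-confidence fusion rule mentioned above; the Detection structure and its field names are hypothetical, chosen only for the example.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        box: tuple          # (x0, y0, x1, y1)
        label: str
        confidence: float

    def fuse_by_confidence(results):
        """Pick the detection with the highest confidence as the final result."""
        return max(results, key=lambda det: det.confidence)

    final = fuse_by_confidence([
        Detection((10, 20, 60, 90), "knife", 0.9),   # Model 1
        Detection((12, 22, 58, 88), "knife", 0.8),   # Model 2
        Detection((11, 19, 61, 92), "knife", 0.7),   # Model 3
    ])
    print(final.label, final.confidence)             # knife 0.9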
S104: identify the candidate targets with the extracted image features based on the deep convolutional multilayer neural network target classification model to obtain an image recognition result.
As mentioned above, the deep convolutional multilayer neural network target classification model can be built in advance with sample pictures. The candidate targets are then identified with the extracted image features based on the deep convolutional multilayer neural network target classification model to obtain the image recognition result. In a specific implementation, the extracted image features are input into the deep convolutional multilayer neural network target classification model to obtain the recognition result, which is used to label the category of the article, for example whether it is a knife or a gun.
Referring to Fig. 4, which is a flow chart of an image recognition method provided by another embodiment of the present application, the method may include:
S401: obtain a scan image.
S402: preprocess the scan image.
In a specific implementation, different colors may be set for images of different categories of scanned articles based on a scanned-article classification result. In this way, organic and inorganic matter are distinguished through image preprocessing, which provides prior information for target detection in the image and improves target detection accuracy.
S403: perform feature extraction on the scan image to obtain extracted image features.
S404: perform target detection with the extracted image features based on a plurality of deep convolutional multilayer neural network target detection models to obtain candidate targets.
S405: identify the candidate targets with the extracted image features based on a plurality of deep convolutional multilayer neural network target classification models to obtain an image recognition result.
S406: judge, based on the image recognition result, whether dangerous goods or smuggled goods are present. If they are judged to be present, go to S409 and output prompt information.
S407: if not, compare the image recognition result with the article list to obtain a comparison result.
S408: output the comparison result.
If the comparison result shows that the image recognition result matches the article list, the procedure ends. If the comparison result shows that the image recognition result does not match the article list, warning information is output and a manual recheck procedure is entered.
S409: output prompt information.
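As an illustration of the decision logic in steps S406 to S409, the sketch below alarms on dangerous or smuggled goods and otherwise compares the recognition result with the declared article list; the function name, labels and messages are made up for this example and are not taken from the patent.

    def postprocess(recognition_result, declared_items, dangerous_labels):
        """Illustrative decision logic for steps S406-S409:
        alarm on dangerous/smuggled goods, otherwise compare with the declared list."""
        detected = set(recognition_result)                 # e.g. {"laptop", "knife"}
        if detected & set(dangerous_labels):
            return "ALERT: dangerous or smuggled goods detected"
        if detected <= set(declared_items):
            return "MATCH: automatic clearance"
        return "MISMATCH: warning issued, manual recheck required"

    print(postprocess(["laptop", "umbrella"], ["laptop", "umbrella", "book"],
                      dangerous_labels=["knife", "gun"]))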
The image recognition method provided by the embodiments of the present application can automatically detect and identify scan images and can be used for the rapid detection of various dangerous goods, prohibited articles or smuggled goods, which effectively relieves the workload of security staff and improves the accuracy of prohibited-article detection. At the same time, the identified goods are matched against the declaration data, so automatic clearance can be achieved and clearance speed is effectively improved. In addition, the present application distinguishes organic and inorganic matter through image preprocessing, which provides prior information for subsequent target detection in the image and improves detection accuracy. Furthermore, during feature extraction the fusion of shallow-layer and deep-layer features is used as the final image feature, which can better improve detection accuracy and gives better results in recognizing and classifying small objects appearing in the scan image. Finally, the present application detects and classifies targets with multiple models in parallel and produces the final result by an appropriate rule, which effectively improves the accuracy rate.
The above is a detailed description of the image recognition method provided by the embodiments of the present application. The image recognition device provided by the present application is described in detail below.
Fig. 5 is a schematic diagram of an image recognition device provided by an embodiment of the present application.
The image recognition device 500 includes:
an image obtaining module 501, configured to obtain a scan image;
a feature extraction module 502, configured to perform feature extraction on the scan image to obtain extracted image features, wherein performing feature extraction on the scan image specifically comprises: obtaining the image features of each level of a deep convolutional multilayer neural network, fusing the image features of the levels, and taking the fused image features as the extracted image features;
a target detection module 503, configured to perform target detection with the extracted image features based on a deep convolutional multilayer neural network target detection model to obtain candidate targets;
a target classification module 504, configured to identify the candidate targets with the extracted image features based on a deep convolutional multilayer neural network target classification model to obtain an image recognition result.
In some embodiments, the device further includes: a preprocessing module, configured to preprocess the scan image and to set different colors for images of different categories of scanned articles based on a scanned-article classification result.
In some embodiments, the preprocessing module specifically includes: a density obtaining unit, configured to obtain the atomic number of a scanned article and to obtain the density of the scanned article based on the atomic number; a classification unit, configured to determine the category of the scanned article according to its density and to obtain a scanned-article classification result; and a color setting unit, configured to set different colors for images of different categories of scanned articles based on the scanned-article classification result.
In some embodiments, the feature extraction module is specifically configured to determine a target candidate region based on the color features of the scanned articles, and to perform feature extraction in the target candidate region to obtain the extracted image features.
In some embodiments, the target detection module specifically includes: a multi-model detection unit, configured to perform target detection with the extracted image features based on a plurality of deep convolutional multilayer neural network target detection models to obtain a plurality of detection results; and a result fusion unit, configured to fuse the plurality of detection results to obtain a final detection result as the candidate targets.
In some embodiments, the result fusion unit is specifically configured to fuse the plurality of detection results based on confidence calculation results to obtain the final detection result.
In some embodiments, the device further includes: a judging module, configured to judge, based on the image recognition result, whether dangerous goods or smuggled goods are present; and a first output module, configured to output prompt information if dangerous goods or smuggled goods are judged to be present.
In some embodiments, the device further includes: a comparison module, configured to compare the image recognition result with an article list to obtain a comparison result; and a second output module, configured to output the comparison result.
The functions of the above modules correspond to the processing steps of the image recognition method described in detail with reference to Fig. 1 and Fig. 4, and are not repeated here.
Referring to Fig. 6, which is a block diagram of a device for image recognition provided by another embodiment of the present application. The device includes at least one processor 601 (for example a CPU), a memory 602 and at least one communication bus 603 used to implement connection and communication between these components. The processor 601 is configured to execute executable modules, such as computer programs, stored in the memory 602. The memory 602 may include a high-speed random access memory (RAM) and may also include a non-volatile memory, for example at least one disk memory. One or more programs are stored in the memory and are configured to be executed by the one or more processors 601, and the one or more programs contain instructions for performing the following operations:
obtaining a scan image; performing feature extraction on the scan image to obtain extracted image features; performing target detection with the extracted image features based on a deep convolutional multilayer neural network target detection model to obtain candidate targets; and identifying the candidate targets with the extracted image features based on a deep convolutional multilayer neural network target classification model to obtain an image recognition result; wherein performing feature extraction on the scan image to obtain the extracted image features specifically comprises: obtaining the image features of each level of the deep convolutional multilayer neural network, fusing the image features of the levels, and taking the fused image features as the extracted image features.
In some embodiments, the processor 601 is specifically configured to execute the one or more programs containing instructions for performing the following operations:
preprocessing the scan image, and setting different colors for images of different categories of scanned articles based on a scanned-article classification result.
In some embodiments, the processor 601 is specifically configured to execute the one or more programs containing instructions for performing the following operations:
obtaining the atomic number of a scanned article, and obtaining the density of the scanned article based on the atomic number; determining the category of the scanned article according to the density of the scanned article to obtain a scanned-article classification result; and setting different colors for images of different categories of scanned articles based on the scanned-article classification result.
In some embodiments, the processor 601 is specifically configured to execute the one or more programs containing instructions for performing the following operations:
determining a target candidate region based on the color features of the scanned articles; and performing feature extraction in the target candidate region to obtain the extracted image features.
In some embodiments, the processor 601 is specifically configured to execute the one or more programs containing instructions for performing the following operations:
performing target detection with the extracted image features based on a plurality of deep convolutional multilayer neural network target detection models to obtain a plurality of detection results; and fusing the plurality of detection results to obtain a final detection result as the candidate targets.
In some embodiments, the processor 601 is specifically configured to execute the one or more programs containing instructions for performing the following operations:
fusing the plurality of detection results based on confidence calculation results to obtain the final detection result.
In some embodiments, the processor 601 is specifically configured to execute the one or more programs containing instructions for performing the following operations:
judging, based on the image recognition result, whether dangerous goods or smuggled goods are present; and outputting prompt information if dangerous goods or smuggled goods are judged to be present.
In some embodiments, the processor 601 is specifically configured to execute the one or more programs containing instructions for performing the following operations:
comparing the image recognition result with an article list to obtain a comparison result; and outputting the comparison result.
Those skilled in the art should further appreciate that the modules and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, in computer software, or in a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are implemented in hardware or in software depends on the specific application and the design constraints of the technical solution. Skilled persons may implement the described functions in different ways for each specific application, but such implementations should not be considered to be beyond the scope of the present application.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in a random access memory (RAM), an internal memory, a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
The specific embodiments described above further describe the purposes, technical solutions and beneficial effects of the present application in detail. It should be understood that the above are only specific embodiments of the present application and are not intended to limit the protection scope of the present application; any modification, equivalent substitution or improvement made within the spirit and principles of the present application shall be included within the protection scope of the present application.

Claims (10)

1. An image recognition method, characterized in that the method comprises:
obtaining a scan image;
performing feature extraction on the scan image to obtain extracted image features;
performing target detection with the extracted image features based on a deep convolutional multilayer neural network target detection model to obtain candidate targets;
identifying the candidate targets with the extracted image features based on a deep convolutional multilayer neural network target classification model to obtain an image recognition result;
wherein performing feature extraction on the scan image to obtain the extracted image features specifically comprises: obtaining the image features of each level of the deep convolutional multilayer neural network, fusing the image features of the levels, and taking the fused image features as the extracted image features.
2. The method according to claim 1, characterized in that, before feature extraction is performed on the scan image, the method further comprises:
preprocessing the scan image, and setting different colors for images of different categories of scanned articles based on a scanned-article classification result.
3. The method according to claim 2, characterized in that preprocessing the scan image and setting different colors for images of different categories of scanned articles based on the scanned-article classification result comprises:
obtaining the atomic number of a scanned article, and obtaining the density of the scanned article based on the atomic number;
determining the category of the scanned article according to the density of the scanned article to obtain a scanned-article classification result;
setting different colors for images of different categories of scanned articles based on the scanned-article classification result.
4. The method according to claim 2 or 3, characterized in that performing feature extraction on the scan image to obtain the extracted image features comprises:
determining a target candidate region based on the color features of the scanned articles;
performing feature extraction in the target candidate region to obtain the extracted image features.
5. The method according to claim 1, characterized in that performing target detection with the extracted image features based on the deep convolutional multilayer neural network target detection model to obtain candidate targets comprises:
performing target detection with the extracted image features based on a plurality of deep convolutional multilayer neural network target detection models to obtain a plurality of detection results;
fusing the plurality of detection results to obtain a final detection result as the candidate targets.
6. The method according to claim 5, characterized in that fusing the plurality of detection results to obtain the final detection result as the candidate targets comprises:
fusing the plurality of detection results based on confidence calculation results to obtain the final detection result.
7. The method according to claim 1, characterized in that the method further comprises:
judging, based on the image recognition result, whether dangerous goods or smuggled goods are present;
outputting prompt information if dangerous goods or smuggled goods are judged to be present.
8. The method according to claim 1 or 6, characterized in that the method further comprises:
comparing the image recognition result with an article list to obtain a comparison result;
outputting the comparison result.
9. An image recognition device, characterized in that the device comprises:
an image obtaining module, configured to obtain a scan image;
a feature extraction module, configured to perform feature extraction on the scan image to obtain extracted image features, wherein performing feature extraction on the scan image to obtain the extracted image features specifically comprises: obtaining the image features of each level of a deep convolutional multilayer neural network, fusing the image features of the levels, and taking the fused image features as the extracted image features;
a target detection module, configured to perform target detection with the extracted image features based on a deep convolutional multilayer neural network target detection model to obtain candidate targets;
a target classification module, configured to identify the candidate targets with the extracted image features based on a deep convolutional multilayer neural network target classification model to obtain an image recognition result.
10. A device for image recognition, comprising a memory and one or more programs, wherein the one or more programs are stored in the memory and are configured to be executed by one or more processors, and the one or more programs contain instructions for performing the following operations:
obtaining a scan image;
performing feature extraction on the scan image to obtain extracted image features, wherein performing feature extraction on the scan image specifically comprises: obtaining the image features of each level of a deep convolutional multilayer neural network, fusing the image features of the levels, and taking the fused image features as the extracted image features;
performing target detection with the extracted image features based on a deep convolutional multilayer neural network target detection model to obtain candidate targets;
identifying the candidate targets with the extracted image features based on a deep convolutional multilayer neural network target classification model to obtain an image recognition result.
CN201610854506.8A 2016-09-27 2016-09-27 Image identification method and device Active CN106485268B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610854506.8A CN106485268B (en) 2016-09-27 2016-09-27 Image identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610854506.8A CN106485268B (en) 2016-09-27 2016-09-27 Image identification method and device

Publications (2)

Publication Number Publication Date
CN106485268A 2017-03-08
CN106485268B (en) 2020-01-21

Family

ID=58268114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610854506.8A Active CN106485268B (en) 2016-09-27 2016-09-27 Image identification method and device

Country Status (1)

Country Link
CN (1) CN106485268B (en)

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106960186A (en) * 2017-03-17 2017-07-18 王宇宁 Ammunition recognition methods based on depth convolutional neural networks
CN107273936A (en) * 2017-07-07 2017-10-20 广东工业大学 A kind of GAN image processing methods and system
CN107463906A (en) * 2017-08-08 2017-12-12 深图(厦门)科技有限公司 The method and device of Face datection
CN107463965A (en) * 2017-08-16 2017-12-12 湖州易有科技有限公司 Fabric attribute picture collection and recognition methods and identifying system based on deep learning
CN107563290A (en) * 2017-08-01 2018-01-09 中国农业大学 A kind of pedestrian detection method and device based on image
CN107909093A (en) * 2017-10-27 2018-04-13 浙江大华技术股份有限公司 A kind of method and apparatus of Articles detecting
CN108229523A (en) * 2017-04-13 2018-06-29 深圳市商汤科技有限公司 Image detection, neural network training method, device and electronic equipment
CN108510116A (en) * 2018-03-29 2018-09-07 哈尔滨工业大学 A kind of luggage space planning system based on mobile terminal
CN108647559A (en) * 2018-03-21 2018-10-12 四川弘和通讯有限公司 A kind of danger recognition methods based on deep learning
CN109001833A (en) * 2018-06-22 2018-12-14 天和防务技术(北京)有限公司 A kind of Terahertz hazardous material detection method based on deep learning
CN109034245A (en) * 2018-07-27 2018-12-18 燕山大学 A kind of object detection method merged using characteristic pattern
CN109557114A (en) * 2017-09-25 2019-04-02 清华大学 Inspection method and inspection equipment and computer-readable medium
CN109583266A (en) * 2017-09-28 2019-04-05 杭州海康威视数字技术股份有限公司 A kind of object detection method, device, computer equipment and storage medium
WO2019096181A1 (en) * 2017-11-14 2019-05-23 深圳码隆科技有限公司 Detection method, apparatus and system for security inspection, and electronic device
CN109816037A (en) * 2019-01-31 2019-05-28 北京字节跳动网络技术有限公司 The method and apparatus for extracting the characteristic pattern of image
WO2019154383A1 (en) * 2018-02-06 2019-08-15 同方威视技术股份有限公司 Tool detection method and device
CN110222641A (en) * 2019-06-06 2019-09-10 北京百度网讯科技有限公司 The method and apparatus of image for identification
CN110245564A (en) * 2019-05-14 2019-09-17 平安科技(深圳)有限公司 A kind of pedestrian detection method, system and terminal device
CN110459225A (en) * 2019-08-14 2019-11-15 南京邮电大学 A kind of speaker identification system based on CNN fusion feature
CN110781911A (en) * 2019-08-15 2020-02-11 腾讯科技(深圳)有限公司 Image matching method, device, equipment and storage medium
CN110796127A (en) * 2020-01-06 2020-02-14 四川通信科研规划设计有限责任公司 Embryo prokaryotic detection system based on occlusion sensing, storage medium and terminal
CN110909604A (en) * 2019-10-23 2020-03-24 深圳市华讯方舟太赫兹科技有限公司 Security image detection method, terminal device and computer storage medium
CN110942453A (en) * 2019-11-21 2020-03-31 山东众阳健康科技集团有限公司 CT image lung lobe identification method based on neural network
CN111103629A (en) * 2018-10-25 2020-05-05 杭州海康威视数字技术股份有限公司 Target detection method and device, NVR (network video recorder) equipment and security check system
CN111241893A (en) * 2018-11-29 2020-06-05 阿里巴巴集团控股有限公司 Identification recognition method, device and system
CN111340775A (en) * 2020-02-25 2020-06-26 湖南大学 Parallel method and device for acquiring ultrasonic standard tangent plane and computer equipment
WO2020134848A1 (en) * 2018-12-28 2020-07-02 深圳市华讯方舟太赫兹科技有限公司 Intelligent detection method and device applied to millimeter wave security check instrument, and storage device
WO2020173021A1 (en) * 2019-02-25 2020-09-03 平安科技(深圳)有限公司 Artificial intelligence-based forbidden object identification method, apparatus and device, and storage medium
CN111856445A (en) * 2019-04-11 2020-10-30 杭州海康威视数字技术股份有限公司 Target detection method, device, equipment and system
CN112185077A (en) * 2019-07-01 2021-01-05 云丁网络技术(北京)有限公司 Intelligent reminding method, device and system and camera equipment
CN112215095A (en) * 2020-09-24 2021-01-12 西北工业大学 Contraband detection method, device, processor and security inspection system
CN112730468A (en) * 2019-10-28 2021-04-30 同方威视技术股份有限公司 Article detection device and method for detecting article
CN114549900A (en) * 2022-02-23 2022-05-27 智慧航安(北京)科技有限公司 Article classification method, device and system
CN114693612A (en) * 2022-03-16 2022-07-01 深圳大学 Knee joint bone tumor detection method based on deep learning and related device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102013098A (en) * 2010-10-11 2011-04-13 公安部第一研究所 Method for removing organic and inorganic matters from security inspection images
CN105160361A (en) * 2015-09-30 2015-12-16 东软集团股份有限公司 Image identification method and apparatus
CN105320945A (en) * 2015-10-30 2016-02-10 小米科技有限责任公司 Image classification method and apparatus
CN105740758A (en) * 2015-12-31 2016-07-06 上海极链网络科技有限公司 Internet video face recognition method based on deep learning
CN105631482A (en) * 2016-03-03 2016-06-01 中国民航大学 Convolutional neural network model-based dangerous object image classification method
CN105809164A (en) * 2016-03-11 2016-07-27 北京旷视科技有限公司 Character identification method and device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
SHENXIAOLU1984: "Object Detection: Faster RCNN Algorithm Explained in Detail", CSDN Blog *
Lu Hongtao (卢宏涛) et al.: "A Survey of Deep Convolutional Neural Network Applications in Computer Vision", Journal of Data Acquisition and Processing (《数据采集与处理》) *
Zhang Yanzhu (张艳珠) et al.: "X-ray Security Inspection Image Segmentation Algorithm Based on Object Material", Equipment Manufacturing Technology (《装备制造技术》) *
Lei Qing (雷青) et al.: "Research on Deep-Learning-Based Gun Detection in Android APP Videos", Netinfo Security (《信息网络安全》) *

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106960186A (en) * 2017-03-17 2017-07-18 王宇宁 Ammunition recognition method based on deep convolutional neural networks
CN108229523A (en) * 2017-04-13 2018-06-29 深圳市商汤科技有限公司 Image detection, neural network training method, device and electronic equipment
CN107273936A (en) * 2017-07-07 2017-10-20 广东工业大学 GAN image processing method and system
CN107273936B (en) * 2017-07-07 2020-09-11 广东工业大学 GAN image processing method and system
CN107563290A (en) * 2017-08-01 2018-01-09 中国农业大学 Image-based pedestrian detection method and device
CN107463906A (en) * 2017-08-08 2017-12-12 深图(厦门)科技有限公司 Face detection method and device
CN107463965A (en) * 2017-08-16 2017-12-12 湖州易有科技有限公司 Fabric attribute picture acquisition and recognition method and recognition system based on deep learning
CN107463965B (en) * 2017-08-16 2024-03-26 湖州易有科技有限公司 Deep learning-based fabric attribute picture acquisition and recognition method and recognition system
CN109557114A (en) * 2017-09-25 2019-04-02 清华大学 Inspection method and inspection equipment and computer-readable medium
CN109557114B (en) * 2017-09-25 2021-07-16 清华大学 Inspection method and inspection apparatus, and computer-readable medium
CN109583266A (en) * 2017-09-28 2019-04-05 杭州海康威视数字技术股份有限公司 Object detection method, device, computer equipment and storage medium
CN107909093B (en) * 2017-10-27 2021-02-02 浙江大华技术股份有限公司 Method and equipment for detecting articles
CN107909093A (en) * 2017-10-27 2018-04-13 浙江大华技术股份有限公司 Method and apparatus for detecting articles
WO2019096181A1 (en) * 2017-11-14 2019-05-23 深圳码隆科技有限公司 Detection method, apparatus and system for security inspection, and electronic device
WO2019154383A1 (en) * 2018-02-06 2019-08-15 同方威视技术股份有限公司 Tool detection method and device
CN108647559A (en) * 2018-03-21 2018-10-12 四川弘和通讯有限公司 Danger recognition method based on deep learning
CN108510116A (en) * 2018-03-29 2018-09-07 哈尔滨工业大学 Luggage space planning system based on mobile terminal
CN108510116B (en) * 2018-03-29 2020-06-30 哈尔滨工业大学 Case and bag space planning system based on mobile terminal
CN109001833A (en) * 2018-06-22 2018-12-14 天和防务技术(北京)有限公司 Terahertz hazardous material detection method based on deep learning
CN109034245A (en) * 2018-07-27 2018-12-18 燕山大学 Object detection method using feature map fusion
CN111103629A (en) * 2018-10-25 2020-05-05 杭州海康威视数字技术股份有限公司 Target detection method and device, NVR (network video recorder) equipment and security check system
CN111241893A (en) * 2018-11-29 2020-06-05 阿里巴巴集团控股有限公司 Identification recognition method, device and system
CN111241893B (en) * 2018-11-29 2023-06-16 阿里巴巴集团控股有限公司 Identification recognition method, device and system
WO2020134848A1 (en) * 2018-12-28 2020-07-02 深圳市华讯方舟太赫兹科技有限公司 Intelligent detection method and device applied to millimeter wave security check instrument, and storage device
CN109816037A (en) * 2019-01-31 2019-05-28 北京字节跳动网络技术有限公司 Method and apparatus for extracting a feature map of an image
WO2020173021A1 (en) * 2019-02-25 2020-09-03 平安科技(深圳)有限公司 Artificial intelligence-based forbidden object identification method, apparatus and device, and storage medium
CN111856445A (en) * 2019-04-11 2020-10-30 杭州海康威视数字技术股份有限公司 Target detection method, device, equipment and system
CN110245564A (en) * 2019-05-14 2019-09-17 平安科技(深圳)有限公司 Pedestrian detection method, system and terminal device
CN110245564B (en) * 2019-05-14 2024-07-09 平安科技(深圳)有限公司 Pedestrian detection method, system and terminal equipment
CN110222641B (en) * 2019-06-06 2022-04-19 北京百度网讯科技有限公司 Method and apparatus for recognizing image
CN110222641A (en) * 2019-06-06 2019-09-10 北京百度网讯科技有限公司 Method and apparatus for recognizing image
CN112185077A (en) * 2019-07-01 2021-01-05 云丁网络技术(北京)有限公司 Intelligent reminding method, device and system and camera equipment
CN110459225A (en) * 2019-08-14 2019-11-15 南京邮电大学 Speaker identification system based on CNN fusion features
CN110781911B (en) * 2019-08-15 2022-08-19 腾讯科技(深圳)有限公司 Image matching method, device, equipment and storage medium
CN110781911A (en) * 2019-08-15 2020-02-11 腾讯科技(深圳)有限公司 Image matching method, device, equipment and storage medium
CN110909604B (en) * 2019-10-23 2024-04-19 深圳市重投华讯太赫兹科技有限公司 Security check image detection method, terminal equipment and computer storage medium
CN110909604A (en) * 2019-10-23 2020-03-24 深圳市华讯方舟太赫兹科技有限公司 Security image detection method, terminal device and computer storage medium
CN112730468A (en) * 2019-10-28 2021-04-30 同方威视技术股份有限公司 Article detection device and method for detecting article
CN110942453A (en) * 2019-11-21 2020-03-31 山东众阳健康科技集团有限公司 CT image lung lobe identification method based on neural network
CN110796127A (en) * 2020-01-06 2020-02-14 四川通信科研规划设计有限责任公司 Embryo pronucleus detection system based on occlusion sensing, storage medium and terminal
CN111340775B (en) * 2020-02-25 2023-09-29 湖南大学 Parallel method, device and computer equipment for acquiring ultrasonic standard section
CN111340775A (en) * 2020-02-25 2020-06-26 湖南大学 Parallel method and device for acquiring ultrasonic standard section and computer equipment
CN112215095A (en) * 2020-09-24 2021-01-12 西北工业大学 Contraband detection method, device, processor and security inspection system
CN114549900A (en) * 2022-02-23 2022-05-27 智慧航安(北京)科技有限公司 Article classification method, device and system
CN114693612A (en) * 2022-03-16 2022-07-01 深圳大学 Knee joint bone tumor detection method based on deep learning and related device

Also Published As

Publication number Publication date
CN106485268B (en) 2020-01-21

Similar Documents

Publication Publication Date Title
CN106485268A (en) A kind of image-recognizing method and device
CN108154168B (en) Comprehensive cargo inspection system and method
US20230162342A1 (en) Image sample generating method and system, and target detection method
US12067760B2 (en) Systems and methods for image processing
CN104636707B (en) Method for automatically detecting cigarettes
CN109902643A (en) Intelligent security inspection method, device, system and electronic equipment based on deep learning
CN104751163B (en) Fluoroscopic inspection system and method for automatic classification and identification of cargo
CN104165896B (en) Liquid goods safety inspection method and device
US10042079B2 (en) Image-based object detection and feature extraction from a reconstructed charged particle image of a volume of interest
US10436932B2 (en) Inspection systems for quarantine and methods thereof
CN108664971A (en) Pulmonary nodule detection method based on 2D convolutional neural networks
CN107209944A (en) Correction of beam hardening artifacts in microtomography of samples imaged in a container
Rogers et al. Threat Image Projection (TIP) into X-ray images of cargo containers for training humans and machines
CN110488368A (en) Contraband recognition method and device based on dual-energy X-ray screening machine
CN110186940A (en) Security check recognition method, device, computer equipment and storage medium
CN106651841B (en) Analysis method for security inspection image complexity
DE102014205447A1 (en) Detection of objects in an object
US20080253653A1 (en) Systems and methods for improving visibility of scanned images
CN102608135B (en) Method and equipment for determining CT (Computed Tomography) scanning position in dangerous goods inspection system
Visser et al. Automated comparison of X-ray images for cargo scanning
CN116129153A (en) Intelligent analysis processing method, system, device, processor and computer-readable storage medium for contraband identification
CN111539251B (en) Security check article identification method and system based on deep learning
Thawornwong et al. Lumber value differences from reduced CT spatial resolution and simulated log sawing
CN114676759A (en) Method and device for detecting contraband in security inspection image
Gu Research and implementation of automatic cutlery recognition method based on X-ray security inspection image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant