WO2017084408A1 - Method and system for inspecting goods - Google Patents

Method and system for inspecting goods

Info

Publication number
WO2017084408A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
hscode
goods
template
region
Prior art date
Application number
PCT/CN2016/097575
Other languages
English (en)
French (fr)
Inventor
陈志强
张丽
赵自然
刘耀红
程村
李强
顾建平
张健
付罡
Original Assignee
同方威视技术股份有限公司
Priority date
Filing date
Publication date
Application filed by 同方威视技术股份有限公司
Publication of WO2017084408A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G06T7/0008 - Industrial image inspection checking presence/absence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/018 - Certifying business or products
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10116 - X-ray image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/05 - Recognition of patterns representing particular kinds of hidden objects, e.g. weapons, explosives, drugs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/22 - Character recognition characterised by the type of writing
    • G06V30/224 - Character recognition characterised by the type of writing of printed characters having additional code marks or containing code marks

Definitions

  • The present disclosure relates to the field of radiation-imaging security inspection and, in particular, to automated inspection of containers to determine whether the customs declaration is false or incomplete.
  • Intelligent inspection is a focal point in the development of security screening. As Internet technology becomes ubiquitous and cloud computing gradually enters industry after industry, intelligent security inspection has become a priority for customs administrations in many countries. It can provide clients with faster and more convenient service and improve screening efficiency, while also raising detection rates and giving customs inspectors more actionable information; it is therefore one of the important ways for industry players to enhance product value.
  • Customs declaration/manifest data is hereinafter referred to simply as the customs declaration.
  • Comparing the scanned image against the customs declaration by means of image processing and semantic understanding, so as to detect false or concealed declarations, is one such intelligent-inspection approach.
  • However, the technology is still at an early stage of development: the methods are not mature, and existing algorithms and software systems still struggle to fully meet users' needs.
  • In one known approach, image matching is used to perform the declaration comparison.
  • This technique is too idealistic and performs poorly in practice: it is difficult to apply to the severe non-rigid deformation and perspective superposition found in transmission (fluoroscopic) images, and it does not scale to real-time processing over large numbers of categories.
  • Alternatively, an image-classification algorithm can be used to perform the comparison, but its usefulness is limited when the number of categories is large.
  • The effectiveness of existing declaration-comparison algorithms is limited by many factors, such as the large number of categories, regional differences between categories, self-learning of new categories, large intra-class variation, performance differences between devices, and containers holding multiple kinds of goods.
  • To address these problems, the present disclosure proposes a method and system for inspecting goods.
  • According to one aspect, a method of inspecting goods comprises the steps of: obtaining a transmission image and the HSCODE of the inspected goods; processing the transmission image to obtain a region of interest; using the HSCODE of the inspected goods to retrieve a model created on the basis of HSCODE from a model library; and determining from the model whether the region of interest contains goods not indicated in the customs declaration.
  • In some embodiments, processing the transmission image to obtain a region of interest comprises: performing supervised image segmentation on the transmission image, with the cargo type represented by the HSCODE of the inspected goods as the supervision value, to obtain at least one segmented region as the region of interest.
  • In some embodiments, determining from the model whether the region of interest includes goods not indicated in the customs declaration comprises: extracting features of each segmented region to obtain a texture description forming a feature vector; determining whether the similarity between each template included in the model and the feature vector of each region is greater than a threshold; and, in the case that the similarity between the feature vector of at least one segmented region and every template of the model is not greater than the threshold, determining that the inspected goods contain goods not specified in the customs declaration.
  • In some embodiments, retrieving the model created on the basis of HSCODE from the model library using the HSCODE of the inspected goods comprises: retrieving, from a local model library and/or a cloud model library, all models corresponding to a predetermined number of leading digits of the HSCODE.
  • In some embodiments, the retrieved models are ranked, and whether the region of interest contains goods not indicated in the customs declaration is determined in the ranked order; if the similarity between the feature vector of at least one segmented region and every template of at least one model is not greater than the threshold, the inspected goods are determined to include goods not specified in the customs declaration.
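The decision rule in the preceding paragraphs can be sketched as follows. This is a minimal illustration, not the patent's exact implementation: feature extraction is elided, the cosine measure is the one named later in the disclosure, and the function names, model layout, and threshold value are assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contains_undeclared_goods(region_features, models, threshold=0.8):
    """A region is 'explained' if it matches at least one template of at
    least one retrieved model; if any region matches no template in any
    model, the container is flagged as carrying undeclared goods."""
    for feat in region_features:
        explained = any(
            cosine_similarity(feat, tpl) > threshold
            for model in models
            for tpl in model["templates"]
        )
        if not explained:
            return True
    return False

# Toy example: the first region matches the single model, the second does not.
model = {"templates": [np.array([1.0, 0.0])]}
regions = [np.array([0.9, 0.1]), np.array([0.0, 1.0])]
print(contains_undeclared_goods(regions, [model]))  # → True
```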
  • In some embodiments, the method further comprises updating all models in the local model library and/or the cloud model library that correspond to the predetermined leading digits of the HSCODE.
  • In some embodiments, local regions are sampled at edges in the image, multi-scale frequency-domain features are then extracted at the sampling points, and feature vectors are obtained from these multi-scale frequency-domain features.
  • the HSCODE of the goods is determined according to the name of the goods recorded in the customs declaration.
  • In some embodiments, each template in a model is a feature vector, and the number of templates per model is capped at a preset "template count".
  • In some embodiments, while a model has fewer templates than the cap, the feature vector of a new sample is recorded directly as a template. Once the cap is reached, the feature vector of a sample that matches the model is not added as a template; instead, only the weight of the most similar template is increased. When the feature vector of a new sample does not match any template in the model, the template with the smallest weight is replaced by the new sample's feature vector.
  • In some embodiments, the model includes at least the following information: a device identifier, an HSCODE identifier, the maximum number of templates, each template, each template's weight, a unique identifier of each template in the historical image library, and a similarity threshold.
  • According to another aspect, a system for inspecting goods comprises: a scanning device that obtains a transmission image and the HSCODE of the inspected goods; and a data processing device that processes the transmission image to obtain a region of interest, uses the HSCODE of the inspected goods to retrieve a model created on the basis of HSCODE from a model library, and determines from the model whether the region of interest contains goods not indicated in the customs declaration.
  • FIGS. 1A and 1B are schematic structural views of a cargo inspection system according to an embodiment of the present disclosure;
  • FIG. 2 is a schematic flow chart of a cargo inspection method according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic flow chart of a method of creating and training an HSCODE model according to an embodiment of the present disclosure;
  • FIG. 4 is a schematic flow chart of an inspection method using a created model according to an embodiment of the present disclosure;
  • FIG. 5 is a schematic flow chart of online creation and updating of a model according to an embodiment of the present disclosure.
  • FIGS. 1A and 1B are schematic structural views of an inspection system according to an embodiment of the present disclosure.
  • FIG. 1A shows a top plan view of the inspection system
  • FIG. 1B shows a front view of the inspection system.
  • The radiation source 110 generates X-rays, which are collimated by the collimator 120, to perform a security check on the moving container truck 140.
  • The detector 150 receives the radiation that penetrates the truck.
  • A data processing device 160, such as a computer, obtains the transmission image.
  • the transmission image of the container truck 140 is obtained by scanning
  • The transmission image is processed by the data processing device 160 to obtain a region of interest; the HSCODE of the inspected cargo is used to retrieve from a model library a model created on the basis of HSCODE; and the model is used to determine whether the region of interest contains goods not indicated in the customs declaration. In this way, the container cargo can be checked automatically for undeclared goods.
  • The present disclosure proposes using the HSCODE (the Harmonized System code formulated by the World Customs Organization) as the unique identifier of goods for comparison: a model is established for each HSCODE, and the model comprises a feature space describing the image features of the cargo corresponding to that HSCODE.
  • Category assignment has a multi-level hierarchical structure; each level is modeled separately, and matching proceeds level by level.
  • Worldwide, the HSCODE is a 6-digit code, and the digits that follow are defined by each country.
  • For example, among the common HSCODEs of China's customs import and export goods there are 6,341 second-level (8-digit) codes and 6,735 third-level (10-digit) codes, 13,076 codes in total.
  • Accordingly, the model is built in three layers: 6-digit, 8-digit, and 10-digit. Assuming the goods carry the 10-digit code "0123456789", the matching strategy can be to match the 6-digit model "012345", the 8-digit model "01234567", and the 10-digit model "0123456789" respectively, so as to cope with the large number of categories, regional differences between categories, and large intra-class variation.
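The three-layer matching strategy amounts to prefix lookups at 10, 8, and 6 digits, most specific first. The dictionary layout and function name below are illustrative assumptions:

```python
def retrieve_models(hscode, model_library):
    """Retrieve the 10-, 8- and 6-digit models for an HSCODE, longest
    (most specific) prefix first, mirroring the 10 > 8 > 6 priority."""
    matches = []
    for prefix_len in (10, 8, 6):
        key = hscode[:prefix_len]
        if key in model_library:
            matches.append(model_library[key])
    return matches

library = {"012345": "6-digit model",
           "01234567": "8-digit model",
           "0123456789": "10-digit model"}
print(retrieve_models("0123456789", library))
# → ['10-digit model', '8-digit model', '6-digit model']
```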
  • FIG. 2 is a schematic flow chart describing a cargo inspection method according to an embodiment of the present disclosure.
  • In step S21, a transmission image and the HSCODE of the inspected goods are obtained.
  • For example, the transmission image of the inspected container is obtained using a scanning device such as that shown in FIGS. 1A and 1B, and the HSCODE is obtained from the customs declaration.
  • the name of the goods is used to determine the corresponding HSCODE.
  • In step S22, the transmission image is processed to obtain a region of interest, for example to extract the cargo area and to minimize the impact of inconsistencies in the physical characteristics of the equipment on the image.
  • Normalization is achieved by image-processing operations such as removing the attenuation of the background and air and removing row/column stripes.
  • The cargo area is then obtained through operations such as binarization, edge extraction, and container edge detection.
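A rough numpy-only sketch of this preprocessing, assuming a simple air-level normalization and intensity thresholding in place of the full stripe-removal and container-edge-detection pipeline (all names and values are illustrative):

```python
import numpy as np

def normalize(image, air_value):
    """Divide out the air (unattenuated) level so images from different
    devices share a comparable grey scale, then clip to [0, 1]."""
    return np.clip(image.astype(float) / air_value, 0.0, 1.0)

def cargo_bounding_box(image, air_value, threshold=0.95):
    """Binarize (cargo attenuates the beam, so cargo pixels fall below
    the air level) and return the bounding box of the cargo region."""
    norm = normalize(image, air_value)
    mask = norm < threshold
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return int(r0), int(r1), int(c0), int(c1)

# Toy image: air everywhere except a cargo block in the middle.
img = np.full((10, 10), 100.0)
img[3:7, 2:8] = 40.0
print(cargo_bounding_box(img, air_value=100.0))  # → (3, 6, 2, 7)
```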
  • The model created on the basis of HSCODE is then retrieved from the model library using the HSCODE of the inspected goods.
  • To overcome differences between devices, the present disclosure proposes establishing both a device-side (local) model and a cloud-side model.
  • The cloud model comes from a computing center, which updates and maintains the most complete set of HSCODE categories online; however, because it normalizes away device differences, its accuracy is lower than that of a local model.
  • The local model is generated on the device side once a sufficient number of historical images has accumulated there; it fits the device better, but covers fewer HSCODE categories than the cloud.
  • When both models exist, the local model is automatically selected in preference to the cloud model for comparison.
  • The cloud model need not be used online; it can be used offline or in a timed-synchronization mode.
  • After inspection confirms the declaration, the extracted features are used to update the local model and the cloud model, thereby implementing a self-learning function.
  • This update may generate a new model for the corresponding HSCODE, or it may modify the current model.
  • A container holding multiple kinds of goods is a problem that is difficult to solve completely under current technical conditions; only a reasonably workable result can be obtained. Strictly speaking, it is an ambiguous, complex segmentation problem, affected by equipment inconsistency and supervised by the customs declaration data: the declaration supplies multiple supervision values (how many kinds of goods there are, the type and unit weight of each, and so on), which vary across devices, and each pixel in the image may belong to more than one kind of goods. The complexity is compounded by the fact that these factors may themselves be inconsistent or inaccurate. A supervised texture-based image segmentation algorithm can be used to address this problem.
  • the technical solution of the present disclosure proposes to implement customs declaration comparison based on HSCODE.
  • the HSCODE model has a hierarchical structure and can adopt a local/cloud dual model strategy.
  • feature extraction can be achieved using supervised texture image segmentation and region texture description, with the distance between features as a measure of similarity.
  • the HSCODE model can also be updated with the principle of maximum differentiation to realize the system self-learning function.
  • The declaration comparison is customized on the basis of HSCODE: a separate model is created for each HSCODE. From the HSCODE perspective, the models form a 6-digit/8-digit/10-digit hierarchy; from the device perspective, they are divided into local models and cloud models.
  • HSCODE is not strictly necessary for declaration comparison.
  • If the customs declaration contains only the name of the goods and no code, a general approach is name resolution and text retrieval to obtain the corresponding historical images, with the comparison performed against those images.
  • In the present embodiments, the HSCODE is obtained by mapping the cargo name to an HSCODE, and the corresponding model is then found.
  • To reduce the impact of device inconsistency, a local model associated with the device is trained; where no local model exists, a device-independent cloud model is used.
  • the cloud model is continuously updated to maintain the maximum amount of models.
  • The local model is independent of the cloud model; the two can be identical or different.
  • Supervised image segmentation is performed using the cargo type given by the HSCODE as the supervision value, and a regional texture description, i.e., a feature vector, is obtained for each segmented region.
  • the feature vector of multiple historical images is saved in the HSCODE model.
  • the distance between the feature vectors is the similarity.
  • The maximum similarity between the features of an unknown sample and the several vectors (i.e., templates) in a model is taken as the similarity between the sample and that HSCODE. Note that there are many possible choices of image segmentation and feature extraction method; for example, image segmentation can be used to delimit the regions, and image feature patches and their statistics can form the features.
  • the above models can be self-learning, including online creation and update.
  • the present disclosure updates the HSCODE model using the principle of maximum differentiation.
  • The features stored in each model are called "templates", and the number of templates per model is capped at a uniform "template count".
  • While the number of templates in a model is below the cap, the feature of a new sample is recorded directly as a template. Once the cap is reached, a sample that matches the model is not added as a template; only the weight of the most similar template is increased. When a new sample does not match any template in the model, the template with the smallest weight is replaced by the new sample's feature.
  • In this way, the templates in each HSCODE model form a maximally differentiated template set that spans the model's feature space. Note that the principle of maximum differentiation can be realized with a variety of online learning methods.
  • the technical solution involves three links of training, use and online update.
  • The training link is divided into three steps: image normalization and effective cargo area acquisition; effective area feature extraction; and establishment of the HSCODE model.
  • The use link comprises: image normalization and effective cargo area acquisition; supervised image segmentation; model loading; regional feature extraction; and feature-model matching.
  • In the online update link, if the sample is confirmed to match the customs declaration, a new model is created or an existing model is updated.
  • In step S31, a sample image is obtained, and image normalization and effective cargo area acquisition are then performed in step S32.
  • Normalization uses image-processing operations such as removing the attenuation of the background and air and removing row/column stripes; the cargo area is then obtained by binarization, edge extraction, container edge detection, and the like.
  • In step S33, effective area feature extraction is performed.
  • Texture statistical features may be selected, in particular texture statistical features based on edge sampling, to describe a region.
  • For example: i) to highlight edge information, local regions are sampled at edges in the image; ii) to highlight texture properties, the present disclosure uses textons to extract multi-scale frequency-domain features at the sampling points; iii) to describe the statistics of these texture features effectively, a Fisher Vector is used to obtain the final feature vector.
  • Alternatives include corner-detection methods such as HARRIS instead of edge sampling; SIFT, HOG, and the like instead of textons; other bag-of-words forms such as ScSPM (Sparse Coding Spatial Pyramid Matching); or deep-learning methods such as R-CNN (Regions with CNN features) to obtain the feature vectors.
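A much-simplified stand-in for the edge-sampling pipeline is sketched below: it samples patches at the strongest-gradient points and aggregates FFT-magnitude statistics, whereas the disclosure uses textons and a Fisher Vector. The patch size, point count, and mean/std aggregation are assumptions made only for illustration:

```python
import numpy as np

def edge_sample_features(image, patch=8, n_points=50):
    """Sample patches at the strongest-gradient (edge-like) pixels and
    describe each by the magnitude of its 2-D FFT (a crude multi-scale
    frequency-domain feature); aggregate with mean and std in place of
    the Fisher Vector used in the disclosure."""
    gy, gx = np.gradient(image.astype(float))
    strength = np.hypot(gx, gy)
    h = patch // 2
    # Exclude a border so every sampled patch lies fully inside the image.
    strength[:h, :] = -1.0
    strength[-h:, :] = -1.0
    strength[:, :h] = -1.0
    strength[:, -h:] = -1.0
    idx = np.argsort(strength.ravel())[::-1][:n_points]
    ys, xs = np.unravel_index(idx, image.shape)
    spectra = []
    for y, x in zip(ys, xs):
        p = image[y - h:y + h, x - h:x + h]
        spectra.append(np.abs(np.fft.fft2(p)).ravel())
    spectra = np.array(spectra)
    return np.concatenate([spectra.mean(axis=0), spectra.std(axis=0)])

img = np.zeros((32, 32))
img[:, 16:] = 1.0                 # a single vertical edge
vec = edge_sample_features(img)
print(vec.shape)                  # → (128,)
```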
  • Feature extraction in the training link differs from that in the other links.
  • First, all texton features of the images in the image library are extracted, and the probability dictionary model required by the Fisher Vector is then trained on all of these textons.
  • After the probability dictionary model is obtained, the textons of each image are converted into a Fisher Vector.
  • In the other links, the probability dictionary model is already known, and the Fisher Vector feature can be derived directly from the input image or region. Since the Fisher Vector is a well-known algorithm, it is not described further here.
  • Training is generally performed as batch processing over a large amount of data. To ensure model accuracy, only images judged "no suspect" and containing a single kind of goods, i.e., a single HSCODE, enter the training link; otherwise, the region belonging to each HSCODE must be manually labeled to guarantee the correctness of the training samples.
  • a customs declaration corresponding to the input image is obtained.
  • an HSCODE model is established.
  • the HSCODE model is divided into a local model and a cloud model.
  • The cloud model is trained on a large number of historical images and delivered to the user; in a new product without historical images, it is built in as a local file.
  • The local model is trained offline after the user has accumulated a larger number of images (e.g., more than 20,000).
  • the cloud model uses both real-time and offline updates to maintain the largest collection of models.
  • When the local model is updated, the cloud model is updated at the same time.
  • When the local model and the cloud model both exist, the local model is matched preferentially. The system can also be configured to use only the local model whenever the local model exists and its templates are sufficient.
  • the HSCODE model is divided into 6-bit/8-bit/10-bit hierarchy.
  • A model with more matching digits takes precedence, i.e., the priority is 10 digits > 8 digits > 6 digits.
  • "Priority matching" here means: if a region matches both a 10-digit model A and an 8-digit model B, the region is considered to belong to model A.
  • the form of the HSCODE model is related to the feature extraction algorithm.
  • In this embodiment, the HSCODE model consists of 7 elements: {device identifier, HSCODE identifier, maximum number of templates, the templates, the template weights, the unique identifier of each template in the historical image library, similarity threshold}. The meaning of each element is given below.
  • Device identifier: indicates which device the model belongs to. A cloud model is identified as "CLOUD".
  • HSCODE identifier: the HSCODE code, which can be 6, 8, or 10 digits.
  • Maximum number of templates: this value is consistent across all models, but different devices can configure it for their local models. The larger the value, the better the variation of the goods is described, but precision decreases. In practice, values of 10-20 give good results.
  • Templates: the texture statistical features of the cargo regions corresponding to the HSCODE, which in this embodiment are Fisher Vectors.
  • Their maximum number is the "maximum number of templates", and their dimension is determined by the Fisher Vector probability dictionary model.
  • Template weights: each template has a weight, and the weights sum to 1. The larger the weight, the more representative the template is of the HSCODE; the smaller the weight, the more likely the template is to be replaced by a new sample's feature.
  • Unique identifier: each template is derived from a real image, and when its features are recorded in the model, its unique identifier, such as a serial number or manifest number, is recorded at the same time.
  • The application software can locate the corresponding historical image by this identifier.
  • Similarity threshold: if the similarity between a feature and a template is greater than or equal to this threshold, it is a match; otherwise it is a mismatch.
  • This value can come from three sources: a default value, a user-set value, or an adaptive threshold. After initialization, the adaptive threshold is adjusted as the model is updated, as described below.
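The seven-element record might be represented as follows; the field names and default values are illustrative, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class HscodeModel:
    """The 7-element HSCODE model record described above."""
    device_id: str                 # "CLOUD" for the cloud model
    hscode: str                    # 6-, 8- or 10-digit code
    max_templates: int             # e.g. 10-20 in practice
    templates: List[np.ndarray] = field(default_factory=list)
    weights: List[float] = field(default_factory=list)
    image_ids: List[str] = field(default_factory=list)  # link to history
    similarity_threshold: float = 0.8  # default / user-set / adaptive

model = HscodeModel(device_id="CLOUD", hscode="0123456789", max_templates=16)
print(model.hscode, model.max_templates)  # → 0123456789 16
```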
  • the sample space can be formed in a variety of ways.
  • FIG. 4 is a schematic flow chart describing a method of checking using a created model in a scheme according to an embodiment of the present disclosure.
  • step S41 an image of the inspected goods is input.
  • For example, the transmission image of the inspected goods is obtained using a scanning device; then, in step S42, image normalization and effective cargo area extraction are performed, for example to extract the cargo area and to reduce the impact of inconsistencies in the physical characteristics of the equipment on the image.
  • Normalization is achieved by removing image processing operations such as attenuation of the background and air, and removal of row/column stripes.
  • the cargo area is obtained through operations such as binarization, edge extraction, and container edge detection.
  • In step S43, a customs declaration corresponding to the image is acquired, and supervised image segmentation is then performed in step S44.
  • Unlike general image segmentation, here the customs declaration gives the number of kinds of goods, i.e., in the ideal case the number of categories in the image should not exceed the number of declared kinds.
  • A supervised image segmentation algorithm can therefore be used to obtain the regions of the different goods.
  • the texture segmentation method is employed to implement cargo image segmentation.
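A minimal sketch of segmentation supervised by the declared number of goods: a plain k-means on pixel intensity with k fixed to the declared count. The disclosure's texture-based segmentation is replaced here by this simpler stand-in, and all names are illustrative:

```python
import numpy as np

def supervised_segmentation(image, n_goods, n_iter=20):
    """Cluster pixels into exactly n_goods classes, the count supervised
    by the customs declaration, using k-means on intensity (the patent
    uses texture features; intensity is a stand-in)."""
    pixels = image.reshape(-1, 1).astype(float)
    # Initialize centers at intensity quantiles so they are distinct.
    centers = np.quantile(pixels.ravel(), np.linspace(0, 1, n_goods))[:, None]
    for _ in range(n_iter):
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)
        for k in range(n_goods):
            if np.any(labels == k):
                centers[k, 0] = pixels[labels == k].mean()
    return labels.reshape(image.shape)

img = np.zeros((8, 8))
img[:, 4:] = 10.0                       # two clearly separated goods
seg = supervised_segmentation(img, n_goods=2)
print(len(np.unique(seg)))              # → 2
```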
  • step S45 effective area feature extraction is performed. This step is similar to step S33 in FIG. 3 described above, and thus the description will not be repeated here.
  • In step S46, model loading is performed, for example loading the corresponding models according to the HSCODE. Since the HSCODE has a hierarchical structure and may differ across devices, the models to load can be selected in several ways. One is the "maximum loading mode": load the local model, the cloud model, and all models matching the first 6, 8, or 10 digits of the code. Another is the "minimum loading mode": load only the model in the local library whose HSCODE matches exactly. In some embodiments, the models loaded for the same HSCODE are prioritized.
  • In step S47, feature-model matching is performed. For example, after the Fisher Vector feature of an unknown region is obtained, the cosine distance is used to measure the similarity between the feature and a template; the larger the cosine value, the greater the similarity. In this implementation, the maximum similarity between the feature to be matched and the templates in a model is taken as the similarity between the feature and the model.
  • In this way a "similarity matrix" is obtained, i.e., a numerical matrix with one row per unknown region and one column per HSCODE.
  • On the one hand, a region may match multiple HSCODEs; on the other hand, an HSCODE may match multiple regions. This follows from the ambiguity of the transmission image itself, and also depends on the performance of the segmentation and similarity-metric algorithms.
  • The HSCODE model records "the unique identifier of the template in the historical image library", which is passed to the application as part of the matching result. With this identifier, the historical image closest to the image region can be located.
  • FIG. 5 is a schematic flow diagram depicting a model created by an online update in a scheme in accordance with an embodiment of the present disclosure.
  • the update link is essentially the online learning process of the model, which can be implemented by various online clustering algorithms, such as online K-means algorithm.
  • step S501 the area HSCODE is obtained.
  • the input for online updates is HSCODE and image area.
  • In step S502, model loading is performed. Updating can likewise follow several strategies, such as a "maximum update mode" (update the local model and all models in the cloud library matching the first 6, 8, or 10 digits of the code) or a "minimum update mode" (update only the model in the local library whose HSCODE matches exactly).
  • In steps S501 and S502, the HSCODE of the image area is obtained and model loading is performed.
  • In steps S505 and S506, a valid cargo area is obtained and feature extraction is performed on that area.
  • In step S503, if the number of templates in the model has not yet reached the predetermined value, the feature is added directly as a template in step S504. If the number of templates has reached the maximum, a matching step is performed in step S507: if there is a match, the weight of the matched template is increased in step S508; if not, the template with the minimum weight is replaced by the feature in step S509. The weights are then normalized in step S510, and the models are saved in step S511.
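Steps S503-S511 can be sketched as a single update function. The weight increment, the initial weight given to a replacement template, and the dictionary layout are assumptions not specified in the text:

```python
import numpy as np

def update_model(model, feature, threshold=0.8, bump=0.1):
    """One online-update step: fill free template slots first; otherwise
    bump the weight of the best-matching template on a match, or replace
    the lowest-weight template on a miss; finally renormalize weights."""
    templates, weights = model["templates"], model["weights"]
    if len(templates) < model["max_templates"]:
        templates.append(feature)          # S504: free slot available
        weights.append(1.0)
    else:
        sims = [float(np.dot(feature, t) /
                      (np.linalg.norm(feature) * np.linalg.norm(t)))
                for t in templates]        # S507: matching step
        best = int(np.argmax(sims))
        if sims[best] > threshold:
            weights[best] += bump          # S508: reinforce matched template
        else:
            worst = int(np.argmin(weights))
            templates[worst] = feature     # S509: maximum-differentiation swap
            weights[worst] = 1.0 / len(weights)
    total = sum(weights)
    model["weights"] = [w / total for w in weights]  # S510: normalize
    return model

m = {"max_templates": 2,
     "templates": [np.array([1.0, 0.0]), np.array([0.0, 1.0])],
     "weights": [0.5, 0.5]}
update_model(m, np.array([0.9, 0.1]))       # matches the first template
print([round(w, 2) for w in m["weights"]])  # → [0.55, 0.45]
```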
  • The update link also involves adaptive adjustment of the threshold. Whenever a matching step is executed during an update, all matching values are recorded in a histogram, which represents the distribution of the scores of correct matches. Assuming that by default 5% of goods are deemed at risk and need to be checked manually, the threshold is adaptively adjusted to the position where the cumulative score distribution reaches 5%, realizing threshold adaptation under the guidance of risk deployment.
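The adaptive-threshold rule reduces to taking a quantile of the correct-match score distribution. A sketch with the 5% default from the text (the function name is illustrative):

```python
import numpy as np

def adaptive_threshold(match_scores, risk_fraction=0.05):
    """Set the similarity threshold at the score below which the given
    fraction of historically correct matches falls, so roughly that
    share of goods is routed to manual inspection."""
    return float(np.quantile(np.asarray(match_scores), risk_fraction))

scores = np.linspace(0.5, 1.0, 101)          # scores of correct matches
print(round(adaptive_threshold(scores), 3))  # → 0.525
```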
  • Aspects of the embodiments disclosed herein may be implemented, in whole or in part, in an integrated circuit; as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems); as one or more programs running on one or more processors (e.g., on one or more microprocessors); as firmware; or as substantially any combination of the above. In light of the present disclosure, those skilled in the art will be capable of designing the circuitry and/or writing the software and/or firmware code.
  • signal bearing media include, but are not limited to, recordable media such as floppy disks, hard drives, compact disks (CDs), digital versatile disks (DVDs), digital tapes, computer memories, and the like; and transmission-type media such as digital and / or analog communication media (eg, fiber optic cable, waveguide, wired communication link, wireless communication link, etc.).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Quality & Reliability (AREA)
  • Economics (AREA)
  • Accounting & Taxation (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Image Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)

Abstract

A method and system for inspecting goods. The method comprises: obtaining a transmission image and an HSCODE of inspected goods; processing the transmission image to obtain a region of interest; retrieving, from a model library, a model created on the basis of HSCODE by using the HSCODE of the inspected goods; and judging, based on the model, whether the region of interest contains goods not declared in the customs declaration. With the above method and system, containerized goods can be effectively inspected to find whether they conceal goods not declared in the customs declaration.

Description

Method and system for inspecting goods
Technical field
The present disclosure relates to the field of radiation imaging security inspection and, in particular, to the automatic inspection of containers to determine whether false or concealed declaration is involved.
Background art
Intelligent inspection is a hot area of development in the security inspection field. With Internet technology now pervasive and cloud computing gradually entering every industry, intelligent security inspection has increasingly become a focal issue for customs administrations worldwide. Intelligent inspection can provide customers with faster and more convenient service and improve inspection efficiency, and it also gives customs inspectors more valuable information while raising seizure rates; it is one of the important ways for vendors in this industry to increase the value of their products. One approach in intelligent solutions is to use customs declaration/manifest data (hereinafter referred to as the customs declaration) and to compare images against declarations through image processing and semantic understanding, thereby detecting false and concealed declarations.
However, this technology is still in an early stage of development; the means are immature, and algorithms or software systems can still hardly satisfy user needs fully. For example, declaration comparison has been implemented by image matching using declaration information, but this technique is too idealized and in fact performs poorly: it can hardly cope with the severe non-rigid deformation and perspective superposition in transmission images, and it is difficult to apply to real-time processing over large-scale categories. In addition, under big-data reasoning conditions, image classification algorithms can realize declaration analysis and comparison, but their effectiveness is limited in the case of large-scale categories.
Therefore, the performance of existing declaration comparison algorithms is constrained by many factors, such as large-scale categories, regional differences between categories, self-learning of new categories, large intra-class variation, performance differences between devices, and distinguishing image regions in the presence of multiple goods in one container and perspective overlap. Prior-art methods do not analyze these issues and can hardly meet user needs in practice.
Summary of the invention
In view of one or more technical problems in the prior art, the present disclosure proposes a method and system for inspecting goods.
In one aspect of the present disclosure, a method for inspecting goods is proposed, comprising the steps of: obtaining a transmission image and an HSCODE of inspected goods; processing the transmission image to obtain a region of interest; retrieving, from a model library, a model created on the basis of HSCODE by using the HSCODE of the inspected goods; and judging, based on the model, whether the region of interest contains goods not declared in the customs declaration.
Preferably, the step of processing the transmission image to obtain a region of interest comprises the step of: performing supervised image segmentation on the transmission image, with the goods category represented by the HSCODE of the inspected goods as a supervision value, to obtain at least one segmented region as the region of interest.
Preferably, the step of judging, based on the model, whether the region of interest contains goods not declared in the customs declaration comprises: performing feature extraction on each segmented region to obtain a texture description of each segmented region, forming a feature vector; judging whether the similarity between each template included in the model and the feature vector of each segmented region is greater than a threshold; and determining that the inspected goods contain goods not declared in the customs declaration when the similarity between the feature vector of at least one segmented region and each template of the model is not greater than the threshold.
Preferably, the step of retrieving, from a model library, a model created on the basis of HSCODE by using the HSCODE of the inspected goods comprises: retrieving, from a local model library and/or a cloud model library, all models corresponding to the first predetermined digits of the HSCODE.
Preferably, the retrieved models are sorted, and whether the region of interest contains goods not declared in the customs declaration is judged in the sorted order; if the similarity between the feature vector of at least one segmented region and a template of at least one model is not greater than the threshold, it is determined that the inspected goods contain goods not declared in the customs declaration.
Preferably, the method further comprises the step of: updating all models in the local model library and/or the cloud model library that correspond to the first predetermined digits of the HSCODE.
Preferably, local region sampling is performed at edges in the image, multi-scale frequency-domain features of the sampled points are then extracted, and the feature vector is obtained from the multi-scale frequency-domain features.
Preferably, when the customs declaration does not include an HSCODE, the HSCODE of the goods is determined from the goods name recorded in the customs declaration.
Preferably, the templates in each model comprise feature vectors, and the number of templates is set to a template count. When the model has fewer templates than this count, the feature vector of a new sample is directly recorded as a template; when the model has reached this count, the feature vector of a sample that matches the model is not recorded as a template, and only the weight of the most similar template is increased, while when the feature vector of a new sample does not match the templates in the model, the template with the smallest weight is replaced by the feature vector of the new sample.
Preferably, the model includes at least the following information: a device identifier, an HSCODE identifier, a maximum number of templates, each template, each template's weight, each template's unique identifier in the historical image library, and a similarity threshold.
In another aspect of the present disclosure, a system for inspecting goods is proposed, comprising: a scanning device that obtains a transmission image and an HSCODE of inspected goods; and a data processing device that processes the transmission image to obtain a region of interest, retrieves, from a model library, a model created on the basis of HSCODE by using the HSCODE of the inspected goods, and judges, based on the model, whether the region of interest contains goods not declared in the customs declaration.
With the above scheme, regions in a container cargo image that are inconsistent with the declared cargo data can be found, so that such a region can be regarded as a possible false or concealed declaration.
Brief description of the drawings
For a better understanding of the present disclosure, embodiments of the present disclosure will be described with reference to the following drawings:
Figs. 1A and 1B show schematic structural diagrams of a cargo inspection system according to an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart describing a cargo inspection method according to an embodiment of the present disclosure;
Fig. 3 is a schematic flowchart describing a method of creating and training HSCODE models in a scheme according to an embodiment of the present disclosure;
Fig. 4 is a schematic flowchart describing a method of performing inspection using the created models in a scheme according to an embodiment of the present disclosure;
Fig. 5 is a schematic flowchart describing online updating of the created models in a scheme according to an embodiment of the present disclosure.
The drawings do not show all circuits or structures of the embodiments. Throughout the drawings, the same reference numerals denote the same or similar parts or features.
Detailed description of embodiments
Specific embodiments of the present disclosure will be described in detail below. It should be noted that the embodiments described herein are for illustration only and are not intended to limit the present disclosure. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be apparent to one of ordinary skill in the art that the present disclosure can be practiced without these specific details. In other instances, well-known circuits, materials, or methods have not been described in detail to avoid obscuring the present disclosure.
Throughout the specification, a reference to "one embodiment", "an embodiment", "one example", or "an example" means that a particular feature, structure, or characteristic described in connection with that embodiment or example is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases "in one embodiment", "in an embodiment", "one example", or "an example" in various places throughout the specification do not necessarily all refer to the same embodiment or example. Furthermore, the particular features, structures, or characteristics may be combined in any suitable combination and/or sub-combination in one or more embodiments or examples. Moreover, one of ordinary skill in the art will appreciate that the drawings provided herein are for illustrative purposes and are not necessarily drawn to scale. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Figs. 1A and 1B are schematic structural diagrams of an inspection system according to one embodiment of the present disclosure. Fig. 1A shows a schematic top view of the inspection system, and Fig. 1B shows a schematic front view. As shown in Figs. 1A and 1B, a radiation source 110 generates X-rays which, after being collimated by a collimator 120, are used to perform a security inspection of a moving container truck 140; a detector 150 receives the rays penetrating the truck, and a transmission image is obtained in a data processing device 160 such as a computer.
According to an embodiment of the present disclosure, after the transmission image of the container truck 140 is obtained by scanning, the transmission image is processed in the data processing device 160 to obtain a region of interest; a model created on the basis of HSCODE is retrieved from a model library using the HSCODE of the inspected goods; and whether the region of interest contains goods not declared in the customs declaration is judged based on the model. In this way, containerized goods can be automatically checked for false/concealed declaration.
The present disclosure proposes using the HSCODE (Harmonized System Code), established by the World Customs Organization, as the unique identifier of goods for comparison; that is, a model is built for each HSCODE, and the model contains a feature space that can describe the image features of the goods corresponding to that HSCODE. In some embodiments, to address the multi-level hierarchical structure of categories, a strategy of modeling each level separately and matching level by level during comparison is adopted. For example, the internationally common HSCODE is a 6-digit code, and the subsequent digits are defined by each country. Among HSCODEs commonly seen in goods imported and exported through Chinese customs in 2013, there were 6341 second-level (8-digit) codes and 6735 third-level (10-digit) codes, 13076 codes in total. For generality, models are built in three layers, i.e., 6-digit/8-digit/10-digit. Assuming an article has the 10-digit code "0123456789", the matching strategy may be: match it against the 6-digit model "012345", the 8-digit model "01234567", and the 10-digit model "0123456789" respectively, to overcome the problems of large-scale categories, regional differences between categories, and large intra-class variation.
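The level-by-level matching strategy described above can be sketched as follows. The function names and the in-memory model store are illustrative assumptions, not part of the patent:

```python
def candidate_keys(hscode: str):
    """Return the 10-, 8-, and 6-digit prefixes of an HSCODE,
    longest first, so that more specific models take priority."""
    return [hscode[:n] for n in (10, 8, 6) if len(hscode) >= n]

def lookup_models(hscode: str, model_store: dict):
    """Collect all models whose key matches a prefix of the code,
    ordered 10-digit > 8-digit > 6-digit."""
    return [model_store[k] for k in candidate_keys(hscode) if k in model_store]

# For the 10-digit code "0123456789" the lookup tries, in order,
# "0123456789", "01234567", and "012345".
```

With this ordering, a region that matches a more specific model is attributed to it before any coarser model is consulted, mirroring the 10-digit > 8-digit > 6-digit priority described later in the text.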
Fig. 2 is a schematic flowchart describing a cargo inspection method according to an embodiment of the present disclosure. As shown in Fig. 2, in step S21, a transmission image and an HSCODE of the inspected goods are obtained. For example, a transmission image of the inspected container is obtained using the scanning device shown in Figs. 1A and 1B, and the HSCODE is obtained from the customs declaration. When the declaration does not contain an HSCODE, the corresponding HSCODE is determined from the name of the goods.
In step S22, the transmission image is processed to obtain a region of interest. For example, to extract the cargo region and minimize the influence of inconsistent physical characteristics of devices on the image: first, normalization is achieved through image processing operations such as removing the attenuation caused by background and air and removing row/column stripes; second, the cargo region is obtained through operations such as binarization, edge extraction, and container edge detection.
In step S23, a model created on the basis of HSCODE is retrieved from the model library using the HSCODE of the inspected goods. In step S24, whether the region of interest contains goods not declared in the customs declaration is judged based on the model.
The present disclosure proposes building a device-side model (local model) and a cloud model to overcome differences between devices. The cloud model comes from a computing center, is updated online, and maintains the most complete set of HSCODE categories; however, since it normalizes device differences, its accuracy is lower than that of the local model. The local model is generated on the device side after a sufficient quantity of historical images has been accumulated there; it fits that device better, but covers fewer HSCODE categories than the cloud. A new device has no local model of its own and can only use the cloud model. After enough images have been accumulated and a device model has been trained, the local model is automatically selected for comparison instead of the cloud model. Note that the cloud model need not be used online; it may be used offline or synchronized periodically.
In addition, for an image to be analyzed, after a user instruction is obtained, its features are propagated to the local model and the cloud model, thereby realizing a self-learning function. This update may generate a new corresponding HSCODE model, or it may modify the current model.
The problem of multiple goods in one container is difficult to solve completely under current technical conditions; only reasonably feasible results can be obtained to a certain extent. Strictly speaking, it is an ambiguous, complex segmentation problem supervised by declaration data and affected by device inconsistency. For example, under different devices, the declaration data provides multiple supervision values (e.g., how many kinds of goods there are, and the type and unit weight of each), and each pixel in the image may belong to multiple goods. The complexity also lies in the fact that these factors may appear in inconsistent forms and may not be accurate. A supervised texture image segmentation algorithm can be used to address this problem.
In view of the problems in the prior art, the technical scheme of the present disclosure proposes performing declaration comparison with HSCODE as the basis; the HSCODE model has a hierarchical structure, and a local/cloud dual-model strategy can be adopted. Furthermore, supervised texture image segmentation and regional texture description can be used for feature extraction, with the distance between features as the similarity measure. The HSCODE model can also be updated according to a maximum-differentiation principle, realizing a system self-learning function.
Declaration comparison is implemented with HSCODE as the basis. A separate model is built for each HSCODE. From the HSCODE perspective, the models form a 6-digit/8-digit/10-digit hierarchy; from the device perspective, the models are divided into local models and cloud models. Note that HSCODE is not strictly necessary for declaration comparison. For example, a declaration may contain only a goods name and no code; a general approach is then to obtain corresponding historical images by name parsing and text retrieval, and to perform the comparison against those historical images. Preferably, for declarations without a code, the HSCODE is obtained through a mapping from the goods name to HSCODE, so that the corresponding model can be found. To reduce the influence of device inconsistency, device-specific local models are trained. When no local model is available, the device-independent cloud model is used. The cloud model is continuously updated and maintains the largest set of models. The local model is independent of the cloud model; the two may be identical or may use different algorithms.
Supervised image segmentation is performed using the HSCODE goods categories as supervision values, and a regional texture description, i.e., a feature vector, is obtained for each segmented region. An HSCODE model stores the feature vectors of multiple historical images. The distance between feature vectors is the similarity. Preferably, the maximum similarity between an unknown sample's feature and the multiple vectors (i.e., templates) in the model is taken as the similarity between the sample and that HSCODE. Note that there are various options for distinguishing cargo image regions and for feature extraction, such as dividing regions by image column clustering, or forming features from image feature patches and their statistics.
The above model can have a self-learning capability, including online creation and updating. The present disclosure updates the HSCODE model according to the maximum-differentiation principle. To keep the model controllable and reduce the influence of inconsistent sample sizes, the features in each model are called "templates", and the number of templates is set to a uniform "template count". When a model has fewer templates than this count, the feature of a new sample is directly recorded as a template; when the model has reached this count, a sample that matches the model is not recorded as a template, and only the weight of the most similar template is increased, while when a new sample does not match the model, the template with the smallest weight is replaced by the new sample's feature. As a result, the templates in an HSCODE model form the most differentiated set of templates, supporting the model's feature space. Note that the maximum-differentiation principle can be implemented with various online learning methods.
The concrete implementation of this technical scheme involves three stages: training, use, and online updating. The training stage consists of 3 steps: image normalization and valid cargo region acquisition; valid region feature extraction; and HSCODE model creation. The use stage consists of 5 steps: image normalization and valid cargo region acquisition; supervised image segmentation; model loading; region feature extraction; and feature-model matching. Online updating creates a new model or updates an existing one after a sample is confirmed to conform to the declaration.
Fig. 3 is a schematic flowchart describing a method of creating and training HSCODE models in a scheme according to an embodiment of the present disclosure. As shown in Fig. 3, in step S31, sample images are obtained, and then in step S32, image normalization and valid cargo region acquisition are performed. To extract the cargo region and reduce the influence of inconsistent physical characteristics of devices on the image, normalization may first be achieved through image processing operations such as removing the attenuation caused by background and air and removing row/column stripes; second, the cargo region is obtained through operations such as binarization, edge extraction, and container edge detection.
In step S33, valid region feature extraction is performed. Preferably, texture statistical features, in particular edge-sampling-based texture statistical features, can be chosen to describe a region. For example: i) to emphasize edge information, local region sampling is performed at edges in the image; ii) to emphasize texture characteristics, the present disclosure uses textons to extract multi-scale frequency-domain features of the sampled points; iii) to effectively describe the statistics of these texture features, a Fisher Vector is used to obtain the final feature vector. Those skilled in the art will readily conceive of many alternatives to this algorithm, such as replacing edge sampling with various corner detection methods such as HARRIS, replacing textons with descriptors such as SIFT or HOG, using other bag-of-words variants such as ScSPM (Sparse Coding Spatial Pyramid Matching), or obtaining the feature vector with deep learning methods such as R-CNN (Regions with CNN features).
Note that feature extraction in the training stage differs from the other stages: all texton features of the images in the image library are extracted first, and the probabilistic dictionary model required by the Fisher Vector is then trained on all textons. After the probabilistic dictionary model is obtained, the textons of each image are converted into Fisher Vectors. For the use and update stages, the probabilistic dictionary model is known, and the Fisher Vector feature can be obtained directly from an input image or region. Since the Fisher Vector is a well-known algorithm, it is not described in further detail here.
In addition, training generally operates as batch processing of large amounts of data. To ensure model accuracy, only images that are considered "unsuspicious" and contain a single kind of goods, i.e., a single HSCODE, enter the training stage. Otherwise, the region positions belonging to each HSCODE would have to be labeled manually to ensure the correctness of the training samples.
In step S34, the customs declaration corresponding to the input image is obtained. In step S35, the HSCODE model is created. HSCODE models are divided into local models and cloud models. The cloud model is trained on a large number of historical images and provided for users; it is built, in the form of a local file, into new products that contain no historical images. The local model is trained offline after the user has accumulated a relatively large number of images (e.g., more than 20,000). The cloud model is updated in both real-time and offline modes and maintains the largest model collection. When the local model is updated, the cloud model is updated at the same time. When both local and cloud models exist, the local model is matched with priority. The system may also be configured to use only the local model when it exists and has enough templates.
HSCODE models form a 6-digit/8-digit/10-digit hierarchy. Models with more digits are matched with priority, i.e., 10-digit > 8-digit > 6-digit. "Matched with priority" means: if a region matches both a 10-digit model A and an 8-digit model B, the region is deemed to belong to model A.
The form of the HSCODE model depends on the feature extraction algorithm. In an embodiment of the present disclosure, the HSCODE model consists of 7 elements, i.e., {device identifier, HSCODE identifier, maximum number of templates, each template, each template's weight, each template's unique identifier in the historical image library, similarity threshold}. The meaning of each element is given below.
Device identifier: indicates which device this model belongs to. For a cloud model, the identifier is "CLOUD".
HSCODE identifier: the HSCODE code, which may have 6/8/10 digits.
Maximum number of templates: this value is the same for all models, but different devices can configure the maximum number of templates of their local models. The larger this value, the better the inconsistency of goods is described, but precision also decreases. In practical applications, values of 10 to 20 give good results.
Each template: the texture statistical feature of the cargo region corresponding to the HSCODE, which in this embodiment is a Fisher Vector. Their number is at most the "maximum number of templates", and their dimensionality is determined by the Fisher Vector probabilistic dictionary model.
Each template's weight: every template has a weight, and these weights sum to 1. The larger the weight, the more representative the template is of the HSCODE; the smaller the weight, the more likely the template is to be replaced by a new sample's feature.
Each template's unique identifier in the historical image library: every template originates from a real image; when its feature is recorded in the model, its unique identifier, such as a serial number or manifest number, is recorded at the same time. The application software can use this identifier to find the corresponding historical image.
Similarity threshold: if the distance between a feature and a template is greater than or equal to this threshold, they match; otherwise they do not. This value can come from 3 sources: a default value, a user-set value, or an adaptive threshold. The adaptive threshold is adjusted after initialization as the model is updated; an embodiment is described below.
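The seven-element model record described above might be represented as in the following sketch; the class and field names are illustrative, not the patent's actual storage format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HSCodeModel:
    device_id: str                # "CLOUD" for a cloud model, else a device identifier
    hscode: str                   # 6-, 8-, or 10-digit HSCODE
    max_templates: int            # uniform template count, e.g. 10-20 in practice
    templates: List[list] = field(default_factory=list)  # Fisher Vector per template
    weights: List[float] = field(default_factory=list)   # one weight per template, sum to 1
    image_ids: List[str] = field(default_factory=list)   # serial/manifest number per template
    threshold: float = 0.5        # default, user-set, or adaptively adjusted
```

A fresh model starts with empty template, weight, and image-identifier lists, which the online update procedure fills in.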
After the Fisher Vector features of each known HSCODE are obtained in step S33, if the number of features is less than the predetermined maximum number of templates, the features are given equal weights and recorded in the HSCODE model together with the other necessary information. If the number of features is greater than the predetermined maximum number of templates, the sample space can be formed in several ways.
Fig. 4 is a schematic flowchart describing a method of performing inspection using the created models in a scheme according to an embodiment of the present disclosure.
As shown in Fig. 4, in step S41, an image of the inspected goods is input; for example, a transmission image of the inspected goods is obtained with a scanning device. Then, in step S42, image normalization and valid cargo region extraction are performed. For example, to extract the cargo region and reduce the influence of inconsistent physical characteristics of devices on the image: first, normalization is achieved through image processing operations such as removing the attenuation caused by background and air and removing row/column stripes; second, the cargo region is obtained through operations such as binarization, edge extraction, and container edge detection.
In step S43, the customs declaration corresponding to the image is obtained, and supervised image segmentation is then performed in step S44. The difference from general image segmentation is that the declaration specifies the number of goods categories, i.e., the number of class labels in an ideal image should not exceed the number of goods categories. Accordingly, a supervised image segmentation algorithm can be used to obtain the regions of different goods. In some embodiments, a texture segmentation method is used for cargo image segmentation.
In step S45, valid region feature extraction is performed. This step is similar to step S33 in Fig. 3 above and is not described again here. In step S46, model loading is performed; for example, the corresponding models are loaded according to the HSCODE. Since HSCODE has a hierarchical structure across different devices, the models to be loaded can be selected in several ways, such as a "maximum loading mode", i.e., loading all models in the local and cloud model libraries that match the first 6, 8, and 10 digits of the code, or a "minimum loading mode", loading only the local model whose HSCODE matches exactly. In some embodiments, the models loaded under the same HSCODE are arranged by priority.
In step S47, feature-model matching is performed. For example, after the Fisher Vector feature of an unknown region is obtained, the cosine distance is used to measure the distance between the feature and the templates; the larger the cosine distance value, the greater the similarity. This embodiment takes the maximum similarity between each template in a model and the feature to be matched as the similarity between the feature and the model.
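The cosine-based feature-model matching of step S47 amounts to the following sketch, assuming plain Python lists as feature vectors (the helper names are illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors; larger means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def model_similarity(feature, templates):
    """Similarity between a region feature and a model: the maximum
    similarity over the model's templates."""
    return max(cosine_similarity(feature, t) for t in templates)

def matches(feature, templates, threshold):
    """A region matches a model when its similarity reaches the threshold."""
    return model_similarity(feature, templates) >= threshold
```

Taking the maximum over templates means a region needs to resemble only one recorded appearance of the goods, which tolerates the large intra-class variation mentioned earlier.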
Since the models are arranged by priority in step S46, computation in this step stops once a matching model is encountered. Note that a "similarity matrix" can be obtained in this step, i.e., a numerical matrix whose rows are the unknown regions and whose columns are the HSCODEs. On one hand, one region may match multiple HSCODEs; on the other hand, one HSCODE may match multiple regions. This is determined by the inherent ambiguity of transmission images, and is also related to the performance of the segmentation, similarity measurement, and other algorithms.
If a region cannot be matched to any loaded model, it indicates a false or concealed declaration.
In addition, the HSCODE model in this embodiment records "the template's unique identifier in the historical image library", which is passed to the application as part of the matching result. Through this identifier, the historical image most similar to the image region can be found.
Fig. 5 is a schematic flowchart describing online updating of the created models in a scheme according to an embodiment of the present disclosure. The update stage is essentially an online learning process for the model, which can be implemented with various online clustering algorithms, such as online K-means.
As shown in Fig. 5, in step S501, the region's HSCODE is obtained; for example, the inputs for an online update are the HSCODE and the image region. In step S502, model loading is performed. Updates can also follow several strategies, such as a "maximum update mode", i.e., updating all models in the local and cloud model libraries that match the first 6, 8, and 10 digits of the code, or a "minimum update mode", updating only the local model whose HSCODE matches exactly.
In steps S501 and S502, the HSCODE of the image region is obtained and model loading is performed. In steps S505 and S506, a valid cargo region is obtained and feature extraction is performed on that region. In step S503, if the number of templates in the model has not reached the predetermined value, the feature is directly added as a template in step S504. If the number of templates has reached the maximum, a matching step is performed in step S507. If there is a match, the weight of the matched template is increased in step S508; if not, the template with the smallest weight is replaced by the feature in step S509. The weights are then normalized in step S510, and each model is saved in step S511.
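Steps S503 through S510 amount to the following update rule, a minimal sketch of the maximum-differentiation update assuming a model held as parallel template/weight lists and some similarity function; the names and the fixed weight boost are illustrative assumptions:

```python
def update_model(model, feature, similarity_fn, weight_boost=0.1):
    """One online update: add the feature while below capacity (S503/S504);
    otherwise boost the best-matching template's weight (S507/S508) or
    replace the lowest-weight template on a miss (S509); then renormalize (S510)."""
    if len(model["templates"]) < model["max_templates"]:
        model["templates"].append(feature)      # room left: record as a new template
        model["weights"].append(1.0)
    else:
        sims = [similarity_fn(feature, t) for t in model["templates"]]
        best = max(range(len(sims)), key=sims.__getitem__)
        if sims[best] >= model["threshold"]:    # match: reward the closest template
            model["weights"][best] += weight_boost
        else:                                   # miss: replace the weakest template
            worst = min(range(len(model["weights"])), key=model["weights"].__getitem__)
            model["templates"][worst] = feature  # keeps the (smallest) old weight
    total = sum(model["weights"])               # renormalize so weights sum to 1
    model["weights"] = [w / total for w in model["weights"]]
    return model
```

Replacing only the lowest-weight template on a miss is what drives the template set toward maximal mutual difference while bounding its size.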
Note that the case where the model does not exist is a special case of updating; a new model is then generated, containing only 1 feature with a weight of 1.
In addition, the update stage also involves adaptive adjustment of the threshold. If a matching step is executed during an update, all matching values are recorded in the form of a histogram, whose content is the distribution of scores of all correct matches. Assuming the default risk-deployment index is that 5% of goods require manual inspection, the threshold is adaptively adjusted to the position where the cumulative score distribution reaches 5%, thereby realizing adaptive threshold adjustment under the guidance of risk deployment.
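The risk-guided threshold adaptation might look like the following sketch: the threshold is placed at the score below which the chosen fraction (here 5%) of correct-match scores falls, so roughly that fraction of goods gets flagged for manual inspection. The function name and the quantile-style implementation (instead of an explicit histogram) are illustrative assumptions:

```python
def adaptive_threshold(match_scores, risk_fraction=0.05):
    """Place the threshold at the risk_fraction quantile of the recorded
    correct-match score distribution (assumes a non-empty score list)."""
    scores = sorted(match_scores)
    cut = int(len(scores) * risk_fraction)  # number of lowest scores to flag
    return scores[cut]
```

As more matches are recorded, recomputing this quantile keeps the manual-inspection rate near the configured risk-deployment index.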
The foregoing detailed description has set forth numerous embodiments of the inspection method and system through the use of schematic diagrams, flowcharts, and/or examples. Where such schematic diagrams, flowcharts, and/or examples contain one or more functions and/or operations, those skilled in the art will understand that each function and/or operation in such schematic diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of structures, hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter of the embodiments of the present disclosure can be implemented by application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein can, in whole or in part, be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that, in light of the present disclosure, those skilled in the art will be capable of designing the circuitry and/or writing the software and/or firmware code. In addition, those skilled in the art will recognize that the mechanisms of the subject matter of the present disclosure can be distributed as program products in a variety of forms, and that the exemplary embodiments of the subject matter of the present disclosure apply regardless of the particular type of signal-bearing medium actually used to carry out the distribution. Examples of signal-bearing media include, but are not limited to: recordable media such as floppy disks, hard disk drives, compact disks (CDs), digital versatile disks (DVDs), digital tape, and computer memory; and transmission-type media such as digital and/or analog communication media (e.g., fiber-optic cables, waveguides, wired communication links, wireless communication links, etc.).
While the present disclosure has been described with reference to several exemplary embodiments, it should be understood that the terms used are illustrative and exemplary rather than restrictive. Since the present disclosure can be embodied in many forms without departing from its spirit or essence, it should be understood that the above embodiments are not limited to any of the foregoing details, but should be construed broadly within the spirit and scope defined by the appended claims; therefore, all changes and modifications falling within the claims or their equivalent scope should be covered by the appended claims.

Claims (11)

  1. A method for inspecting goods, comprising the steps of:
    obtaining a transmission image and an HSCODE of inspected goods;
    processing the transmission image to obtain a region of interest;
    retrieving, from a model library, a model created on the basis of HSCODE by using the HSCODE of the inspected goods; and
    judging, based on the model, whether the region of interest contains goods not declared in a customs declaration.
  2. The method according to claim 1, wherein the step of processing the transmission image to obtain a region of interest comprises the step of:
    performing supervised image segmentation on the transmission image, with the goods category represented by the HSCODE of the inspected goods as a supervision value, to obtain at least one segmented region as the region of interest.
  3. The method according to claim 2, wherein the step of judging, based on the model, whether the region of interest contains goods not declared in the customs declaration comprises:
    performing feature extraction on each segmented region to obtain a texture description of each segmented region, forming a feature vector;
    judging whether the similarity between each template included in the model and the feature vector of each segmented region is greater than a threshold; and
    determining that the inspected goods contain goods not declared in the customs declaration when the similarity between the feature vector of at least one segmented region and each template of the model is not greater than the threshold.
  4. The method according to claim 3, wherein the step of retrieving, from a model library, a model created on the basis of HSCODE by using the HSCODE of the inspected goods comprises:
    retrieving, from a local model library and/or a cloud model library, all models corresponding to the first predetermined digits of the HSCODE.
  5. The method according to claim 4, wherein the retrieved models are sorted, and whether the region of interest contains goods not declared in the customs declaration is judged in the sorted order; if the similarity between the feature vector of at least one segmented region and a template of at least one model is not greater than the threshold, it is determined that the inspected goods contain goods not declared in the customs declaration.
  6. The method according to claim 3, further comprising the step of:
    updating all models in a local model library and/or a cloud model library that correspond to the first predetermined digits of the HSCODE.
  7. The method according to claim 3, wherein local region sampling is performed at edges in the image, multi-scale frequency-domain features of the sampled points are then extracted, and the feature vector is obtained from the multi-scale frequency-domain features.
  8. The method according to claim 1, wherein, when the customs declaration does not include an HSCODE, the HSCODE of the goods is determined from the goods name recorded in the customs declaration.
  9. The method according to claim 1, wherein the templates in each model comprise feature vectors and the number of templates is set to a template count; when the model has fewer templates than this count, the feature vector of a new sample is directly recorded as a template; when the model has reached this count, the feature vector of a sample that matches the model is not recorded as a template and only the weight of the most similar template is increased, while when the feature vector of a new sample does not match the templates in the model, the template with the smallest weight is replaced by the feature vector of the new sample.
  10. The method according to claim 1, wherein the model includes at least the following information: a device identifier, an HSCODE identifier, a maximum number of templates, each template, each template's weight, each template's unique identifier in a historical image library, and a similarity threshold.
  11. A system for inspecting goods, comprising:
    a scanning device that obtains a transmission image and an HSCODE of inspected goods; and
    a data processing device that processes the transmission image to obtain a region of interest, retrieves, from a model library, a model created on the basis of HSCODE by using the HSCODE of the inspected goods, and judges, based on the model, whether the region of interest contains goods not declared in a customs declaration.
PCT/CN2016/097575 2015-11-18 2016-08-31 Method and system for inspecting goods WO2017084408A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510795436.9 2015-11-18
CN201510795436.9A CN106706677B (zh) 2015-11-18 2015-11-18 Method and system for inspecting goods

Publications (1)

Publication Number Publication Date
WO2017084408A1 true WO2017084408A1 (zh) 2017-05-26

Family

ID=57280942

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/097575 WO2017084408A1 (zh) 2015-11-18 2016-08-31 Method and system for inspecting goods

Country Status (9)

Country Link
US (1) US10262410B2 (zh)
EP (1) EP3171332B1 (zh)
JP (1) JP6632953B2 (zh)
KR (1) KR101917000B1 (zh)
CN (1) CN106706677B (zh)
BR (1) BR102016022619A2 (zh)
PL (1) PL3171332T3 (zh)
SG (1) SG10201607955PA (zh)
WO (1) WO2017084408A1 (zh)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10402697B2 (en) * 2016-08-01 2019-09-03 Nvidia Corporation Fusing multilayer and multimodal deep neural networks for video classification
CN108108744B (zh) * 2016-11-25 2021-03-02 同方威视技术股份有限公司 Method and system for aided analysis of radiation images
US10417602B2 (en) * 2017-04-18 2019-09-17 International Bridge, Inc. Item shipping screening and validation
WO2018217635A1 (en) 2017-05-20 2018-11-29 Google Llc Application development platform and software development kits that provide comprehensive machine learning services
CN109522913B (zh) * 2017-09-18 2022-07-19 同方威视技术股份有限公司 Inspection method, inspection device, and computer-readable medium
KR101969022B1 (ko) * 2017-12-29 2019-04-15 (주)제이엘케이인스펙션 Image analysis apparatus and method
JP6863326B2 (ja) * 2018-03-29 2021-04-21 日本電気株式会社 Sorting support device, sorting support system, sorting support method, and program
CN112106081A (zh) * 2018-05-07 2020-12-18 谷歌有限责任公司 Application development platform and software development kits providing comprehensive machine learning services
CN109002841B (zh) * 2018-06-27 2021-11-12 淮阴工学院 Building component extraction method based on the Faster-RCNN model
WO2020075307A1 (ja) * 2018-10-12 2020-04-16 日本電気株式会社 Gate device, gate device control method, and recording medium
KR102032796B1 (ko) * 2018-12-07 2019-11-08 (주)제이엘케이인스펙션 Image analysis apparatus and method
CN111382635B (zh) * 2018-12-29 2023-10-13 杭州海康威视数字技术股份有限公司 Commodity category recognition method and apparatus, and electronic device
CN111461152B (zh) * 2019-01-21 2024-04-05 同方威视技术股份有限公司 Cargo detection method and apparatus, electronic device, and computer-readable medium
CN110706263B (zh) * 2019-09-30 2023-06-06 武汉工程大学 Image processing method, apparatus, device, and computer-readable storage medium
US10854055B1 (en) 2019-10-17 2020-12-01 The Travelers Indemnity Company Systems and methods for artificial intelligence (AI) theft prevention and recovery
KR102300796B1 (ko) 2020-02-03 2021-09-13 한국과학기술연구원 Method for supporting X-ray image reading using an image conversion model, and system performing the same
CN113407753A (zh) * 2020-03-16 2021-09-17 清华大学 Semantics-based perspective image retrieval method and device
KR102378742B1 (ko) 2020-07-30 2022-03-28 한국과학기술연구원 System and method for supporting a user's X-ray image reading
KR102473165B1 (ko) 2020-07-30 2022-12-01 한국과학기술연구원 CCTV control system capable of measuring the actual size of a subject using a 3D environment model
USD980454S1 (en) * 2020-09-23 2023-03-07 Mighty Buildings, Inc. Studio building
CN112288371A (zh) * 2020-11-03 2021-01-29 深圳壹账通智能科技有限公司 Customs clearance inspection method and apparatus, electronic device, and computer-readable storage medium
KR102426750B1 (ko) * 2020-12-02 2022-07-28 소프트온넷(주) X-ray image multiple reading system and method
US11989586B1 (en) 2021-06-30 2024-05-21 Amazon Technologies, Inc. Scaling up computing resource allocations for execution of containerized applications
US11892418B1 (en) * 2021-06-30 2024-02-06 Amazon Technologies, Inc. Container image inspection and optimization
US11995466B1 (en) 2021-06-30 2024-05-28 Amazon Technologies, Inc. Scaling down computing resource allocations for execution of containerized applications
CN117495861A (zh) * 2024-01-02 2024-02-02 同方威视科技江苏有限公司 Security inspection image checking method and apparatus

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1690115A1 (en) * 2003-11-24 2006-08-16 Passport Systems, Inc. Adaptive scanning of materials using nuclear resonance fluorescence imaging
CN101473439A (zh) * 2006-04-17 2009-07-01 全视Cdm光学有限公司 Arrayed imaging systems and associated methods
CN101960333A (zh) * 2007-11-19 2011-01-26 美国科技工程公司 Multiple-image collection and synthesis for personnel screening
US20110182805A1 (en) * 2005-06-17 2011-07-28 Desimone Joseph M Nanoparticle fabrication methods, systems, and materials
EP2504259A1 (de) * 2009-10-02 2012-10-03 TGW Logistics Group GmbH Conveying device and method for operating a conveying device
CN104751163A (zh) * 2013-12-27 2015-07-01 同方威视技术股份有限公司 Perspective inspection system and method for automatic classification and recognition of goods
CN105784732A (zh) * 2014-12-26 2016-07-20 同方威视技术股份有限公司 Inspection method and inspection system

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6463419B1 (en) * 2000-03-07 2002-10-08 Chartering Solutions Internet system for exchanging and organizing vessel transport information
CN1650306A (zh) * 2002-04-30 2005-08-03 本田技研工业株式会社 System and method for supporting tariff code selection
JP3725838B2 (ja) 2002-04-30 2005-12-14 本田技研工業株式会社 Tariff code selection support system
US20080231454A1 (en) * 2007-03-23 2008-09-25 Diamond Arrow Communications L.L.C. Cargo Container Monitoring Device
CN101435783B (zh) * 2007-11-15 2011-01-26 同方威视技术股份有限公司 Substance identification method and device
WO2011011894A1 (en) * 2009-07-31 2011-02-03 Optosecurity Inc. Method and system for identifying a liquid product in luggage or other receptacle
MX2014002728A (es) * 2011-09-07 2014-08-22 Rapiscan Systems Inc X-ray inspection system integrating manifest data with detection/imaging processing.
KR101180471B1 (ko) 2011-09-27 2012-09-07 (주)올라웍스 Method, apparatus, and computer-readable recording medium for managing a reference face database to improve face recognition performance in a limited-memory environment
CN105808555B (zh) * 2014-12-30 2019-07-26 清华大学 Method and system for inspecting goods
EP3420563A4 (en) * 2016-02-22 2020-03-11 Rapiscan Systems, Inc. SYSTEMS AND METHODS FOR THREAT DETECTION AND SMUGGLING IN CARGO
US10115211B2 (en) * 2016-03-25 2018-10-30 L3 Security & Detection Systems, Inc. Systems and methods for reconstructing projection images from computed tomography volumes
CN108108744B (zh) * 2016-11-25 2021-03-02 同方威视技术股份有限公司 Method and system for aided analysis of radiation images
US10268924B2 (en) * 2016-12-05 2019-04-23 Sap Se Systems and methods for integrated cargo inspection


Also Published As

Publication number Publication date
EP3171332A1 (en) 2017-05-24
SG10201607955PA (en) 2017-06-29
US20170140526A1 (en) 2017-05-18
JP2017097853A (ja) 2017-06-01
BR102016022619A2 (pt) 2017-05-23
KR20170058263A (ko) 2017-05-26
PL3171332T3 (pl) 2019-07-31
EP3171332B1 (en) 2019-01-30
KR101917000B1 (ko) 2018-11-08
US10262410B2 (en) 2019-04-16
JP6632953B2 (ja) 2020-01-22
CN106706677B (zh) 2019-09-03
CN106706677A (zh) 2017-05-24

Similar Documents

Publication Publication Date Title
WO2017084408A1 (zh) Method and system for inspecting goods
Alfarisy et al. Deep learning based classification for paddy pests & diseases recognition
CA3066029A1 (en) Image feature acquisition
Khan et al. Real-time plant health assessment via implementing cloud-based scalable transfer learning on AWS DeepLens
Batool et al. [Retracted] An IoT and Machine Learning‐Based Model to Monitor Perishable Food towards Improving Food Safety and Quality
Qiao et al. Detection and classification of early decay on blueberry based on improved deep residual 3D convolutional neural network in hyperspectral images
CN108877947A (zh) 基于迭代均值聚类的深度样本学习方法
Behera et al. Automatic classification of mango using statistical feature and SVM
Henila et al. Segmentation using fuzzy cluster‐based thresholding method for apple fruit sorting
CN105809087A (zh) 辐射检查系统及车型模板检索方法
JP2009281742A (ja) 判別方法、判別装置及びプログラム
Gao et al. An improved XGBoost based on weighted column subsampling for object classification
CN104200222B (zh) 一种基于因子图模型的图片中对象识别方法
Chen et al. Eggshell biometrics for individual egg identification based on convolutional neural networks
Daykin et al. A comparison of unsupervised abnormality detection methods for interstitial lung disease
He et al. Fourier Descriptors Based Expert Decision Classification of Plug Seedlings
CN111581640A (zh) 一种恶意软件检测方法、装置及设备、存储介质
CN104573746A (zh) 基于磁共振成像的实蝇种类识别方法
Yebasse et al. Coffee Disease Visualization and Classification. Plants 2021, 10, 1257
Kerami et al. Classification of X-ray images using grid approach
Hettiarachchi et al. UrbanAgro: Utilizing advanced deep learning to support Sri Lankan urban farmers to detect and control common diseases in tomato plants
CN113780084B (zh) 基于生成式对抗网络的人脸数据扩增方法、电子设备和存储介质
Manga Plant Disease Classification using Residual Networks with MATLAB
He et al. Smart Diet Management Through Food Image and Cooking Recipe Analysis
Fernández-Robles et al. Evaluation of clustering configurations for object retrieval using sift features

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16865586

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16865586

Country of ref document: EP

Kind code of ref document: A1