WO2021073370A1 - Item detection method, apparatus and system, and computer-readable storage medium - Google Patents

Item detection method, apparatus and system, and computer-readable storage medium

Info

Publication number
WO2021073370A1
WO2021073370A1 (PCT/CN2020/116728; CN2020116728W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
item
identified
grayscale
detected
Prior art date
Application number
PCT/CN2020/116728
Other languages
English (en)
French (fr)
Inventor
郁昌存
王德鑫
Original Assignee
北京海益同展信息科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京海益同展信息科技有限公司
Publication of WO2021073370A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/02 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
    • G01N23/06 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and measuring the absorption
    • G01N23/10 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and measuring the absorption the material being confined in a container, e.g. in a luggage X-ray scanners
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V5/00 Prospecting or detecting by the use of ionising radiation, e.g. of natural or induced radioactivity
    • G01V5/20 Detecting prohibited goods, e.g. weapons, explosives, hazardous substances, contraband or smuggled objects
    • G01V5/22 Active interrogation, i.e. by irradiating objects or goods using external radiation sources, e.g. using gamma rays or cosmic rays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to an article detection method, device, system, and computer-readable storage medium.
  • security inspection machines mainly emit X-rays; according to the degree to which objects absorb the X-rays, the signals are processed into images of different colors and displayed on a screen. Security inspectors judge from experience whether prohibited items are present by viewing the X-ray fluoroscopy images.
  • an item detection method including: acquiring an image to be identified, generated by a security inspection machine scanning one or more items to be detected; converting the image to be identified into a grayscale image to be identified; and inputting the grayscale image to be identified into an item detection model to determine whether each of the one or more items to be detected in the grayscale image to be identified is a prohibited item.
  • converting the image to be identified into the grayscale image to be identified includes: removing noise from the image to be identified to obtain a denoised image; removing the background from the denoised image to obtain a target area image; and converting the target area image into a grayscale image according to its hue, saturation, value (HSV) features, as the grayscale image to be identified.
  • removing the background from the denoised image to obtain the target area image includes: inputting the denoised image into an image segmentation model to determine the category to which each pixel belongs, where the categories include a foreground category and a background category; extracting a mask image of the denoised image according to the category to which each pixel belongs; and performing a bitwise AND of pixel values between the mask image and the image to be identified to obtain the target area image.
  • converting the target area image into a grayscale image according to its HSV features, as the grayscale image to be identified, includes: converting the red, green and blue (RGB) values of each pixel in the target area image into HSV values; and converting the target area image into a grayscale image according to the hue value of each pixel, as the grayscale image to be identified.
  • inputting the grayscale image to be identified into the item detection model and determining whether each of the one or more items to be detected is a prohibited item includes: inputting the grayscale image to be identified into a feature extraction network in the item detection model to obtain the output image features of the grayscale image to be identified, where the feature extraction network is a lightweight neural network model; inputting the image features into a target detection network in the item detection model to obtain the output category information of each item to be detected; and determining whether each item to be detected is a prohibited item according to its category information.
  • the method further includes: in the case where it is determined that one or more items to be inspected contain prohibited items, sending an alarm message.
  • the item detection model also outputs the position information of each item to be detected in the grayscale image to be identified, and the method further includes: mapping identification information of each item to be detected onto the image to be identified according to the position information of each item in the grayscale image to be identified, and sending the image to be identified carrying the identification information to a display device for display.
  • the method further includes: generating a plurality of first training sample images, where each first training sample image contains one or more prohibited items and one or more non-prohibited items, and the category of each item among the one or more prohibited items and one or more non-prohibited items in each first training sample image is annotated; training the item detection model using the plurality of first training sample images; acquiring a plurality of real images generated by the security inspection machine as second training sample images, and annotating the category of each item in each second training sample image; and adjusting the parameters of the item detection model using the plurality of second training sample images to complete the training of the item detection model.
  • training the item detection model using the plurality of first training sample images includes: converting the plurality of first training sample images into a plurality of first grayscale images; inputting the plurality of first grayscale images into the feature extraction network in the item detection model to obtain the output image features of each first grayscale image, where the feature extraction network is pre-trained so that its parameters are determined; and training the target detection network in the item detection model using the image features of each first grayscale image.
  • adjusting the parameters of the item detection model using the plurality of second training sample images includes: converting the plurality of second training sample images into a plurality of second grayscale images; inputting the plurality of second grayscale images into the feature extraction network in the item detection model to obtain the output image features of each second grayscale image; and adjusting the parameters of the target detection network in the item detection model using the image features of each second grayscale image.
  • an item detection device including: a collection module for acquiring images to be identified, generated by a security inspection machine scanning one or more items to be detected; an image processing module for converting the image to be identified into a grayscale image to be identified; and a detection module for inputting the grayscale image to be identified into the item detection model to determine whether each of the one or more items to be detected in the grayscale image to be identified is a prohibited item.
  • the image processing module is used to remove noise from the image to be identified to obtain a denoised image; remove the background from the denoised image to obtain a target area image; and convert the target area image into a grayscale image according to its hue, saturation, value (HSV) features, as the grayscale image to be identified.
  • the image processing module is used to input the denoised image into the image segmentation model to determine the category to which each pixel belongs, where the categories include a foreground category and a background category; extract a mask image of the denoised image according to the category to which each pixel belongs; and perform a bitwise AND of pixel values between the mask image and the image to be identified to obtain the target area image.
  • the image processing module is used to convert the red, green and blue (RGB) values of each pixel in the target area image into HSV values, and to convert the target area image into a grayscale image according to the hue value of each pixel, as the grayscale image to be identified.
  • the detection module is used to input the grayscale image to be identified into the feature extraction network in the item detection model to obtain the output image features of the grayscale image to be identified, where the feature extraction network is a lightweight neural network model; input the image features into the target detection network in the item detection model to obtain the output category information of each item to be detected; and determine whether each item to be detected is a prohibited item according to its category information.
  • the device further includes an alarm module, which is used to issue an alarm message when it is determined that one or more items to be detected contain prohibited items.
  • the item detection model also outputs the position information of each item to be detected in the grayscale image to be identified, and the device further includes a display module for mapping identification information of each item to be detected onto the image to be identified according to the position information of each item in the grayscale image to be identified, and for sending the image to be identified carrying the identification information to a display device for display.
  • the device further includes a training module for: generating a plurality of first training sample images, where each first training sample image contains one or more prohibited items and one or more non-prohibited items, and the category of each item among the one or more prohibited items and one or more non-prohibited items in each first training sample image is annotated; training the item detection model using the plurality of first training sample images; acquiring a plurality of real images generated by the security inspection machine as second training sample images, and annotating the category of each item in each second training sample image; and adjusting the parameters of the item detection model using the plurality of second training sample images to complete the training of the item detection model.
  • the training module is used to convert the plurality of first training sample images into a plurality of first grayscale images; input the plurality of first grayscale images into the feature extraction network in the item detection model to obtain the output image features of each first grayscale image, where the feature extraction network is pre-trained so that its parameters are determined; train the target detection network in the item detection model using the image features of each first grayscale image; convert the plurality of second training sample images into a plurality of second grayscale images; input the plurality of second grayscale images into the feature extraction network in the item detection model to obtain the output image features of each second grayscale image; and adjust the parameters of the target detection network in the item detection model using the image features of each second grayscale image.
  • an item detection device including: a processor; and a memory coupled to the processor and storing instructions that, when executed by the processor, carry out the item detection method of any of the foregoing embodiments.
  • a computer-readable non-transitory storage medium on which a computer program is stored, where the program, when executed by a processor, implements the item detection method of any of the foregoing embodiments.
  • an item detection system including: the item detection device of any of the foregoing embodiments; and a security inspection machine for scanning one or more items to be detected to generate the image to be identified.
  • the system further includes a display device for receiving and displaying the image to be identified with identification information sent by the article detection device.
  • Fig. 1 shows a schematic flowchart of an article detection method according to some embodiments of the present disclosure.
  • Fig. 2 shows a schematic flowchart of an article detection method according to other embodiments of the present disclosure.
  • Fig. 3 shows a schematic structural diagram of an article detection device according to some embodiments of the present disclosure.
  • Fig. 4 shows a schematic structural diagram of an article detection device according to other embodiments of the present disclosure.
  • Fig. 5 shows a schematic structural diagram of an article detection device according to still other embodiments of the present disclosure.
  • Fig. 6 shows a schematic structural diagram of an article detection system according to some embodiments of the present disclosure.
  • Fig. 7 shows a schematic flowchart of an article detection method according to still other embodiments of the present disclosure.
  • Fig. 8 shows a schematic flowchart of an article detection method according to still other embodiments of the present disclosure.
  • Fig. 9 shows a schematic flowchart of an article detection method according to still other embodiments of the present disclosure.
  • Fig. 10 shows a schematic flowchart of an article detection method according to still other embodiments of the present disclosure.
  • Fig. 11 shows a schematic flowchart of an article detection method according to still other embodiments of the present disclosure.
  • Fig. 12 shows a schematic flowchart of an article detection method according to still other embodiments of the present disclosure.
  • a technical problem to be solved by the present disclosure is to improve the accuracy of the safety detection of articles.
  • FIG. 1 is a flowchart of some embodiments of the object detection method of the present disclosure. As shown in Fig. 1, the method of this embodiment includes: steps S102 to S106.
  • step S102 a to-be-identified image generated by the security inspection machine scanning one or more to-be-detected objects is acquired.
  • the security inspection machine can use X-rays or other forms to scan the one or more items to be inspected to form an image.
  • the security inspection machines deployed on the market are equipped with display screens, and the image information generated by scanning one or more items to be detected can be displayed on the screen synchronously through signals such as HDMI (High Definition Multimedia Interface) or VGA (Video Graphics Array) signals.
  • the signal generated by the security inspection machine can be acquired through the acquisition module of the article detection device of the present disclosure, and the signal can be further converted into a video source, and the image frames in the video source can be read to obtain the image to be identified.
  • one frame can be extracted as an image to be identified every preset number of frames, ensuring that the images to be identified cover all items to be detected that pass through the security inspection machine, so as to avoid missed detections.
  • OpenCV can be used to read the video source to obtain the image to be recognized.
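  • For illustration only (this sketch is not part of the disclosure), the frame-sampling rule described above can be expressed as a small function; the OpenCV capture loop in the trailing comment is the usual pattern, with the capture source index left as an assumption:

```python
# Hedged sketch: keep one frame every `step` frames so that the extracted
# images to be identified cover everything passing through the machine.

def sampled_indices(total_frames: int, step: int) -> list:
    """Indices of the frames kept when sampling one frame every `step` frames."""
    if step < 1:
        raise ValueError("step must be >= 1")
    return list(range(0, total_frames, step))

# With OpenCV (cv2), the corresponding real-time loop would look like:
#
#   cap = cv2.VideoCapture(0)   # hypothetical capture source (e.g. HDMI grabber)
#   idx = 0
#   while cap.isOpened():
#       ok, frame = cap.read()
#       if not ok:
#           break
#       if idx % step == 0:
#           handle(frame)       # `handle` is a placeholder for the later steps
#       idx += 1
```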
  • step S104 the image to be recognized is converted into a gray image to be recognized.
  • the image to be identified is, for example, an X-ray scan image. Because X-rays penetrate different items to different degrees, different items show obvious color differences distributed with a certain regularity. For example, organic materials such as food are displayed in orange; ceramics and the like are displayed in green; and metals are displayed in blue. Converting the image to be identified into a grayscale image preserves the color differences between items while reducing the data volume of each pixel value, which improves the efficiency of subsequent image recognition.
  • the image to be identified can be converted into a grayscale image according to the HSV (hue, saturation, lightness) characteristics of the image to be identified.
  • the image to be identified can be converted from the RGB (red, green, blue) color space to the HSV color space, and the color of each pixel can be converted using the H component.
  • the H parameter represents color information, that is, the position of the spectral color. This parameter is represented by an angle, and red, green, and blue are separated by 120 degrees.
  • converting to a grayscale image through the H parameter preserves the color differences between items as far as possible and improves the accuracy of subsequent recognition.
  • the RGB values of the pixels of the image to be identified can be converted into HSV values, and the image to be identified can be converted into grayscale images according to the hue value (H value) of the pixels.
  • the H value can be normalized to the range 0 to 255 to obtain the mapped grayscale image. For example, the gray value can be obtained by multiplying the ratio of the H value to 240 by 255.
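  • For illustration only, this hue-to-gray mapping (gray = H / 240 × 255) might be sketched with Python's standard colorsys module; the 0 to 240 hue scale follows the text and is otherwise an assumption:

```python
import colorsys

def hue_to_gray(r: int, g: int, b: int) -> int:
    """Gray value from a pixel's hue: gray = (H / 240) * 255, with H on 0..240."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)  # h in [0, 1)
    h_240 = h * 240.0  # hue rescaled to the 0..240 range assumed by the text
    return round(h_240 / 240.0 * 255.0)
```

Pure red maps to 0, pure green to 85, and pure blue to 170, so items of different colors remain distinguishable in the grayscale image.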
  • step S104 includes: steps S702 to S706.
  • step S702 the noise in the image to be identified is removed to obtain a denoised image.
  • Gaussian low-pass filtering or morphological operations can be used to preprocess the image to be recognized to remove noise points in the image to be recognized. Morphological operations such as image expansion, erosion, opening and closing operations, etc., and Gaussian low-pass filtering belong to the prior art, and will not be repeated here.
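  • As a hedged sketch of the morphological route (in practice OpenCV's cv2.morphologyEx would typically be used), a binary opening, i.e. erosion followed by dilation, can be written in NumPy; treating the image border as foreground during erosion is a simplifying assumption of this sketch:

```python
import numpy as np

def _slide(img, k, pad_value, combine, init):
    """Apply `combine` across every k x k neighborhood (helper for erode/dilate)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="constant", constant_values=pad_value)
    out = np.full_like(img, init)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out = combine(out, padded[dy:dy + h, dx:dx + w])
    return out

def erode(img, k=3):
    # a pixel survives only if its entire k x k neighborhood is foreground
    return _slide(img, k, 1, np.minimum, 1)

def dilate(img, k=3):
    # a pixel is set if any pixel in its k x k neighborhood is foreground
    return _slide(img, k, 0, np.maximum, 0)

def opening(img, k=3):
    """Erosion followed by dilation: removes specks smaller than the element."""
    return dilate(erode(img, k), k)
```

An opening with a 3 x 3 element deletes isolated noise pixels while larger foreground regions keep their original extent.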
  • step S704 the background in the denoised image is removed to obtain the target area image.
  • step S706 the target area image is converted into a grayscale image according to the HSV characteristics of the target area image, which is used as the grayscale image to be identified.
  • step S704 includes: steps S802 to S806.
  • step S802 the denoised image is input into the image segmentation model, and the category to which each pixel belongs is determined.
  • the categories include: foreground category or background category; in step S804, the mask image of the denoised image is extracted according to the category to which each pixel belongs;
  • step S806 the mask image and the image to be identified are subjected to bitwise AND operation of pixel values to obtain an image of the target area.
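  • For illustration, steps S802 to S806 might be sketched as follows; the foreground category id and the 0/255 mask convention are assumptions of this sketch, and cv2.bitwise_and would normally perform the final step:

```python
import numpy as np

def mask_from_categories(categories: np.ndarray, foreground_id: int = 1) -> np.ndarray:
    """Build a 0/255 mask from the per-pixel categories output by the
    segmentation model (`foreground_id` marks the foreground category)."""
    return np.where(categories == foreground_id, 255, 0).astype(np.uint8)

def apply_mask(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Bitwise AND of pixel values: 255 & v == v keeps foreground pixels,
    0 & v == 0 blanks the background."""
    return image & mask[..., None]  # broadcast the mask over the color channels
```

mask_from_categories turns the segmentation output into a mask image; apply_mask keeps the pixels of the target area and blanks the rest.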
  • the image segmentation model may be, for example, an SVM (Support Vector Machine) model, or an existing model such as an FCN (Fully Convolutional Network), which will not be repeated here.
  • SVM: Support Vector Machine
  • FCN: Fully Convolutional Network
  • step S706 includes steps S902 to S904.
  • step S902 the RGB value of each pixel in the target area image is converted into an HSV value;
  • step S904 the target area image is converted into a grayscale image according to the tone value of each pixel, as the grayscale image to be identified.
  • step S106 the grayscale image to be identified is input into the object detection model, and it is determined whether each of the one or more objects to be detected in the grayscale image to be identified is a prohibited item.
  • the item detection model may include a feature extraction network and a target detection network.
  • step S106 includes: steps S1002 to S1006.
  • step S1002 input the to-be-identified gray image into the feature extraction network in the article detection model to obtain the output image features of the to-be-identified gray image;
  • step S1004 the image features are input into the target detection network in the item detection model to obtain the output position information and category information of each of the one or more items to be detected.
  • step S1006 it is determined whether the item to be detected is a prohibited item according to the category information of each item to be detected.
  • the feature extraction network can adopt a lightweight neural network model, which can reduce the amount of model calculation and increase the processing speed.
  • the item detection model may be, for example, a MobileNet-SSD model, but is not limited to this example.
  • MobileNet: a lightweight neural network model for mobile terminals
  • SSD: Single Shot MultiBox Detector
  • the item detection model can use the existing model, which will not be repeated here.
  • the position information of the object to be detected is, for example, coordinate information.
  • the category information can be the categories determined when the item detection model is trained. For example, the items to be detected can be divided directly into prohibited items and non-prohibited items; alternatively, the actual category of each item to be detected can be identified, such as food, knives, etc., and then mapped to prohibited or non-prohibited according to the actual category.
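  • A minimal sketch of the mapping from actual categories to the prohibited / non-prohibited decision; the category names and the prohibited set below are hypothetical, since the real categories are fixed when the model is trained:

```python
# Hypothetical category names; the disclosure does not enumerate them.
PROHIBITED_CATEGORIES = {"knife", "gun", "explosive"}

def is_prohibited(category: str) -> bool:
    """Map an actual item category to the prohibited / non-prohibited decision."""
    return category in PROHIBITED_CATEGORIES
```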
  • the item detection model can be pre-trained offline, and the trained model can be used for real-time item detection.
  • the training process of the item detection model includes, for example, steps S1102 to S1108.
  • step S1102 a plurality of first training sample images are generated, where each first training sample image contains one or more prohibited items and one or more non-prohibited items, and the category of each item among the one or more prohibited items and one or more non-prohibited items in each first training sample image is annotated.
  • step S1104 use a plurality of first training sample images to train the item detection model;
  • step S1106 a plurality of real images generated by the security inspection machine are acquired as second training sample images, and the category of each item in each second training sample image is annotated.
  • step S1108, a plurality of second training sample images are used to adjust the parameters of the item detection model to complete the training of the item detection model.
  • step S1104 includes: in step S1202, converting the plurality of first training sample images into a plurality of first grayscale images; in step S1204, inputting the plurality of first grayscale images into the feature extraction network in the item detection model to obtain the output image features of each first grayscale image, where the feature extraction network is pre-trained so that its parameters are determined; and in step S1206, training the target detection network in the item detection model using the image features of each first grayscale image.
  • step S1108 includes: in step S1208, converting the plurality of second training sample images into a plurality of second grayscale images; in step S1210, inputting the plurality of second grayscale images into the feature extraction network in the item detection model to obtain the output image features of each second grayscale image; and in step S1212, adjusting the parameters of the target detection network in the item detection model using the image features of each second grayscale image.
  • the first training sample images can be generated from collected images of various items. For example, images of a variety of prohibited items and a variety of non-prohibited items can be collected with reference to real-world conditions (for example, by grabbing such images from the Internet).
  • a plurality of first training sample images may be generated through image fusion technology, where each first training sample image contains one or more prohibited items and one or more non-prohibited items. Before fusion, the image of each item among the prohibited and non-prohibited items can be preprocessed by scaling, rotation, blurring, color conversion, and the like, so that the generated first training sample images are closer to real images.
  • the feature extraction network in the item detection model can be pre-trained using public data sets (such as the COCO data set) to obtain its network parameters. Further, the parameters of the feature extraction network can be retained by means of transfer learning, and the plurality of first training sample images can be used to train the target detection network in the item detection model. For example, each first training sample image is input into the item detection model to obtain the category of each item in that image; a first loss function is calculated from the output category of each item and the annotated category of each item; the parameters of the target detection network are adjusted according to the first loss function; and the above process is repeated until a preset condition is met, yielding a preliminarily trained item detection model.
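  • The retain-the-backbone, train-the-head idea can be illustrated with a toy NumPy example in which a fixed random projection stands in for the pre-trained feature extraction network and a least-squares linear head stands in for the target detection network; every name and shape here is a stand-in, not the disclosed model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "feature extraction network": a fixed random projection with a ReLU,
# standing in for the backbone pre-trained on a public data set. Its
# parameters are retained and never updated below, mirroring transfer learning.
W_FEAT = rng.normal(size=(8, 4))

def extract_features(x: np.ndarray) -> np.ndarray:
    return np.maximum(x @ W_FEAT, 0.0)

def train_head(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fit only the stand-in 'target detection network' (a linear
    least-squares head) on top of the frozen features."""
    feats = extract_features(x)
    w, *_ = np.linalg.lstsq(feats, y, rcond=None)
    return w

# Fine-tuning with the second (real) sample set would call train_head again,
# starting from the head learned on the generated samples.
```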
  • public data sets: such as the COCO data set
  • a plurality of real images generated by the security inspection machine are then used as second training sample images to fine-tune the item detection model. That is, on the basis of the preliminarily trained item detection model, the second training sample images are input for re-training, finally yielding the trained item detection model.
  • the first training sample images and the second training sample images can also be processed according to the methods of the foregoing embodiments to obtain corresponding grayscale images, which are then used to train the item detection model; this will not be repeated here.
  • the above training process enables effective training of the item detection model even when few real images generated by the security inspection machine are available, while ensuring the detection accuracy of the item detection model.
  • if sufficient real images generated by the security inspection machine are available, the real images can be used directly to complete the training of the item detection model.
  • in the above embodiments, the image to be identified, generated by the security inspection machine scanning the items to be detected, is acquired, converted into a grayscale image, and then input into the item detection model to identify whether each item to be detected is a prohibited item.
  • the method of the foregoing embodiment adopts a machine learning method to perform image recognition on the image to be recognized, which can accurately recognize a large number of images in real time without interruption, and improve the accuracy of item safety detection.
  • different items to be detected in the images generated by the security inspection machine have obvious color differences. Converting the images to be identified into grayscale images reduces the computational complexity of the detection process while preserving these differences, improving the efficiency of security inspection.
  • FIG. 2 is a flowchart of other embodiments of the object detection method of the present disclosure. As shown in FIG. 2, the method of this embodiment includes: steps S202 to S210.
  • step S202 a to-be-identified image generated by the security inspection machine scanning one or more to-be-detected objects is acquired.
  • step S204 the image to be recognized is converted into a gray image to be recognized.
  • step S206 the grayscale image to be identified is input into the object detection model, and it is determined whether each of the one or more objects to be detected in the grayscale image to be identified is a prohibited item.
  • Steps S202 to S206 can be explained with reference to the description of the foregoing embodiment.
  • In step S208, when it is determined that the one or more items to be detected contain a prohibited item, an alarm message is issued.
  • When a prohibited item is detected, an alarm can be raised automatically to alert the security inspector, and a stop message can also be sent to the security inspection machine so that the prohibited item can be dealt with.
  • In step S210, according to the position information of each item to be detected in the grayscale image to be identified, identification information indicating whether each item is a prohibited item is mapped onto the image to be identified, and the image to be identified with the identification information is sent to the display device for display.
  • Steps S208 and S210 can be executed in parallel, in no particular order.
  • The position of each item to be detected in the grayscale image to be identified corresponds to its position in the image to be identified, so the identification information of prohibited and non-prohibited items can be mapped directly onto the image to be identified. The image with the identification information can then be sent to the display device of the security inspection machine for the security inspector to view.
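As a sketch of this mapping step: because the grayscale image and the original image share the same coordinate system, a detection box can be drawn directly onto the original image at the coordinates the model reports. The minimal numpy example below (the function name and colors are our own illustration, not from the disclosure) draws a 1-pixel box border; a real deployment would also render a text label such as "prohibited".

```python
import numpy as np

def draw_box(image, x1, y1, x2, y2, color=(255, 0, 0)):
    """Draw a 1-pixel rectangle border on an RGB image array (H, W, 3)."""
    out = image.copy()
    out[y1, x1:x2 + 1] = color   # top edge
    out[y2, x1:x2 + 1] = color   # bottom edge
    out[y1:y2 + 1, x1] = color   # left edge
    out[y1:y2 + 1, x2] = color   # right edge
    return out

# A detection box found in the grayscale image maps 1:1 onto the color image.
frame = np.zeros((8, 8, 3), dtype=np.uint8)
marked = draw_box(frame, 1, 1, 5, 5)       # red box marking a "prohibited" item
```

The marked image, not the grayscale working copy, is what would be forwarded to the display device.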
  • An article detection device can be used to implement the methods of the above embodiments.
  • The article detection device can be externally connected to a security inspection system (security inspection machine, display device, etc.).
  • The X-ray image of the items is acquired through the acquisition module and passed to the image processing module and the detection module.
  • The detection module uses a machine learning model to detect the images. When a prohibited item is detected, the alarm mechanism is triggered to notify the staff to recheck.
  • As a whole, the item detection device is an embedded module that can be connected to any security inspection machine, handing the originally manual decision-making over to an algorithm for intelligent identification and alarm. This enables intelligent retrofitting of existing gates, improves item detection efficiency and accuracy, and reduces labor costs.
  • the present disclosure also provides an article detection device, which is described below with reference to FIG. 3.
  • FIG. 3 is a structural diagram of some embodiments of the article detection device of the present disclosure. As shown in FIG. 3, the device 30 of this embodiment includes: an acquisition module 302, an image processing module 304, and a detection module 306.
  • the acquisition module 302 is configured to acquire the to-be-identified image generated by the security inspection machine scanning one or more to-be-detected objects.
  • the image processing module 304 is used to convert the image to be recognized into a grayscale image to be recognized.
  • The image processing module 304 is used to remove noise from the image to be identified to obtain a denoised image; remove the background from the denoised image to obtain a target area image; and convert the target area image into a grayscale image, as the grayscale image to be identified, according to the hue, saturation, and value (HSV) features of the target area image.
  • The image processing module 304 is used to input the denoised image into an image segmentation model to determine the category to which each pixel belongs, the categories including a foreground category and a background category; extract the mask image of the denoised image according to the category of each pixel; and perform a bitwise AND of pixel values between the mask image and the image to be identified to obtain the target area image.
  • The image processing module 304 is used to convert the red, green, and blue (RGB) values of each pixel in the target area image into HSV values, and convert the target area image into a grayscale image, as the grayscale image to be identified, according to the hue value of each pixel.
  • the detection module 306 is configured to input the gray image to be identified into the object detection model to determine whether each of the one or more objects to be detected in the gray image to be identified is a prohibited item.
  • The detection module 306 is used to input the grayscale image to be recognized into the feature extraction network in the item detection model to obtain the output image features of the grayscale image, where the feature extraction network is a lightweight neural network model; input the image features into the target detection network in the item detection model to obtain the output category information of each item to be detected; and determine whether each item is a prohibited item according to its category information.
  • the device 30 further includes: an alarm module 308, configured to issue an alarm message when it is determined that one or more items to be detected contain prohibited items.
  • The object detection model also outputs the position information of each object to be detected in the grayscale image to be identified. The device 30 also includes: a display module 310, which is used to map, according to the position information of each item in the grayscale image to be identified, the identification information of whether each item is a prohibited item onto the image to be identified, and send the image with the identification information to the display device for display.
  • The device 30 further includes: a training module 312 for generating a plurality of first training sample images, where each first training sample image contains one or more prohibited items and one or more non-prohibited items, and the category of each of these items is labeled in each first training sample image; training the item detection model with the plurality of first training sample images; acquiring a plurality of real images generated by the security inspection machine as second training sample images, and labeling the category of each item in each second training sample image; and adjusting the item detection model parameters with the plurality of second training sample images to complete training of the item detection model.
  • The training module is used to convert the first training sample images into first grayscale images; input the first grayscale images into the feature extraction network in the item detection model to obtain the output image features of each first grayscale image, where the feature extraction network has parameters determined by pre-training; train the target detection network in the item detection model with the image features of each first grayscale image; convert the second training sample images into second grayscale images; input the second grayscale images into the feature extraction network to obtain the output image features of each second grayscale image; and adjust the parameters of the target detection network with the image features of each second grayscale image.
  • the article detection devices in the embodiments of the present disclosure can be implemented by various computing devices or computer systems, which are described below with reference to FIG. 4 and FIG. 5.
  • Fig. 4 is a structural diagram of some embodiments of the article detection device of the present disclosure.
  • the device 40 of this embodiment includes: a memory 410 and a processor 420 coupled to the memory 410, and the processor 420 is configured to execute any implementation in the present disclosure based on instructions stored in the memory 410 The item detection method in the example.
  • the memory 410 may include, for example, a system memory, a fixed non-volatile storage medium, and the like.
  • the system memory for example, stores an operating system, an application program, a boot loader (Boot Loader), a database, and other programs.
  • Fig. 5 is a structural diagram of other embodiments of the article detection device of the present disclosure.
  • the apparatus 50 of this embodiment includes: a memory 510 and a processor 520, which are similar to the memory 410 and the processor 420, respectively. It may also include an input/output interface 530, a network interface 540, a storage interface 550, and so on. These interfaces 530, 540, 550 and the memory 510 and the processor 520 may be connected via a bus 560, for example.
  • the input and output interface 530 provides a connection interface for input and output devices such as a display, a mouse, a keyboard, and a touch screen.
  • the network interface 540 provides a connection interface for various networked devices, for example, it can be connected to a database server or a cloud storage server.
  • the storage interface 550 provides a connection interface for external storage devices such as SD cards and U disks.
  • the present disclosure also provides an article detection system, which is described below with reference to FIG. 6.
  • FIG. 6 is a structural diagram of some embodiments of the object detection system of the present disclosure. As shown in FIG. 6, the system 6 of this embodiment includes: the article detection device 30/40/50 of any of the foregoing embodiments, and a security inspection machine 62.
  • The security inspection machine 62 is used to scan one or more objects to be detected to generate the images to be identified.
  • the system 6 further includes a display device 64 for receiving and displaying the image to be identified with identification information sent by the article detection device.
  • The embodiments of the present disclosure can be provided as a method, a system, or a computer program product. Therefore, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present disclosure may take the form of a computer program product implemented on one or more computer-usable non-transitory storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, such that the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Immunology (AREA)
  • Biochemistry (AREA)
  • Analytical Chemistry (AREA)
  • Pathology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Chemical & Material Sciences (AREA)
  • High Energy & Nuclear Physics (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geophysics (AREA)
  • Image Analysis (AREA)

Abstract

An item detection method, device, system, and computer-readable storage medium, relating to the field of computer technology. The method includes: acquiring an image to be identified generated by a security inspection machine scanning one or more items to be detected (S102); converting the image to be identified into a grayscale image to be identified (S104); and inputting the grayscale image to be identified into an item detection model to determine whether each of the one or more items to be detected in the grayscale image is a prohibited item (S106).

Description

Item detection method, device, system, and computer-readable storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on and claims priority to CN application No. 201910981844.1, filed on October 16, 2019, the disclosure of which is incorporated into this application by reference in its entirety.
Technical Field
The present disclosure relates to the field of computer technology, and in particular to an item detection method, device, system, and computer-readable storage medium.
Background
With economic development, transportation has become more developed and travel more convenient. At the same time, personal safety issues cannot be ignored. To ensure travel safety, security inspection machines are installed at the entrances of rail transit stations, railway stations, customs, logistics centers, and other locations to inspect carried or checked luggage and prevent dangerous items from being brought in.
Current security inspection machines mainly emit X-rays; according to the degree to which objects absorb the X-rays, signal processing renders images in different colors, which are shown on a screen. A security inspector views the X-ray images and judges from experience whether prohibited items are present.
Summary
According to some embodiments of the present disclosure, an item detection method is provided, including: acquiring an image to be identified generated by a security inspection machine scanning one or more items to be detected; converting the image to be identified into a grayscale image to be identified; and inputting the grayscale image into an item detection model to determine whether each of the one or more items to be detected in the grayscale image is a prohibited item.
In some embodiments, converting the image to be identified into a grayscale image to be identified includes: removing noise from the image to be identified to obtain a denoised image; removing the background from the denoised image to obtain a target area image; and converting the target area image into a grayscale image, as the grayscale image to be identified, according to its hue, saturation, and value (HSV) features.
In some embodiments, removing the background from the denoised image to obtain the target area image includes: inputting the denoised image into an image segmentation model to determine the category of each pixel, the categories including a foreground category and a background category; extracting a mask image of the denoised image according to the category of each pixel; and performing a bitwise AND of pixel values between the mask image and the image to be identified to obtain the target area image.
In some embodiments, converting the target area image into a grayscale image according to its HSV features includes: converting the red, green, and blue (RGB) values of each pixel in the target area image into HSV values; and converting the target area image into a grayscale image, as the grayscale image to be identified, according to the hue value of each pixel.
In some embodiments, inputting the grayscale image into the item detection model and determining whether each item to be detected is a prohibited item includes: inputting the grayscale image into a feature extraction network in the item detection model to obtain output image features of the grayscale image, the feature extraction network being a lightweight neural network model; inputting the image features into a target detection network in the item detection model to obtain output category information of each item to be detected; and determining, from the category information, whether each item to be detected is a prohibited item.
In some embodiments, the method further includes: issuing an alarm message when it is determined that the one or more items to be detected contain a prohibited item.
In some embodiments, the item detection model also outputs position information of each item to be detected in the grayscale image, and the method further includes: according to the position information of each item in the grayscale image, mapping identification information indicating whether each item is prohibited onto the image to be identified, and sending the image to be identified with the identification information to a display device for display.
In some embodiments, the method further includes: generating a plurality of first training sample images, each containing one or more prohibited items and one or more non-prohibited items, and labeling the category of each item in each first training sample image; training the item detection model with the plurality of first training sample images; acquiring a plurality of real images generated by a security inspection machine as second training sample images, and labeling the category of each item in each second training sample image; and adjusting the parameters of the item detection model with the plurality of second training sample images to complete training of the item detection model.
In some embodiments, training the item detection model with the first training sample images includes: converting the first training sample images into first grayscale images; inputting the first grayscale images into the feature extraction network in the item detection model to obtain the output image features of each first grayscale image, the feature extraction network having parameters determined by pre-training; and training the target detection network in the item detection model with the image features of each first grayscale image. Adjusting the model parameters with the second training sample images includes: converting the second training sample images into second grayscale images; inputting the second grayscale images into the feature extraction network to obtain the output image features of each second grayscale image; and adjusting the parameters of the target detection network in the item detection model with the image features of each second grayscale image.
According to other embodiments of the present disclosure, an item detection device is provided, including: an acquisition module configured to acquire an image to be identified generated by a security inspection machine scanning one or more items to be detected; an image processing module configured to convert the image to be identified into a grayscale image to be identified; and a detection module configured to input the grayscale image into an item detection model to determine whether each of the one or more items to be detected in the grayscale image is a prohibited item.
In some embodiments, the image processing module is configured to remove noise from the image to be identified to obtain a denoised image; remove the background from the denoised image to obtain a target area image; and convert the target area image into a grayscale image, as the grayscale image to be identified, according to its hue, saturation, and value (HSV) features.
In some embodiments, the image processing module is configured to input the denoised image into an image segmentation model to determine the category of each pixel, the categories including a foreground category and a background category; extract a mask image of the denoised image according to the category of each pixel; and perform a bitwise AND of pixel values between the mask image and the image to be identified to obtain the target area image.
In some embodiments, the image processing module is configured to convert the red, green, and blue (RGB) values of each pixel in the target area image into HSV values, and convert the target area image into a grayscale image, as the grayscale image to be identified, according to the hue value of each pixel.
In some embodiments, the detection module is configured to input the grayscale image into a feature extraction network in the item detection model to obtain output image features of the grayscale image, the feature extraction network being a lightweight neural network model; input the image features into a target detection network in the item detection model to obtain output category information of each item to be detected; and determine, from the category information, whether each item to be detected is a prohibited item.
In some embodiments, the device further includes: an alarm module configured to issue an alarm message when it is determined that the one or more items to be detected contain a prohibited item.
In some embodiments, the item detection model also outputs position information of each item to be detected in the grayscale image, and the device further includes: a display module configured to map, according to the position information of each item in the grayscale image, identification information indicating whether each item is prohibited onto the image to be identified, and send the image to be identified with the identification information to a display device for display.
In some embodiments, the device further includes: a training module configured to generate a plurality of first training sample images, each containing one or more prohibited items and one or more non-prohibited items, and label the category of each item in each first training sample image; train the item detection model with the plurality of first training sample images; acquire a plurality of real images generated by a security inspection machine as second training sample images, and label the category of each item in each second training sample image; and adjust the parameters of the item detection model with the plurality of second training sample images to complete training of the item detection model.
In some embodiments, the training module is configured to convert the first training sample images into first grayscale images; input the first grayscale images into the feature extraction network in the item detection model to obtain the output image features of each first grayscale image, the feature extraction network having parameters determined by pre-training; train the target detection network with the image features of each first grayscale image; convert the second training sample images into second grayscale images; input the second grayscale images into the feature extraction network to obtain the output image features of each second grayscale image; and adjust the parameters of the target detection network with the image features of each second grayscale image.
According to still other embodiments of the present disclosure, an item detection device is provided, including: a processor; and a memory coupled to the processor for storing instructions which, when executed by the processor, cause the processor to perform the item detection method of any of the foregoing embodiments.
According to further embodiments of the present disclosure, a computer-readable non-transitory storage medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the item detection method of any of the foregoing embodiments.
According to still further embodiments of the present disclosure, an item detection system is provided, including: the item detection device of any of the foregoing embodiments, and a security inspection machine configured to scan one or more items to be detected to generate an image to be identified.
In some embodiments, the system further includes: a display device configured to receive and display the image to be identified, with identification information, sent by the item detection device.
Other features and advantages of the present disclosure will become clear from the following detailed description of exemplary embodiments of the present disclosure with reference to the accompanying drawings.
Brief Description of the Drawings
The drawings described here are provided for further understanding of the present disclosure and constitute a part of this application. The exemplary embodiments of the present disclosure and their descriptions are intended to explain the present disclosure and do not unduly limit it. In the drawings:
FIG. 1 is a flowchart of the item detection method of some embodiments of the present disclosure.
FIG. 2 is a flowchart of the item detection method of other embodiments of the present disclosure.
FIG. 3 is a structural diagram of the item detection device of some embodiments of the present disclosure.
FIG. 4 is a structural diagram of the item detection device of other embodiments of the present disclosure.
FIG. 5 is a structural diagram of the item detection device of still other embodiments of the present disclosure.
FIG. 6 is a structural diagram of the item detection system of some embodiments of the present disclosure.
FIG. 7 is a flowchart of the item detection method of still other embodiments of the present disclosure.
FIG. 8 is a flowchart of the item detection method of further embodiments of the present disclosure.
FIG. 9 is a flowchart of the item detection method of still other embodiments of the present disclosure.
FIG. 10 is a flowchart of the item detection method of further embodiments of the present disclosure.
FIG. 11 is a flowchart of the item detection method of still other embodiments of the present disclosure.
FIG. 12 is a flowchart of the item detection method of further embodiments of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part, not all, of the embodiments of the present disclosure. The following description of one or more exemplary embodiments is merely illustrative and in no way limits the present disclosure or its application or use. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative work fall within the protection scope of the present disclosure.
The inventors found that item security inspection currently relies mainly on security personnel judging from experience. Human factors are uncontrollable: inexperience, fatigue, or momentary negligence may all cause dangerous items to be missed, posing a safety risk. One technical problem addressed by the present disclosure is: improving the accuracy of item security detection.
This solution is proposed to address the risk of missed detections when item security inspection relies mainly on the experience of security personnel. Some embodiments of the item detection method of the present disclosure are described below with reference to FIG. 1.
FIG. 1 is a flowchart of some embodiments of the item detection method of the present disclosure. As shown in FIG. 1, the method of this embodiment includes steps S102 to S106.
In step S102, an image to be identified, generated by a security inspection machine scanning one or more items to be detected, is acquired.
After one or more items to be detected are conveyed into the security inspection machine, the machine may scan them with X-rays or in other ways to form an image. Security inspection machines currently deployed on the market are equipped with display screens, and the image information generated by scanning one or more items can be displayed on the screen synchronously via an HDMI (High Definition Multimedia Interface) or VGA (Video Graphics Array) signal. The acquisition module of the item detection device of the present disclosure can capture the signal generated by the security inspection machine, convert it into a video source, and read image frames from the video source to obtain images to be identified. For example, one frame may be extracted from the video source every preset number of frames as an image to be identified, so long as the extracted images cover all items passing through the machine and no item is missed. For example, OpenCV can be used to read the video source and obtain the images to be identified.
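The frame-sampling idea above can be sketched as follows. In practice the frames would come from `cv2.VideoCapture` over the HDMI/VGA capture source; here a plain iterable stands in for the decoded video stream, and the sampling interval is a hypothetical parameter chosen so that consecutive samples still cover every item passing the scanner.

```python
def sample_frames(frames, interval=5):
    """Yield every `interval`-th frame from a stream, starting at frame 0.

    `interval` must be small enough that successive samples overlap,
    so every item passing the scanner appears in at least one sample.
    """
    for i, frame in enumerate(frames):
        if i % interval == 0:
            yield frame

stream = range(20)                      # stand-in for decoded video frames
picked = list(sample_frames(stream, interval=5))
print(picked)                           # frames 0, 5, 10, 15
```

With a real capture device, `frames` would be the sequence of arrays returned by `cap.read()` in a loop.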
In step S104, the image to be identified is converted into a grayscale image to be identified.
The image to be identified is, for example, an X-ray scan image. Because X-rays penetrate different items to different degrees, different items show clear color differences distributed in a regular pattern. For example, organic matter such as food appears orange; ceramics and the like appear green; metals appear blue. Converting the image to be identified into a grayscale image preserves the color differences between items while reducing the data volume of pixel values, improving the efficiency of subsequent image recognition.
Further, the image to be identified can be converted into a grayscale image according to its HSV (hue, saturation, value) features. The image can be converted from the RGB (red, green, blue) color space to the HSV color space, and the H component used to convert the color of each pixel. The H parameter represents color information, i.e., the position in the color spectrum, expressed as an angle, with red, green, and blue separated by 120 degrees. Converting to grayscale via the H parameter preserves the color differences between items as much as possible and improves the accuracy of subsequent recognition.
That is, the RGB values of the pixels of the image to be identified are first converted into HSV values, and the image is converted into a grayscale image according to the hue (H) value of each pixel. The H value can be normalized to 0-255 to obtain the mapped grayscale image. For example, the grayscale value can be obtained by multiplying the ratio of the H value to 240 by 255.
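The hue-based grayscale mapping described above can be sketched with the standard library's `colorsys`. Note an assumption: the disclosure's H scale appears to run 0-240 (hence the ratio to 240), while `colorsys.rgb_to_hsv` returns hue as a fraction in [0, 1), so the same mapping reduces to multiplying that fraction by 255. The per-pixel loop is for clarity, not speed; a production pipeline would use a vectorized conversion such as OpenCV's `cvtColor`.

```python
import colorsys

import numpy as np

def hue_to_gray(rgb_image):
    """Map each RGB pixel to a gray value derived from its hue.

    colorsys returns hue in [0, 1); scaling an H value on a 0-240 scale
    by 255/240 is equivalent to multiplying that fraction by 255.
    """
    h, w, _ = rgb_image.shape
    gray = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            r, g, b = rgb_image[y, x] / 255.0
            hue, _, _ = colorsys.rgb_to_hsv(r, g, b)
            gray[y, x] = int(round(hue * 255))
    return gray

img = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)  # red, blue
print(hue_to_gray(img))   # red -> 0, blue -> 170
```

Pure red (hue 0) maps to gray 0, and pure blue (hue 2/3) maps to 170, so metallic (blue) and organic (orange) regions stay well separated in the grayscale image.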
In some embodiments, as shown in FIG. 7, step S104 includes steps S702 to S706. Before the image to be identified is converted into a grayscale image, in step S702, noise is removed from the image to obtain a denoised image. For example, the image can be preprocessed by Gaussian low-pass filtering or morphological operations to remove noise points. Morphological operations such as dilation, erosion, and opening/closing, as well as Gaussian low-pass filtering, are existing techniques and are not described further here.
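As one concrete denoising choice (the disclosure names Gaussian low-pass filtering and morphological operations, which in practice would be `cv2.GaussianBlur` or `cv2.morphologyEx`), the pure-numpy sketch below applies a 3x3 median filter, which removes isolated salt-noise pixels of the kind typical in captured video frames:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter built from edge-padded shifted views (pure numpy)."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0).astype(img.dtype)

noisy = np.full((5, 5), 100, dtype=np.uint8)
noisy[2, 2] = 255                      # isolated salt-noise pixel
clean = median_filter3(noisy)
print(clean[2, 2])                     # noise pixel replaced by 100
```

The isolated outlier is replaced by its neighborhood median while the uniform background is left untouched.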
Further, in step S704, the background is removed from the denoised image to obtain a target area image. In step S706, the target area image is converted into a grayscale image, as the grayscale image to be identified, according to its HSV features. In some embodiments, as shown in FIG. 8, step S704 includes steps S802 to S806. In step S802, the denoised image is input into an image segmentation model to determine the category of each pixel, the categories including a foreground category and a background category. In step S804, a mask image of the denoised image is extracted according to the category of each pixel. In step S806, a bitwise AND of pixel values is performed between the mask image and the image to be identified to obtain the target area image.
The image segmentation model can be, for example, an existing model such as an SVM (support vector machine) or an FCN (Fully Convolutional Network), not described further here. Using the image segmentation model to separate the foreground and background of the image to be identified yields a mask image; fusing the mask image with the image to be identified yields a target area image that contains only the one or more items to be detected, with the background and noise removed, improving the accuracy of subsequent item detection.
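The mask-and-merge step (S804-S806) can be sketched directly: the segmentation model's per-pixel foreground/background decision becomes a 0/255 mask, and a bitwise AND with the original image zeroes out the background. Here a hard-coded array stands in for the segmentation model's output.

```python
import numpy as np

# Stand-in for the segmentation model's output: 1 = foreground pixel.
fg = np.array([[0, 1],
               [0, 1]], dtype=np.uint8)
mask = np.repeat((fg * 255)[:, :, None], 3, axis=2)   # 0 or 255 per channel

image = np.full((2, 2, 3), 200, dtype=np.uint8)       # image to be identified
target = np.bitwise_and(image, mask)                  # background zeroed out

print(target[:, :, 0])    # column 0 -> 0 (background), column 1 -> 200
```

Because 255 is all ones in binary, ANDing with 255 keeps a pixel value unchanged and ANDing with 0 clears it, which is exactly the fusion the disclosure describes.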
The process of converting the target area image into the grayscale image to be identified can refer to the foregoing embodiments. For example, as shown in FIG. 9, step S706 includes steps S902 to S904. In step S902, the RGB values of each pixel in the target area image are converted into HSV values; in step S904, the target area image is converted into a grayscale image, as the grayscale image to be identified, according to the hue value of each pixel.
In step S106, the grayscale image to be identified is input into the item detection model to determine whether each of the one or more items to be detected in the grayscale image is a prohibited item.
In some embodiments, the item detection model may include a feature extraction network and a target detection network. As shown in FIG. 10, step S106 includes steps S1002 to S1006. In step S1002, the grayscale image is input into the feature extraction network of the item detection model to obtain output image features of the grayscale image. In step S1004, the image features are input into the target detection network of the model to obtain output position information and category information for each of the one or more items to be detected. In step S1006, whether each item is a prohibited item is determined from its category information. Because the grayscale image has relatively uniform color and a small data volume of pixel values, the feature extraction network can be a lightweight neural network model, reducing computation and increasing processing speed. For example, the item detection model can be a MobileNet-SSD model, though it is not limited to this example: MobileNet (a lightweight neural network for mobile devices) serves as the feature extraction network, and SSD (Single Shot MultiBox Detector) serves as the target detection network. Existing models can be used and are not described further here.
The position information of an item to be detected is, for example, coordinate information. The category information can be the categories determined when the item detection model was trained. For example, items can be classified as prohibited or non-prohibited, or the actual category of each item (e.g., food, knives) can be recognized and then mapped to prohibited or non-prohibited.
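The final category-to-decision mapping can be sketched as a small post-processing step. The detections would come from the SSD head after box decoding; the class names, score threshold, and tuple layout below are illustrative assumptions, not the disclosure's actual label set.

```python
# Hypothetical label set; a real deployment would use the trained model's own.
PROHIBITED = {"knife", "gun", "lighter"}

def flag_detections(detections, score_threshold=0.5):
    """Keep confident detections and tag each as prohibited or not.

    `detections` are (class_name, score, box) tuples such as an SSD-style
    detector might output after decoding and non-maximum suppression.
    """
    results = []
    for name, score, box in detections:
        if score >= score_threshold:
            results.append((name, name in PROHIBITED, box))
    return results

dets = [("knife", 0.9, (10, 10, 50, 60)),
        ("food", 0.8, (70, 20, 90, 40)),
        ("gun", 0.3, (0, 0, 5, 5))]       # below threshold, dropped
print(flag_detections(dets))
```

Keeping the actual category (knife, food, ...) alongside the prohibited flag lets the display show a specific label to the inspector rather than a bare alarm.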
The item detection model can be trained offline in advance, and the trained model used for real-time item detection. As shown in FIG. 11, the training process of the item detection model includes, for example, steps S1102 to S1108. In step S1102, a plurality of first training sample images are generated, each containing one or more prohibited items and one or more non-prohibited items, and the category of each item in each first training sample image is labeled. In step S1104, the item detection model is trained with the plurality of first training sample images. In step S1106, a plurality of real images generated by a security inspection machine are acquired as second training sample images, and the category of each item in each second training sample image is labeled. In step S1108, the parameters of the item detection model are adjusted with the plurality of second training sample images to complete training of the model.
In some embodiments, as shown in FIG. 12, step S1104 includes: in step S1202, converting the first training sample images into first grayscale images; in step S1204, inputting the first grayscale images into the feature extraction network of the item detection model to obtain the output image features of each first grayscale image, the feature extraction network having parameters determined by pre-training; and in step S1206, training the target detection network of the model with the image features of each first grayscale image. Step S1108 includes: in step S1208, converting the second training sample images into second grayscale images; in step S1210, inputting the second grayscale images into the feature extraction network to obtain the output image features of each second grayscale image; and in step S1212, adjusting the parameters of the target detection network of the model with the image features of each second grayscale image.
Because training the model requires a large number of training sample images, and real images generated by security inspection machines are not easy to obtain in practice, the first training sample images can be generated from collected images of various items. For example, with reference to actual conditions, images of various prohibited and non-prohibited items can be collected (e.g., scraped from the internet). A plurality of first training sample images can then be generated by image fusion, each containing one or more prohibited items and one or more non-prohibited items. Before fusion, the image of each item can be preprocessed by scaling, rotation, blurring, color conversion, and so on, so that the generated first training sample images more closely resemble real images.
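A minimal sketch of this synthetic-sample generation, under simplifying assumptions: "fusion" here is a plain paste of an item crop onto a background at a random position, whereas a real pipeline would also blend edges and apply the scaling, rotation, blurring, and color conversion the text describes. All function names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def paste(background, item, top, left):
    """Fuse an item crop into a copy of the background at (top, left)."""
    out = background.copy()
    h, w = item.shape[:2]
    out[top:top + h, left:left + w] = item
    return out

def make_sample(background, items):
    """Compose one synthetic training image plus its category labels."""
    sample, labels = background, []
    for crop, category in items:
        h, w = crop.shape[:2]
        top = int(rng.integers(0, background.shape[0] - h + 1))
        left = int(rng.integers(0, background.shape[1] - w + 1))
        sample = paste(sample, crop, top, left)
        labels.append((category, top, left))
    return sample, labels

bg = np.zeros((32, 32, 3), dtype=np.uint8)
knife = np.full((4, 8, 3), 255, dtype=np.uint8)     # stand-in item crop
sample, labels = make_sample(bg, [(knife, "prohibited")])
print(labels[0][0], sample.max())   # one labeled item pasted into the image
```

The returned labels (category plus paste position) are exactly the annotations the detection training needs, which is the main advantage of generating samples rather than hand-labeling real scans.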
The feature extraction network in the item detection model can be pre-trained on a public dataset (e.g., the COCO dataset) to obtain its network parameters. Then, by transfer learning, the parameters of the feature extraction network are retained, and the target detection network in the model is trained with the plurality of first training sample images. For example, each first training sample image is input into the item detection model to obtain the output category of each item in the image; a first loss function is computed from the output categories and the labeled categories, and the parameters of the target detection network are adjusted according to the first loss function; the process is repeated until a preset condition is satisfied, yielding a preliminarily trained item detection model. Then the model is fine-tuned with the plurality of real images generated by the security inspection machine as second training sample images: on the basis of the preliminarily trained model, the second training sample images are input for further training, finally yielding the fully trained item detection model. The first and second training sample images can also be processed into corresponding grayscale images by the methods of the foregoing embodiments before being used to train the model, not repeated here.
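The two-stage scheme (keep the pre-trained feature extractor fixed, train the detection head on synthetic data, then fine-tune on real data) can be illustrated with a deliberately tiny numpy stand-in: a frozen "backbone" weight and a trainable "head" weight fit by gradient descent. In a real MobileNet-SSD this would instead be done by setting the framework's `requires_grad`/`trainable` flags on the backbone layers; nothing below is the disclosure's actual implementation.

```python
# Toy "model": frozen backbone weight wb, trainable head weight wh,
# fitting y = wh * (wb * x) by least squares with gradient descent.
def train_head(wb, wh, data, lr=0.05, steps=200):
    for _ in range(steps):
        for x, y in data:
            feat = wb * x                  # frozen feature extractor
            err = wh * feat - y
            wh -= lr * err * feat          # only the head is updated
    return wh

wb = 2.0                                   # "pre-trained" backbone, kept fixed
wh = 0.0
synthetic = [(1.0, 6.0), (2.0, 12.0)]      # stage 1: synthetic samples, y = 3 * (wb * x)
wh = train_head(wb, wh, synthetic)

real = [(1.0, 6.2), (2.0, 12.4)]           # stage 2: fine-tune on "real" samples
wh = train_head(wb, wh, real, lr=0.01, steps=100)
print(round(wh, 2))                        # close to 3.1 after fine-tuning
```

Note the smaller learning rate in stage 2: fine-tuning on the scarce real images should nudge, not overwrite, what stage 1 learned, mirroring the disclosure's "adjustment" of the preliminarily trained model.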
The above training process can complete effective training of the item detection model even when few real images are generated by the security inspection machine, ensuring detection accuracy. When enough real images are available, they can be used directly to train the item detection model.
In the method of the above embodiments, the image to be identified generated by the security inspection machine scanning the items to be detected is acquired, converted into a grayscale image, and input into the item detection model to identify whether each item is a prohibited item. The method uses machine learning to recognize the images to be identified, allowing a large number of images to be recognized accurately, in real time, and without interruption, improving the accuracy of item security detection. In addition, since different items in the images generated by the security inspection machine show clear color differences, converting the images to grayscale reduces the computational complexity of the detection process while preserving those differences, improving the efficiency of security inspection.
Other embodiments of the item detection method of the present disclosure are described below with reference to FIG. 2.
FIG. 2 is a flowchart of other embodiments of the item detection method of the present disclosure. As shown in FIG. 2, the method of this embodiment includes steps S202 to S210.
In step S202, an image to be identified, generated by a security inspection machine scanning one or more items to be detected, is acquired.
In step S204, the image to be identified is converted into a grayscale image to be identified.
In step S206, the grayscale image to be identified is input into the item detection model to determine whether each of the one or more items to be detected in the grayscale image is a prohibited item.
Steps S202 to S206 can be explained with reference to the description of the foregoing embodiments.
In step S208, when it is determined that the one or more items to be detected contain a prohibited item, an alarm message is issued.
When a prohibited item is detected, an alarm can be raised automatically to alert the security inspector, and a stop message can also be sent to the security inspection machine so that the prohibited item can be dealt with.
In step S210, according to the position information of each item to be detected in the grayscale image, identification information indicating whether each item is a prohibited item is mapped onto the image to be identified, and the image with the identification information is sent to a display device for display.
Steps S208 and S210 can be executed in parallel, in no particular order.
The position of each item to be detected in the grayscale image corresponds to its position in the image to be identified, so the identification information of prohibited and non-prohibited items can be mapped directly onto the image to be identified; the image with the identification information can then be sent to the display device of the security inspection machine for the security inspector to view.
In the present disclosure, the methods of the above embodiments can be implemented by an item detection device, which can be attached externally to a security inspection system (security inspection machine, display device, etc.). The acquisition module captures the X-ray image of the items and passes it to the image processing module and the detection module. The detection module uses a machine learning model to detect the image; when a prohibited item is detected, the alarm mechanism is triggered to notify the staff to recheck. As a whole, the item detection device is an embedded module that can be connected to any security inspection machine, handing the originally manual decision-making over to an algorithm for intelligent identification and alarm. This enables intelligent retrofitting of existing gates, improves item detection efficiency and accuracy, and reduces labor costs.
The present disclosure also provides an item detection device, described below with reference to FIG. 3.
FIG. 3 is a structural diagram of some embodiments of the item detection device of the present disclosure. As shown in FIG. 3, the device 30 of this embodiment includes: an acquisition module 302, an image processing module 304, and a detection module 306.
The acquisition module 302 is configured to acquire an image to be identified generated by a security inspection machine scanning one or more items to be detected.
The image processing module 304 is configured to convert the image to be identified into a grayscale image to be identified.
In some embodiments, the image processing module 304 is configured to remove noise from the image to be identified to obtain a denoised image; remove the background from the denoised image to obtain a target area image; and convert the target area image into a grayscale image, as the grayscale image to be identified, according to its hue, saturation, and value (HSV) features.
In some embodiments, the image processing module 304 is configured to input the denoised image into an image segmentation model to determine the category of each pixel, the categories including a foreground category and a background category; extract a mask image of the denoised image according to the category of each pixel; and perform a bitwise AND of pixel values between the mask image and the image to be identified to obtain the target area image.
In some embodiments, the image processing module 304 is configured to convert the red, green, and blue (RGB) values of each pixel in the target area image into HSV values, and convert the target area image into a grayscale image, as the grayscale image to be identified, according to the hue value of each pixel.
The detection module 306 is configured to input the grayscale image to be identified into the item detection model to determine whether each of the one or more items to be detected in the grayscale image is a prohibited item.
In some embodiments, the detection module 306 is configured to input the grayscale image into a feature extraction network in the item detection model to obtain output image features of the grayscale image, the feature extraction network being a lightweight neural network model; input the image features into a target detection network in the item detection model to obtain output category information of each item to be detected; and determine, from the category information, whether each item to be detected is a prohibited item.
In some embodiments, the device 30 further includes: an alarm module 308 configured to issue an alarm message when it is determined that the one or more items to be detected contain a prohibited item.
In some embodiments, the item detection model also outputs position information of each item to be detected in the grayscale image; the device 30 further includes: a display module 310 configured to map, according to the position information of each item in the grayscale image, identification information indicating whether each item is prohibited onto the image to be identified, and send the image to be identified with the identification information to a display device for display.
In some embodiments, the device 30 further includes: a training module 312 configured to generate a plurality of first training sample images, each containing one or more prohibited items and one or more non-prohibited items, and label the category of each item in each first training sample image; train the item detection model with the plurality of first training sample images; acquire a plurality of real images generated by the security inspection machine as second training sample images, and label the category of each item in each second training sample image; and adjust the parameters of the item detection model with the plurality of second training sample images to complete training of the item detection model.
In some embodiments, the training module is configured to convert the first training sample images into first grayscale images; input the first grayscale images into the feature extraction network in the item detection model to obtain the output image features of each first grayscale image, the feature extraction network having parameters determined by pre-training; train the target detection network with the image features of each first grayscale image; convert the second training sample images into second grayscale images; input the second grayscale images into the feature extraction network to obtain the output image features of each second grayscale image; and adjust the parameters of the target detection network with the image features of each second grayscale image.
The item detection devices in the embodiments of the present disclosure can each be implemented by various computing devices or computer systems, described below with reference to FIG. 4 and FIG. 5.
FIG. 4 is a structural diagram of some embodiments of the item detection device of the present disclosure. As shown in FIG. 4, the device 40 of this embodiment includes: a memory 410 and a processor 420 coupled to the memory 410, the processor 420 being configured to execute the item detection method of any of the embodiments of the present disclosure based on instructions stored in the memory 410.
The memory 410 may include, for example, system memory, a fixed non-volatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, a boot loader, a database, and other programs.
FIG. 5 is a structural diagram of other embodiments of the item detection device of the present disclosure. As shown in FIG. 5, the device 50 of this embodiment includes: a memory 510 and a processor 520, similar to the memory 410 and the processor 420 respectively. It may also include an input/output interface 530, a network interface 540, a storage interface 550, and so on. These interfaces 530, 540, 550, the memory 510, and the processor 520 may be connected, for example, via a bus 560. The input/output interface 530 provides a connection interface for input/output devices such as a display, mouse, keyboard, and touch screen. The network interface 540 provides a connection interface for various networked devices, for example a database server or a cloud storage server. The storage interface 550 provides a connection interface for external storage devices such as SD cards and USB drives.
The present disclosure also provides an item detection system, described below with reference to FIG. 6.
FIG. 6 is a structural diagram of some embodiments of the item detection system of the present disclosure. As shown in FIG. 6, the system 6 of this embodiment includes: the item detection device 30/40/50 of any of the foregoing embodiments, and a security inspection machine 62.
The security inspection machine 62 is used to scan one or more items to be detected to generate images to be identified.
In some embodiments, the system 6 further includes: a display device 64 configured to receive and display the image to be identified, with identification information, sent by the item detection device.
Those skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Therefore, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present disclosure may take the form of a computer program product implemented on one or more computer-usable non-transitory storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present disclosure is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present disclosure. It should be understood that each process and/or block of the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor produce a device for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, such that the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
The above are only preferred embodiments of the present disclosure and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present disclosure shall be included within its protection scope.

Claims (16)

  1. An item detection method, comprising:
    acquiring an image to be identified generated by a security inspection machine scanning one or more items to be detected;
    converting the image to be identified into a grayscale image to be identified;
    inputting the grayscale image to be identified into an item detection model, and determining whether each of the one or more items to be detected in the grayscale image to be identified is a prohibited item.
  2. The item detection method according to claim 1, wherein converting the image to be identified into a grayscale image to be identified comprises:
    removing noise from the image to be identified to obtain a denoised image;
    removing the background from the denoised image to obtain a target area image;
    converting the target area image into a grayscale image, as the grayscale image to be identified, according to hue, saturation, and value (HSV) features of the target area image.
  3. The item detection method according to claim 2, wherein removing the background from the denoised image to obtain a target area image comprises:
    inputting the denoised image into an image segmentation model, and determining the category to which each pixel belongs, the categories comprising a foreground category and a background category;
    extracting a mask image of the denoised image according to the category to which each pixel belongs;
    performing a bitwise AND operation of pixel values between the mask image and the image to be identified to obtain the target area image.
  4. The item detection method according to claim 2, wherein converting the target area image into a grayscale image, as the grayscale image to be identified, according to its HSV features comprises:
    converting red, green, and blue (RGB) values of each pixel in the target area image into HSV values;
    converting the target area image into a grayscale image, as the grayscale image to be identified, according to the hue value of each pixel.
  5. The item detection method according to claim 1, wherein inputting the grayscale image to be identified into an item detection model and determining whether each of the one or more items to be detected in the grayscale image to be identified is a prohibited item comprises:
    inputting the grayscale image to be identified into a feature extraction network in the item detection model to obtain output image features of the grayscale image to be identified, wherein the feature extraction network is a lightweight neural network model;
    inputting the image features into a target detection network in the item detection model to obtain output category information of each item to be detected;
    determining whether each item to be detected is a prohibited item according to its category information.
  6. The item detection method according to claim 1, further comprising:
    issuing an alarm message when it is determined that the one or more items to be detected contain a prohibited item.
  7. The item detection method according to claim 1, wherein the item detection model further outputs position information of each item to be detected in the grayscale image to be identified, and the method further comprises:
    mapping, according to the position information of each item to be detected in the grayscale image to be identified, identification information of whether each item to be detected is a prohibited item onto the image to be identified, and sending the image to be identified with the identification information to a display device for display.
  8. The item detection method according to claim 1, further comprising:
    generating a plurality of first training sample images, wherein each first training sample image contains one or more prohibited items and one or more non-prohibited items, and labeling the category of each of the one or more prohibited items and the one or more non-prohibited items in each first training sample image;
    training the item detection model with the plurality of first training sample images;
    acquiring a plurality of real images generated by a security inspection machine as second training sample images, and labeling the category of each item in each second training sample image;
    adjusting parameters of the item detection model with the plurality of second training sample images to complete training of the item detection model.
  9. The item detection method according to claim 8, wherein training the item detection model with the plurality of first training sample images comprises:
    converting the plurality of first training sample images into a plurality of first grayscale images;
    inputting the plurality of first grayscale images into a feature extraction network in the item detection model to obtain output image features of each first grayscale image, wherein the feature extraction network has parameters determined by pre-training;
    training a target detection network in the item detection model with the image features of each first grayscale image;
    and wherein adjusting parameters of the item detection model with the plurality of second training sample images comprises:
    converting the plurality of second training sample images into a plurality of second grayscale images;
    inputting the plurality of second grayscale images into the feature extraction network in the item detection model to obtain output image features of each second grayscale image;
    adjusting parameters of the target detection network in the item detection model with the image features of each second grayscale image.
  10. An item detection device, comprising:
    an acquisition module configured to acquire an image to be identified generated by a security inspection machine scanning one or more items to be detected;
    an image processing module configured to convert the image to be identified into a grayscale image to be identified;
    a detection module configured to input the grayscale image to be identified into an item detection model and determine whether each of the one or more items to be detected in the grayscale image to be identified is a prohibited item.
  11. The item detection device according to claim 10, further comprising:
    an alarm module configured to issue an alarm message when it is determined that the one or more items to be detected contain a prohibited item.
  12. The item detection device according to claim 10, wherein the item detection model further outputs position information of each item to be detected in the grayscale image to be identified, and the device further comprises:
    a display module configured to map, according to the position information of each item to be detected in the grayscale image to be identified, identification information of whether each item to be detected is a prohibited item onto the image to be identified, and send the image to be identified with the identification information to a display device for display.
  13. An item detection device, comprising:
    a processor; and
    a memory coupled to the processor for storing instructions which, when executed by the processor, cause the processor to perform the item detection method according to any one of claims 1-9.
  14. A computer-readable non-transitory storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1-9.
  15. An item detection system, comprising: the item detection device according to any one of claims 10-13; and
    a security inspection machine configured to scan one or more items to be detected to generate an image to be identified.
  16. The item detection system according to claim 15, further comprising:
    a display device configured to receive and display the image to be identified with identification information sent by the item detection device.
PCT/CN2020/116728 2019-10-16 2020-09-22 物品检测方法、装置、系统和计算机可读存储介质 WO2021073370A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910981844.1 2019-10-16
CN201910981844.1A CN110751079A (zh) 2019-10-16 2019-10-16 物品检测方法、装置、系统和计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021073370A1 true WO2021073370A1 (zh) 2021-04-22

Family

ID=69278456

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/116728 WO2021073370A1 (zh) 2019-10-16 2020-09-22 物品检测方法、装置、系统和计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN110751079A (zh)
WO (1) WO2021073370A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113221737A (zh) * 2021-05-11 2021-08-06 杭州海康威视数字技术股份有限公司 一种物料信息的确定方法、装置、设备及存储介质
CN113220927A (zh) * 2021-05-08 2021-08-06 百度在线网络技术(北京)有限公司 图像检测方法、装置、设备及存储介质
CN113506283A (zh) * 2021-07-26 2021-10-15 浙江大华技术股份有限公司 图像处理方法及装置、存储介质、电子装置
CN113515248A (zh) * 2021-05-24 2021-10-19 浙江华视智检科技有限公司 安检仪的显示方法、控制设备及计算机可读存储介质
CN114332543A (zh) * 2022-01-10 2022-04-12 成都智元汇信息技术股份有限公司 一种多模板的安检图像识别方法、设备及介质
CN114548230A (zh) * 2022-01-25 2022-05-27 西安电子科技大学广州研究院 基于rgb色彩分离双路特征融合的x射线违禁物品检测方法
CN115797857A (zh) * 2022-11-07 2023-03-14 北京声迅电子股份有限公司 一种出行事件确定方法、安检方法以及事件管理方法
CN117272124A (zh) * 2023-11-23 2023-12-22 湖南苏科智能科技有限公司 一种基于能级的化学品分类方法、系统、设备及存储介质

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110751079A (zh) * 2019-10-16 2020-02-04 北京海益同展信息科技有限公司 物品检测方法、装置、系统和计算机可读存储介质
CN111401182B (zh) * 2020-03-10 2023-12-08 京东科技信息技术有限公司 针对饲喂栏的图像检测方法和装置
CN113740356A (zh) * 2020-05-29 2021-12-03 同方威视技术股份有限公司 图像采集方法、装置和非易失性计算机可读存储介质
CN113822859B (zh) * 2021-08-25 2024-02-27 日立楼宇技术(广州)有限公司 基于图像识别的物品检测方法、系统、装置和存储介质
CN113505771B (zh) * 2021-09-13 2021-12-03 华东交通大学 一种双阶段物品检测方法及装置
CN115761460B (zh) * 2023-01-10 2023-08-01 北京市农林科学院智能装备技术研究中心 大棚房风险识别方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080152082A1 (en) * 2006-08-16 2008-06-26 Michel Bouchard Method and apparatus for use in security screening providing incremental display of threat detection information and security system incorporating same
CN106503719A (zh) * 2016-09-27 2017-03-15 深圳增强现实技术有限公司 一种物体颜色提取与检测方法及装置
CN106932414A (zh) * 2015-12-29 2017-07-07 同方威视技术股份有限公司 检验检疫用检查系统及其方法
CN110020647A (zh) * 2018-01-09 2019-07-16 杭州海康威视数字技术股份有限公司 一种违禁品目标检测方法、装置及计算机设备
CN110751079A (zh) * 2019-10-16 2020-02-04 北京海益同展信息科技有限公司 物品检测方法、装置、系统和计算机可读存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105021529B (zh) * 2015-06-11 2017-10-17 浙江水利水电学院 融合光谱和图像信息的作物病虫害识别和区分方法
CN108198207A (zh) * 2017-12-22 2018-06-22 湖南源信光电科技股份有限公司 基于改进的Vibe模型和BP神经网络的多运动目标跟踪方法
CN110018524B (zh) * 2019-01-28 2020-12-04 同济大学 一种基于视觉-属性的x射线安检违禁品识别方法
CN109978827A (zh) * 2019-02-25 2019-07-05 平安科技(深圳)有限公司 基于人工智能的违禁物识别方法、装置、设备和存储介质
CN110096960B (zh) * 2019-04-03 2021-06-08 罗克佳华科技集团股份有限公司 目标检测方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080152082A1 (en) * 2006-08-16 2008-06-26 Michel Bouchard Method and apparatus for use in security screening providing incremental display of threat detection information and security system incorporating same
CN106932414A (zh) * 2015-12-29 2017-07-07 同方威视技术股份有限公司 检验检疫用检查系统及其方法
CN106503719A (zh) * 2016-09-27 2017-03-15 深圳增强现实技术有限公司 一种物体颜色提取与检测方法及装置
CN110020647A (zh) * 2018-01-09 2019-07-16 杭州海康威视数字技术股份有限公司 一种违禁品目标检测方法、装置及计算机设备
CN110751079A (zh) * 2019-10-16 2020-02-04 北京海益同展信息科技有限公司 物品检测方法、装置、系统和计算机可读存储介质

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113220927A (zh) * 2021-05-08 2021-08-06 百度在线网络技术(北京)有限公司 Image detection method, apparatus, device and storage medium
CN113221737A (zh) * 2021-05-11 2021-08-06 杭州海康威视数字技术股份有限公司 Material information determination method, apparatus, device and storage medium
CN113221737B (zh) * 2021-05-11 2023-09-05 杭州海康威视数字技术股份有限公司 Material information determination method, apparatus, device and storage medium
CN113515248A (zh) * 2021-05-24 2021-10-19 浙江华视智检科技有限公司 Display method for a security inspection machine, control device and computer-readable storage medium
CN113515248B (zh) * 2021-05-24 2024-03-29 浙江华视智检科技有限公司 Display method for a security inspection machine, control device and computer-readable storage medium
CN113506283A (zh) * 2021-07-26 2021-10-15 浙江大华技术股份有限公司 Image processing method and apparatus, storage medium and electronic apparatus
CN114332543A (zh) * 2022-01-10 2022-04-12 成都智元汇信息技术股份有限公司 Multi-template security inspection image recognition method, device and medium
CN114332543B (zh) * 2022-01-10 2023-02-14 成都智元汇信息技术股份有限公司 Multi-template security inspection image recognition method, device and medium
CN114548230A (zh) * 2022-01-25 2022-05-27 西安电子科技大学广州研究院 X-ray contraband detection method based on RGB color separation and dual-path feature fusion
CN114548230B (zh) * 2022-01-25 2024-03-26 西安电子科技大学广州研究院 X-ray contraband detection method based on RGB color separation and dual-path feature fusion
CN115797857A (zh) * 2022-11-07 2023-03-14 北京声迅电子股份有限公司 Travel event determination method, security inspection method and event management method
CN115797857B (zh) * 2022-11-07 2023-08-01 北京声迅电子股份有限公司 Travel event determination method, security inspection method and event management method
CN117272124A (zh) * 2023-11-23 2023-12-22 湖南苏科智能科技有限公司 Energy-level-based chemical classification method, system, device and storage medium
CN117272124B (zh) * 2023-11-23 2024-03-12 湖南苏科智能科技有限公司 Energy-level-based chemical classification method, system, device and storage medium

Also Published As

Publication number Publication date
CN110751079A (zh) 2020-02-04

Similar Documents

Publication Publication Date Title
WO2021073370A1 (zh) Item detection method, apparatus, system and computer-readable storage medium
US11282185B2 (en) Information processing device, information processing method, and storage medium
KR101863196B1 (ko) Apparatus and method for detecting surface defects based on deep learning
EP3633605A1 (en) Information processing device, information processing method, and program
CN103761529B (zh) Open flame detection method and system based on multiple color models and rectangular features
CN107909093B (zh) Item detection method and device
CN109977877B (zh) Intelligent assisted image judgment method and system for security inspection, and system control method
CN104077577A (zh) Trademark detection method based on a convolutional neural network
CN112800860B (zh) High-speed thrown-object detection method and system combining an event camera and a visual camera
CN102156881B (zh) Maritime search-and-rescue target detection method based on multi-scale image phase information
CN111461133B (zh) Express waybill item-name recognition method, apparatus, device and storage medium
EP3772722A1 (en) X-ray image processing system and method, and program therefor
Zou et al. Dangerous objects detection of X-ray images using convolution neural network
CN112069907A (zh) X-ray machine image recognition method, apparatus and system based on instance segmentation
CN113393482A (zh) Abandoned object detection method and apparatus based on a fusion algorithm
KR101990123B1 (ko) Apparatus and method for image analysis
CN116071315A (zh) Machine-vision-based product visual defect detection method and system
CN209895386U (zh) Image recognition system
CN110992324B (zh) Intelligent dangerous goods detection method and system based on X-ray images
CN107463934A (zh) Tunnel crack detection method and apparatus
Navada et al. Design of Mobile Application for Assisting Color Blind People to Identify Information on Sign Boards.
CN109299655A (zh) UAV-based online rapid identification method for marine oil spills
CN115359406A (zh) Human interaction behavior recognition method and system for post office scenes
CN114399671A (zh) Target recognition method and apparatus
CN112633287A (zh) Text recognition method and apparatus for multi-source heterogeneous image-text information in mines

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20877795

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10.08.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20877795

Country of ref document: EP

Kind code of ref document: A1