WO2021022865A1 - Suspicious object identification method, device and system - Google Patents

Suspicious object identification method, device and system

Info

Publication number
WO2021022865A1
Authority
WO
WIPO (PCT)
Prior art keywords
perspective
map
human body
suspect
image
Prior art date
Application number
PCT/CN2020/090973
Other languages
English (en)
French (fr)
Inventor
陈志强
李元景
吴万龙
桑斌
曹硕
程大卫
沈宗俊
丁先利
赵加江
Original Assignee
同方威视技术股份有限公司 (Nuctech Company Limited)
清华大学 (Tsinghua University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 同方威视技术股份有限公司 (Nuctech Company Limited) and 清华大学 (Tsinghua University)
Publication of WO2021022865A1 publication Critical patent/WO2021022865A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Definitions

  • the present disclosure relates to the field of image recognition, and in particular, to a method, device, and system for identifying suspects.
  • the structure of the internal organs of the human body is relatively complex, and various organs and bone structures may overlap in the perspective image.
  • when security inspectors view images, they need to distinguish normal human structures from prohibited items carried inside or outside the body, which places high demands on the inspectors' understanding of and familiarity with human anatomy.
  • some current fluoroscopy security equipment offers typical scanned images for reference, but scanned images of human bodies of different body types, genders, and ages differ considerably, so a handful of fixed typical images are of limited reference value.
  • a method for identifying suspects including:
  • the partition map, the recognition map and the matching map are displayed on the display.
  • the method for identifying suspects further includes:
  • the suspect is determined according to the candidate matching map.
  • determining the area where the suspect is located includes:
  • determining the suspect according to the candidate matching graph includes:
  • the contraband is determined as the suspect.
  • partitioning the scanned perspective view to obtain different regions, and marking the different regions with different colors to obtain the partition map includes:
  • the partitioned perspective library is obtained by partitioning and marking each human body perspective in the human body perspective library according to regions.
  • recognizing the scanned perspective view to obtain the recognition image includes:
  • the scanned perspective image is input to a second machine learning model to obtain the recognition image, wherein the second machine learning model is trained using the human body perspective images as an input and a recognition perspective library as an output.
  • the recognition perspective library is obtained by marking each human body perspective view in the human body perspective library according to the position, shape and size of the suspect.
  • the area is an area where an organ of the human body is located, and the information includes the location, shape and size of the suspect.
  • a suspicious object identification device including:
  • the scanning module is configured to scan the human body to obtain a scanning perspective view
  • the partition module is configured to partition the scanning perspective view to obtain different areas, and to mark the different areas with different colors to obtain a partition map;
  • An identification module configured to identify the scanned perspective image to obtain an identification image, the identification image displaying the information of the suspect;
  • the matching module is configured to match the scanned perspective image with the human body perspective library to find a matching image similar to the scanned perspective image in the human body perspective library;
  • the display module is configured to display the partition map, the identification map, and the matching map on a display.
  • the suspicious object identification device further includes: a determination module configured to:
  • the suspect is determined according to the candidate matching map.
  • the determining module is further configured to:
  • the determining module is further configured to:
  • the contraband is determined as the suspect.
  • the partition module is further configured to:
  • the partitioned perspective library is obtained by partitioning and marking each human body perspective in the human body perspective library according to regions.
  • the identification module is further configured to:
  • the scanned perspective image is input to a second machine learning model to obtain the recognition image, wherein the second machine learning model is trained using the human body perspective images as an input and a recognition perspective library as an output.
  • the recognition perspective library is obtained by marking each human body perspective view in the human body perspective library according to the position, shape and size of the suspect.
  • the area is an area where an organ of the human body is located, and the information includes the location, shape and size of the suspect.
  • Fig. 1 shows a flowchart of a method for identifying a suspicious object according to an embodiment of the present disclosure;
  • Figs. 2a to 2d show schematic diagrams of a display interface for identifying a suspicious object according to an embodiment of the present disclosure;
  • Fig. 3 shows a flowchart of a method for identifying a suspicious object according to another embodiment of the present disclosure;
  • Fig. 4 shows a block diagram of a suspicious object identification device according to an embodiment of the present disclosure; and
  • Fig. 5 shows a schematic diagram of a suspicious object identification system according to an embodiment of the present disclosure.
  • FIG. 1 shows a flowchart of a suspect identification method 100 according to an embodiment of the present disclosure.
  • step S110 the human body can be scanned to obtain a scanned perspective view.
  • the scanned perspective view may be partitioned to obtain different regions, and the different regions may be marked with different colors to obtain a partition map.
  • the area may be an area where an organ of the human body is located.
  • Step S120 may include: inputting the scanned perspective view into a first machine learning model (for example, a DeepLab semantic segmentation model) to obtain a partition map, wherein the first machine learning model is trained using a human body perspective library as input and a partition perspective library as output.
  • the partitioned perspective library is obtained by partitioning and marking each human body perspective in the human body perspective library according to regions.
  • the human body perspective gallery includes perspective images of the human body in different states, including: body type, gender, age, physical condition (for example, intestinal flatulence, food digestion, etc.), whether a suspect is carried, and the different kinds of suspects carried, and so on.
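The colour-marking half of the partition step described above is easy to illustrate. Below is a minimal sketch, not the patent's implementation: it assumes a per-pixel region label map has already been produced by a segmentation model such as DeepLab, and the palette values and region names are hypothetical.

```python
import numpy as np

# Hypothetical fixed palette: region label -> RGB colour (assumed values).
PALETTE = {
    0: (0, 0, 0),        # background
    1: (255, 99, 71),    # e.g. chest region
    2: (65, 105, 225),   # e.g. abdominal region
    3: (60, 179, 113),   # e.g. pelvic region
}

def colorize_partition(labels: np.ndarray) -> np.ndarray:
    """Turn an HxW map of region labels into an HxWx3 colour partition map."""
    out = np.zeros((*labels.shape, 3), dtype=np.uint8)
    for label, rgb in PALETTE.items():
        out[labels == label] = rgb       # paint every pixel of this region
    return out

labels = np.array([[0, 1], [2, 3]])      # toy 2x2 label map
partition_map = colorize_partition(labels)
```

In a real system the label map would come from the first machine learning model; only the colouring convention is shown here.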
  • the scanned perspective image may be recognized to obtain an identification image, and the identification image displays the information of the suspect, such as the location, shape, and size of the suspect.
  • Step S130 may include: inputting the scanned perspective view into a second machine learning model (for example, a Siamese network model) to obtain a recognition map, wherein the second machine learning model is trained using the human body perspective views as input and the recognition perspective gallery as output, and the recognition perspective gallery is obtained by marking each human body perspective in the human body perspective gallery according to the information (for example, position, shape, and size) of the suspect.
  • step S140 the scanned perspective image can be matched with the human body perspective library to find a matching image similar to the scanned perspective image in the human body perspective library.
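The matching of step S140 amounts to a similarity search over the human body perspective library. The sketch below assumes each image has already been reduced to a feature vector (the feature extractor is not specified by the disclosure and is left out), and uses cosine similarity as a stand-in for whatever measure the actual system uses.

```python
import numpy as np

def top_matches(scan_vec, library_vecs, k=3):
    """Return indices of the k library images most similar to the scan,
    ranked by cosine similarity of their feature vectors."""
    lib = np.asarray(library_vecs, dtype=float)
    q = np.asarray(scan_vec, dtype=float)
    sims = lib @ q / (np.linalg.norm(lib, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:k].tolist()   # best match first

# Toy 2-dimensional "feature" library.
library = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
matches = top_matches([1.0, 0.05], library, k=2)
```

Returning several indices mirrors the behaviour described next: when more than one matching map is found, all of them can be shown in turn.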
  • step S150 the partition map, the identification map, and the matching map can be displayed on the display for the security personnel to identify the suspect. When more than one matching map is found, these matching maps can be displayed on the display in turn.
  • After step S150, when the security inspector determines a region of interest (for example, a certain organ) in the identification map, image processing (for example, contrast enhancement, magnification, gray-scale stretching, etc.) can be performed on the region of interest to obtain a clearer image of it, allowing more accurate identification of suspects.
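One of the enhancements named above, gray-scale stretching, can be sketched in a few lines. The percentile bounds here are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def gray_stretch(roi: np.ndarray, low_pct=2, high_pct=98) -> np.ndarray:
    """Percentile-based grey-scale stretch: map the [low, high] intensity
    range of the region of interest onto the full 0..255 range."""
    lo, hi = np.percentile(roi, [low_pct, high_pct])
    if hi <= lo:                          # flat region: nothing to stretch
        return roi.astype(np.uint8)
    out = (roi.astype(float) - lo) / (hi - lo)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)

roi = np.array([[100, 120], [140, 160]], dtype=np.uint8)
enhanced = gray_stretch(roi)              # low intensities -> 0, high -> 255
```

Contrast enhancement and magnification would be applied to the same cropped region in the same spirit.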
  • FIGS. 2a to 2d show schematic diagrams of display interfaces for identifying suspects according to an embodiment of the present disclosure.
  • the display interface for identifying suspects displays the partition map, recognition map, and matching map obtained in the above steps S120, S130, and S140, respectively, which can be appropriately arranged on the display according to the size of the display or the preference of the security personnel.
  • Figure 2a and Figure 2b are vertical screen display interfaces.
  • in Figure 2a, the upper part shows the partition map and the matching map from left to right, and the lower part shows the identification map; in Figure 2b, the upper part shows the matching map and the partition map from left to right, and the lower part shows the identification map.
  • Figures 2c and 2d are horizontal screen display interfaces.
  • in Figure 2c, the left side shows the matching map and the partition map from top to bottom, and the right side shows the identification map; in Figure 2d, the left side shows the partition map and the matching map from top to bottom, and the right side shows the identification map.
  • the arrangement of the partition map, the identification map, and the matching map is not limited to the arrangement shown in FIGS. 2a to 2d, and those skilled in the art can set the arrangement as appropriate.
  • when viewing the display interface, the security personnel can identify the suspect in the identification map by referring to the partition map and the matching map, and can thus easily determine whether a suspect is present and, if so, the region where it is located and its type.
  • of course, the present disclosure is not limited to identifying the suspect by the security personnel's visual inspection; the identification result can also be given automatically by processing the partition map, the recognition map, and the matching map.
  • a suspect identification method according to another embodiment of the present disclosure will be described with reference to FIG. 3.
  • FIG. 3 shows a flowchart of a suspect identification method 300 according to another embodiment of the present disclosure.
  • the steps S310 to S350 of the suspicious object identification method 300 are the same as the steps S110 to S150 of the above-mentioned suspicious object identification method 100, and therefore will not be repeated here.
  • step S360 the area where the suspect is located is determined.
  • Step S360 may include: extracting feature points in the partition map and the recognition map respectively, and aligning the partition map and the recognition map according to the extracted feature points to determine the area where the suspect is located.
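The alignment in step S360 can be sketched as fitting a transform to the extracted feature-point correspondences. The least-squares affine formulation below is an illustration only: the disclosure names neither the feature detector nor the transform model, so both are assumptions here, and the correspondences are taken as already matched.

```python
import numpy as np

def fit_affine(src_pts, dst_pts) -> np.ndarray:
    """Least-squares 2x3 affine transform mapping src points onto dst points.
    Point correspondences are assumed to come from a feature matcher."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    ones = np.ones((len(src), 1))
    A = np.hstack([src, ones])                   # N x 3 design matrix
    # Solve A @ X ~= dst for the 3x2 parameter matrix, one column per axis.
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X.T                                   # 2 x 3 affine matrix

# Pure-translation example: every feature point shifted by (+5, -2).
src = np.array([[0, 0], [10, 0], [0, 10]])
dst = src + np.array([5, -2])
M = fit_affine(src, dst)                         # recovers the shift
```

Once the partition map and the recognition map are in a common frame, the region containing the suspect can be read off directly.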
  • step S370 the area where the contraband is located is determined in each of the matching maps.
  • step S380 determine a candidate matching map in which the area where the contraband is located in the matching map is the same as the area where the suspect is located.
  • step S390 the suspect is determined according to the candidate matching map.
  • Step S390 may include: if there is only one candidate matching map, determining the contraband in that candidate matching map as the suspect; if there is more than one candidate matching map, determining whether there is exactly one kind of contraband with the highest frequency of occurrence among the candidate matching maps; if so, determining that contraband as the suspect; and if there is no single most frequent kind of contraband (for example, if at least two kinds of contraband occur with the same frequency in the candidate matching maps), determining to perform the scanning process again.
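The decision rule of step S390 maps directly onto a small frequency-counting routine. A sketch, with `None` standing in (as an assumption) for the "perform the scanning process again" outcome:

```python
from collections import Counter

def decide(candidate_contraband):
    """Apply the S390-style decision rule.
    candidate_contraband: one contraband label per candidate matching map.
    Returns the identified suspect label, or None to signal a rescan."""
    if not candidate_contraband:
        return None                        # no candidates: scan again
    if len(candidate_contraband) == 1:
        return candidate_contraband[0]     # a single candidate decides directly
    counts = Counter(candidate_contraband).most_common()
    if len(counts) == 1 or counts[0][1] > counts[1][1]:
        return counts[0][0]                # a unique most-frequent contraband
    return None                            # tie in frequency: repeat the scan

result_unique = decide(["knife", "knife", "lighter"])
result_tie = decide(["knife", "lighter"])
```

`most_common()` sorts by descending count, so comparing the top two counts detects exactly the "at least two kinds with the same frequency" tie described above.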
  • the method for identifying the suspicious object may further include: if the suspicious object is identified, marking the suspicious object with a special color and issuing a warning.
  • FIG. 4 shows a block diagram of a suspicious object identification device 400 according to an embodiment of the present disclosure.
  • the suspicious object identification device 400 may include a scanning module 410, a partitioning module 420, an identification module 430, a matching module 440, and a display module 450.
  • the scanning module 410 may be configured to scan the human body to obtain a scanned perspective view.
  • the partition module 420 may be configured to partition the scanned perspective view to obtain different regions, and to mark the different regions with different colors to obtain the partition map.
  • the area may be an area where an organ of the human body is located.
  • the partition module 420 may also be configured to: input the scanned perspective view into the first machine learning model to obtain a partition map, where the first machine learning model is trained using the human body perspective library as input and the partition perspective library as output.
  • the partitioned perspective library is obtained by partitioning and marking each human body perspective in the human body perspective library according to regions.
  • the human body perspective gallery includes perspective images of the human body in different states, including: body type, gender, age, physical condition (for example, intestinal flatulence, food digestion, etc.), whether a suspect is carried, and the different kinds of suspects carried, and so on.
  • the recognition module 430 may be configured to recognize the scanned perspective view to obtain a recognition map, and the recognition map displays information (for example, position, shape, and size, etc.) of the suspect.
  • the recognition module 430 may also be configured to: input the scanned perspective view to a second machine learning model to obtain a recognition image, where the second machine learning model is trained using the human body perspective view as input and the recognition perspective library as output.
  • the recognition perspective library is obtained by marking each human body perspective view in the human body perspective library according to the information (for example, position, shape, size, etc.) of the suspect.
  • the matching module 440 may be configured to match the scanned perspective image with the human perspective library to find a matching image similar to the scanned perspective image in the human perspective library.
  • the display module 450 may be configured to display the zone map, the recognition map, and the matching map on the display.
  • The suspect identification device 400 may also include a determination module configured to: determine the area where the suspect is located; determine the area where the contraband is located in each of the matching maps; determine candidate matching maps in which the area where the contraband is located is the same as the area where the suspect is located; and determine the suspect based on the candidate matching maps.
  • the determining module may also be configured to: extract feature points in the partition map and the recognition map respectively; and align the partition map and the recognition map according to the extracted feature points to determine the area where the suspect is located.
  • the determining module may also be configured to: if there is only one candidate matching map, determine the contraband in that candidate matching map as the suspect; if there is more than one candidate matching map, determine whether there is exactly one kind of contraband with the highest frequency of occurrence among the candidate matching maps; if so, determine that contraband as the suspect; and if there is no single most frequent kind of contraband among the candidate matching maps, determine to perform the scanning process again.
  • In this assisted identification method, the scanned human body image is partitioned and color-labeled, and at the same time images similar to the scanned human body image are found in the human body perspective library as matching maps for comparative display. The suspicious objects in the image are then identified by the security personnel or automatically, thereby effectively assisting security inspectors in determining whether a suspect is present and, if so, obtaining information about it.
  • FIG. 5 schematically shows a schematic diagram of a suspicious object identification system 500 according to an embodiment of the present disclosure.
  • the system 500 may include a processor 510, for example, a digital signal processor (DSP).
  • the processor 510 may be a single device or multiple devices for performing different actions of the processes described herein.
  • the system 500 may also include an input/output (I/O) device 530 for receiving signals from or sending signals to other entities.
  • the system 500 may include a memory 520, which may have the following form: non-volatile or volatile memory, for example, electrically erasable programmable read-only memory (EEPROM), flash memory, and the like.
  • the memory 520 may store computer-readable instructions, and when the processor 510 executes the computer-readable instructions, the computer-readable instructions may cause the processor to perform the actions described herein.
  • the technology of the present disclosure can be implemented in the form of hardware and/or software (including firmware, microcode, etc.).
  • the technology of the present disclosure may take the form of a computer program product on a computer-readable medium storing instructions, for use by, or in connection with, an instruction execution system (for example, one or more processors).
  • a computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport instructions.
  • a computer-readable medium may include, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium.
  • Specific examples of computer-readable media include: magnetic storage devices, such as magnetic tape or hard disks (HDD); optical storage devices, such as optical disks (CD-ROM); memory, such as random access memory (RAM) or flash memory; and/or wired/wireless communication links.
  • examples of signal bearing media include, but are not limited to: recordable media, such as floppy disks, hard disk drives, compact disks (CDs), digital versatile disks (DVDs), digital tape, computer memory, etc.; and transmission media, such as digital and/or analog communication media (for example, fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).

Abstract

Embodiments of the present disclosure disclose a suspicious object identification method, device, and system. The suspicious object identification method includes: scanning a human body to obtain a scanned perspective view; partitioning the scanned perspective view to obtain different regions, and marking the different regions with different colors to obtain a partition map; recognizing the scanned perspective view to obtain a recognition map, the recognition map displaying information about a suspicious object; matching the scanned perspective view against a human body perspective library to find, in the human body perspective library, matching maps similar to the scanned perspective view; and displaying the partition map, the recognition map, and the matching maps on a display.

Description

Suspicious object identification method, device and system
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 201910733193.4, filed on August 8, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image recognition, and in particular to a suspicious object identification method, device, and system.
Background
Human body security inspection technology is widely used in airports, prisons, and various important public places. Inspection methods such as millimeter wave and backscatter imaging have the technical advantage of zero or low radiation dose, but can only detect contraband carried outside the human body. Fluoroscopy security inspection technology forms images from the attenuation of X-rays passing through the human body and can detect contraband carried both inside and outside the body; in particular, it can effectively detect drugs concealed inside the body.
However, the structure of the organs inside the human torso is relatively complex, and various organs and bone structures may overlap in the fluoroscopic image. When viewing images, security inspectors need to distinguish normal human structures from prohibited items carried inside or outside the body, which places high demands on the inspectors' understanding of and familiarity with human anatomy. Some current fluoroscopy security equipment offers typical scanned images for reference, but scanned images of human bodies of different body types, genders, and ages differ considerably, so a handful of fixed typical images are of limited reference value.
Summary
According to one aspect of the embodiments of the present disclosure, there is provided a suspicious object identification method, including:
scanning a human body to obtain a scanned perspective view;
partitioning the scanned perspective view to obtain different regions, and marking the different regions with different colors to obtain a partition map;
recognizing the scanned perspective view to obtain a recognition map, the recognition map displaying information about a suspicious object;
matching the scanned perspective view against a human body perspective library to find, in the human body perspective library, matching maps similar to the scanned perspective view; and
displaying the partition map, the recognition map, and the matching maps on a display.
In one embodiment, the suspicious object identification method further includes:
determining the region where the suspicious object is located;
determining, in each of the matching maps, the region where contraband is located;
determining candidate matching maps in which the region where the contraband is located is the same as the region where the suspicious object is located; and
determining the suspicious object based on the candidate matching maps.
In one embodiment, determining the region where the suspicious object is located includes:
extracting feature points in the partition map and the recognition map, respectively; and
aligning the partition map and the recognition map according to the extracted feature points to determine the region where the suspicious object is located.
In one embodiment, determining the suspicious object based on the candidate matching maps includes:
if there is only one candidate matching map, determining the contraband in that candidate matching map as the suspicious object;
if there is more than one candidate matching map, determining whether there is exactly one kind of contraband with the highest frequency of occurrence among the candidate matching maps;
if it is determined that there is exactly one such kind of contraband, determining that contraband as the suspicious object; and
if it is determined that there is no single most frequent kind of contraband among the candidate matching maps, determining to perform the scanning process again.
In one embodiment, partitioning the scanned perspective view to obtain different regions, and marking the different regions with different colors to obtain a partition map, includes:
inputting the scanned perspective view into a first machine learning model to obtain the partition map, wherein the first machine learning model is trained using the human body perspective library as input and a partition perspective library as output, the partition perspective library being obtained by partitioning each human body perspective view in the human body perspective library by region and marking the regions with colors.
In one embodiment, recognizing the scanned perspective view to obtain a recognition map includes:
inputting the scanned perspective view into a second machine learning model to obtain the recognition map, wherein the second machine learning model is trained using the human body perspective views as input and a recognition perspective library as output, the recognition perspective library being obtained by marking each human body perspective view in the human body perspective library according to the position, shape, and size of the suspicious object.
In one embodiment, the region is a region where an organ of the human body is located, and the information includes the position, shape, and size of the suspicious object.
According to another aspect of the embodiments of the present disclosure, there is provided a suspicious object identification device, including:
a scanning module configured to scan a human body to obtain a scanned perspective view;
a partitioning module configured to partition the scanned perspective view to obtain different regions, and to mark the different regions with different colors to obtain a partition map;
a recognition module configured to recognize the scanned perspective view to obtain a recognition map, the recognition map displaying information about a suspicious object;
a matching module configured to match the scanned perspective view against a human body perspective library to find, in the human body perspective library, matching maps similar to the scanned perspective view; and
a display module configured to display the partition map, the recognition map, and the matching maps on a display.
In one embodiment, the suspicious object identification device further includes a determination module configured to:
determine the region where the suspicious object is located;
determine, in each of the matching maps, the region where contraband is located;
determine candidate matching maps in which the region where the contraband is located is the same as the region where the suspicious object is located; and
determine the suspicious object based on the candidate matching maps.
In one embodiment, the determination module is further configured to:
extract feature points in the partition map and the recognition map, respectively; and
align the partition map and the recognition map according to the extracted feature points to determine the region where the suspicious object is located.
In one embodiment, the determination module is further configured to:
if there is only one candidate matching map, determine the contraband in that candidate matching map as the suspicious object;
if there is more than one candidate matching map, determine whether there is exactly one kind of contraband with the highest frequency of occurrence among the candidate matching maps;
if it is determined that there is exactly one such kind of contraband, determine that contraband as the suspicious object; and
if it is determined that there is no single most frequent kind of contraband among the candidate matching maps, determine to perform the scanning process again.
In one embodiment, the partitioning module is further configured to:
input the scanned perspective view into a first machine learning model to obtain the partition map, wherein the first machine learning model is trained using the human body perspective library as input and a partition perspective library as output, the partition perspective library being obtained by partitioning each human body perspective view in the human body perspective library by region and marking the regions with colors.
In one embodiment, the recognition module is further configured to:
input the scanned perspective view into a second machine learning model to obtain the recognition map, wherein the second machine learning model is trained using the human body perspective views as input and a recognition perspective library as output, the recognition perspective library being obtained by marking each human body perspective view in the human body perspective library according to the position, shape, and size of the suspicious object.
In one embodiment, the region is a region where an organ of the human body is located, and the information includes the position, shape, and size of the suspicious object.
Brief Description of the Drawings
The above and other objects, features, and advantages of the present disclosure will become clearer from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
Fig. 1 shows a flowchart of a suspicious object identification method according to an embodiment of the present disclosure;
Figs. 2a to 2d show schematic diagrams of a display interface for identifying suspicious objects according to an embodiment of the present disclosure;
Fig. 3 shows a flowchart of a suspicious object identification method according to another embodiment of the present disclosure;
Fig. 4 shows a block diagram of a suspicious object identification device according to an embodiment of the present disclosure; and
Fig. 5 shows a schematic diagram of a suspicious object identification system according to an embodiment of the present disclosure.
The drawings do not show all circuits or structures of the embodiments. Throughout the drawings, the same reference numerals denote the same or similar components or features.
Detailed Description
Embodiments of the present disclosure will be described below with reference to the accompanying drawings. It should be understood, however, that these descriptions are merely exemplary and are not intended to limit the scope of the present disclosure. Furthermore, descriptions of well-known structures and technologies are omitted below to avoid unnecessarily obscuring the concepts of the present disclosure.
The terminology used herein is for the purpose of describing specific embodiments only and is not intended to limit the present disclosure. The words "a", "an", and "the" used herein should also cover the meanings of "a plurality of" and "multiple kinds of", unless the context clearly indicates otherwise. In addition, the terms "include" and "comprise" as used herein indicate the presence of features, steps, operations, and/or components, but do not exclude the presence or addition of one or more other features, steps, operations, or components.
All terms used herein (including technical and scientific terms) have the meanings commonly understood by those skilled in the art, unless otherwise defined. It should be noted that the terms used herein should be interpreted as having meanings consistent with the context of this specification, and should not be interpreted in an idealized or overly rigid manner.
Fig. 1 shows a flowchart of a suspicious object identification method 100 according to an embodiment of the present disclosure.
In step S110, a human body may be scanned to obtain a scanned perspective view.
In step S120, the scanned perspective view may be partitioned to obtain different regions, and the different regions may be marked with different colors to obtain a partition map. The regions may be regions where organs of the human body are located.
Step S120 may include: inputting the scanned perspective view into a first machine learning model (for example, a DeepLab semantic segmentation model) to obtain the partition map, wherein the first machine learning model is trained using the human body perspective library as input and the partition perspective library as output, the partition perspective library being obtained by partitioning each human body perspective view in the human body perspective library by region and marking the regions with colors. The human body perspective library includes perspective views of human bodies in different states, including: body type, gender, age, physical condition (for example, intestinal flatulence, food digestion, etc.), whether a suspicious object is carried, the different kinds of suspicious objects carried, and so on.
In step S130, the scanned perspective view may be recognized to obtain a recognition map, which displays information about the suspicious object, such as its position, shape, and size.
Step S130 may include: inputting the scanned perspective view into a second machine learning model (for example, a Siamese network model) to obtain the recognition map, wherein the second machine learning model is trained using the human body perspective views as input and the recognition perspective library as output, the recognition perspective library being obtained by marking each human body perspective view in the human body perspective library according to the information (for example, position, shape, and size) of the suspicious object.
In step S140, the scanned perspective view may be matched against the human body perspective library to find, in the human body perspective library, matching maps similar to the scanned perspective view.
In step S150, the partition map, the recognition map, and the matching maps may be displayed on a display for security personnel to identify suspicious objects. When more than one matching map is found, the matching maps may be displayed on the display in turn.
After step S150, when the security inspector determines a region of interest (for example, a certain organ) in the recognition map, image processing (for example, contrast enhancement, magnification, gray-scale stretching, etc.) may be performed on the region of interest to obtain a clearer image of it and thereby identify suspicious objects more accurately.
Figs. 2a to 2d show schematic diagrams of a display interface for identifying suspicious objects according to an embodiment of the present disclosure. The display interface shows the partition map, recognition map, and matching map obtained in steps S120, S130, and S140 above, which may be appropriately arranged on the display according to the size of the display or the preference of the security personnel. For example, Figs. 2a and 2b are portrait display interfaces: in Fig. 2a, the upper part shows the partition map and the matching map from left to right, and the lower part shows the recognition map; in Fig. 2b, the upper part shows the matching map and the partition map from left to right, and the lower part shows the recognition map. Figs. 2c and 2d are landscape display interfaces: in Fig. 2c, the left side shows the matching map and the partition map from top to bottom, and the right side shows the recognition map; in Fig. 2d, the left side shows the partition map and the matching map from top to bottom, and the right side shows the recognition map. Of course, the arrangement of the partition map, recognition map, and matching map is not limited to those shown in Figs. 2a to 2d, and those skilled in the art can set the arrangement as appropriate.
When viewing the display interface, security personnel identify the suspicious object in the recognition map by referring to the partition map and the matching map, and can thus easily determine whether a suspicious object is present and, if so, the region where it is located and its type. Of course, the present disclosure is not limited to identification of the suspicious object by the security personnel's visual inspection; the identification result can also be given automatically by processing the partition map, the recognition map, and the matching map. A suspicious object identification method according to another embodiment of the present disclosure will be described below with reference to Fig. 3.
Fig. 3 shows a flowchart of a suspicious object identification method 300 according to another embodiment of the present disclosure.
Steps S310 to S350 of the suspicious object identification method 300 are the same as steps S110 to S150 of the suspicious object identification method 100 described above, and are therefore not repeated here.
In step S360, the region where the suspicious object is located is determined.
Step S360 may include: extracting feature points in the partition map and the recognition map, respectively, and aligning the partition map and the recognition map according to the extracted feature points to determine the region where the suspicious object is located.
In step S370, the region where contraband is located is determined in each of the matching maps.
In step S380, candidate matching maps are determined in which the region where the contraband is located is the same as the region where the suspicious object is located.
In step S390, the suspicious object is determined based on the candidate matching maps.
Step S390 may include: if there is only one candidate matching map, determining the contraband in that candidate matching map as the suspicious object; if there is more than one candidate matching map, determining whether there is exactly one kind of contraband with the highest frequency of occurrence among the candidate matching maps; if so, determining that contraband as the suspicious object; and if there is no single most frequent kind of contraband (for example, at least two kinds of contraband occur with the same frequency in the candidate matching maps), determining to perform the scanning process again.
After step S390, the suspicious object identification method may further include: if a suspicious object is identified, marking the suspicious object with a special color and issuing a warning.
Fig. 4 shows a block diagram of a suspicious object identification device 400 according to an embodiment of the present disclosure. The suspicious object identification device 400 may include a scanning module 410, a partitioning module 420, a recognition module 430, a matching module 440, and a display module 450.
The scanning module 410 may be configured to scan a human body to obtain a scanned perspective view.
The partitioning module 420 may be configured to partition the scanned perspective view to obtain different regions, and to mark the different regions with different colors to obtain a partition map. The regions may be regions where organs of the human body are located. The partitioning module 420 may also be configured to: input the scanned perspective view into the first machine learning model to obtain the partition map, wherein the first machine learning model is trained using the human body perspective library as input and the partition perspective library as output, the partition perspective library being obtained by partitioning each human body perspective view in the human body perspective library by region and marking the regions with colors. The human body perspective library includes perspective views of human bodies in different states, including: body type, gender, age, physical condition (for example, intestinal flatulence, food digestion, etc.), whether a suspicious object is carried, the different kinds of suspicious objects carried, and so on.
The recognition module 430 may be configured to recognize the scanned perspective view to obtain a recognition map, which displays information (for example, position, shape, and size) about the suspicious object. The recognition module 430 may also be configured to: input the scanned perspective view into the second machine learning model to obtain the recognition map, wherein the second machine learning model is trained using the human body perspective views as input and the recognition perspective library as output, the recognition perspective library being obtained by marking each human body perspective view in the human body perspective library according to the information (for example, position, shape, and size) of the suspicious object.
The matching module 440 may be configured to match the scanned perspective view against the human body perspective library to find, in the human body perspective library, matching maps similar to the scanned perspective view.
The display module 450 may be configured to display the partition map, the recognition map, and the matching maps on a display.
The suspicious object identification device 400 may also include a determination module configured to: determine the region where the suspicious object is located; determine, in each of the matching maps, the region where contraband is located; determine candidate matching maps in which the region where the contraband is located is the same as the region where the suspicious object is located; and determine the suspicious object based on the candidate matching maps.
The determination module may also be configured to: extract feature points in the partition map and the recognition map, respectively; and align the partition map and the recognition map according to the extracted feature points to determine the region where the suspicious object is located.
The determination module may also be configured to: if there is only one candidate matching map, determine the contraband in that candidate matching map as the suspicious object; if there is more than one candidate matching map, determine whether there is exactly one kind of contraband with the highest frequency of occurrence among the candidate matching maps; if so, determine that contraband as the suspicious object; and if there is no single most frequent kind of contraband among the candidate matching maps, determine to perform the scanning process again.
In this assisted identification method, the scanned human body image is partitioned and color-labeled, and at the same time images similar to the scanned human body image are found in the human body perspective library as matching maps for comparative display. The suspicious objects in the image are then identified by the security personnel or automatically, thereby effectively assisting security inspectors in determining whether a suspicious object is present and, if so, obtaining information about it.
Fig. 5 schematically shows a suspicious object identification system 500 according to an embodiment of the present disclosure. The system 500 may include a processor 510, for example a digital signal processor (DSP). The processor 510 may be a single device or multiple devices for performing the different actions of the processes described herein. The system 500 may also include an input/output (I/O) device 530 for receiving signals from, or sending signals to, other entities.
In addition, the system 500 may include a memory 520, which may take the form of non-volatile or volatile memory, for example an electrically erasable programmable read-only memory (EEPROM), a flash memory, or the like. The memory 520 may store computer-readable instructions which, when executed by the processor 510, cause the processor to perform the actions described herein.
Some block diagrams and/or flowcharts are shown in the drawings. It should be understood that some blocks of the block diagrams and/or flowcharts, or combinations thereof, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that these instructions, when executed by the processor, create means for implementing the functions/operations illustrated in these block diagrams and/or flowcharts.
Therefore, the technology of the present disclosure can be implemented in the form of hardware and/or software (including firmware, microcode, etc.). In addition, the technology of the present disclosure can take the form of a computer program product on a computer-readable medium storing instructions, for use by, or in connection with, an instruction execution system (for example, one or more processors). In the context of the present disclosure, a computer-readable medium can be any medium that can contain, store, communicate, propagate, or transport instructions. For example, computer-readable media can include, but are not limited to, electric, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, devices, or propagation media. Specific examples of computer-readable media include: magnetic storage devices, such as magnetic tape or hard disks (HDD); optical storage devices, such as optical disks (CD-ROM); memory, such as random access memory (RAM) or flash memory; and/or wired/wireless communication links.
The above detailed description has set forth numerous embodiments of the suspicious object identification method, device, and system through the use of schematic diagrams, flowcharts, and/or examples. Where such diagrams, flowcharts, and/or examples contain one or more functions and/or operations, those skilled in the art will understand that each function and/or operation in such diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide variety of structures, hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter of the embodiments of the present disclosure can be implemented by application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (for example, as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (for example, as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and those skilled in the art will, in light of the present disclosure, be capable of designing the circuitry and/or writing the software and/or firmware code. In addition, those skilled in the art will recognize that the mechanisms of the subject matter described herein can be distributed as program products in a variety of forms, and that the exemplary embodiments described herein apply regardless of the particular type of signal bearing medium actually used to carry out the distribution. Examples of signal bearing media include, but are not limited to: recordable media, such as floppy disks, hard disk drives, compact disks (CDs), digital versatile disks (DVDs), digital tape, computer memory, etc.; and transmission media, such as digital and/or analog communication media (for example, fiber optic cables, waveguides, wired communication links, wireless communication links, etc.).

Claims (14)

  1. 一种嫌疑物识别方法,包括:
    对人体进行扫描以获得扫描透视图;
    对所述扫描透视图进行分区以获得不同的区域,并且用不同的颜色对所述不同的区域进行标记,以获得分区图;
    对所述扫描透视图进行识别以获得识别图,所述识别图中显示了嫌疑物的信息;
    对所述扫描透视图与人体透视图库进行匹配,以在所述人体透视图库中找到与所述扫描透视图相似的匹配图;以及
    在显示器上显示所述分区图、所述识别图和所述匹配图。
  2. 根据权利要求1所述的嫌疑物识别方法,还包括:
    确定所述嫌疑物所在的区域;
    在所述匹配图中每一个中确定违禁品所在的区域;
    确定所述匹配图中违禁品所在的区域与所述嫌疑物所在的区域相同的候选匹配图;以及
    根据所述候选匹配图来确定所述嫌疑物。
  3. 根据权利要求2所述的嫌疑物识别方法,其中,确定所述嫌疑物所在的区域包括:
    分别提取所述分区图和所述识别图中的特征点;以及
    根据所提取的特征点将所述分区图和所述识别图进行对准,以确定所述嫌疑物所在的区域。
  4. 根据权利要求2所述的嫌疑物识别方法,其中,根据所述候选匹配图来确定所述嫌疑物包括:
    如果存在仅一个候选匹配图,则将所述候选匹配图中的违禁品确定为所述嫌疑物;
    如果存在多于一个候选匹配图,则确定所述候选匹配图中是否存在出现频率最高的仅一种违禁品;
    如果确定所述候选匹配图中存在出现频率最高的仅一种违禁品,则将所述 违禁品确定为所述嫌疑物;以及
    如果确定所述候选匹配图中不存在出现频率最高的仅一种违禁品,则确定重新执行扫描过程。
  5. 根据权利要求1所述的嫌疑物识别方法,其中,对所述扫描透视图进行分区以获得不同的区域,并且用不同的颜色对所述不同的区域进行标记,以获得分区图包括:
    将所述扫描透视图输入到第一机器学习模型中以获得所述分区图,其中,所述第一机器学习模型是使用所述人体透视图库作为输入并且使用分区透视图库作为输出训练得到的,所述分区透视图库是通过按照区域对所述人体透视图库中的每一个人体透视图进行分区和用颜色加标记得到的。
  6. 根据权利要求1所述的嫌疑物识别方法,其中,对所述扫描透视图进行识别以获得识别图包括:
    将所述扫描透视图输入到第二机器学习模型以获得所述识别图,其中,所述第二机器学习模型是使用所述人体透视图作为输入并且使用识别透视图库作为输出训练得到的,所述识别透视图库是按照嫌疑物的位置、形状和大小对所述人体透视图库中的每一个人体透视图加标记得到的。
  7. 根据权利要求1至6中任一项所述的方法,其中,所述区域是所述人体的器官所在的区域,并且所述信息包括所述嫌疑物的位置、形状和大小。
  8. A suspect object identification apparatus, comprising:
    a scanning module configured to scan a human body to obtain a scanned perspective image;
    a partitioning module configured to partition the scanned perspective image to obtain different regions, and to mark the different regions with different colors, to obtain a partition map;
    a recognition module configured to perform recognition on the scanned perspective image to obtain a recognition map, the recognition map showing information about a suspect object;
    a matching module configured to match the scanned perspective image against a human-body perspective image library to find, in the human-body perspective image library, matching images similar to the scanned perspective image; and
    a display module configured to display the partition map, the recognition map, and the matching images on a display.
  9. The suspect object identification apparatus according to claim 8, further comprising a determination module configured to:
    determine the region in which the suspect object is located;
    determine, in each of the matching images, the region in which a contraband item is located;
    determine candidate matching images in which the region of the contraband item is the same as the region of the suspect object; and
    determine the suspect object from the candidate matching images.
  10. The suspect object identification apparatus according to claim 9, wherein the determination module is further configured to:
    extract feature points from the partition map and the recognition map, respectively; and
    align the partition map and the recognition map according to the extracted feature points, to determine the region in which the suspect object is located.
  11. The suspect object identification apparatus according to claim 9, wherein the determination module is further configured to:
    if there is only one candidate matching image, determine the contraband item in that candidate matching image to be the suspect object;
    if there is more than one candidate matching image, determine whether only one kind of contraband item occurs most frequently in the candidate matching images;
    if it is determined that only one kind of contraband item occurs most frequently in the candidate matching images, determine that contraband item to be the suspect object; and
    if it is determined that no single kind of contraband item occurs most frequently in the candidate matching images, determine that the scanning process is to be performed again.
  12. The suspect object identification apparatus according to claim 8, wherein the partitioning module is further configured to:
    input the scanned perspective image into a first machine learning model to obtain the partition map, wherein the first machine learning model is trained using the human-body perspective image library as input and a partitioned perspective image library as output, the partitioned perspective image library being obtained by partitioning each human-body perspective image in the human-body perspective image library by region and marking it with colors.
  13. The suspect object identification apparatus according to claim 8, wherein the recognition module is further configured to:
    input the scanned perspective image into a second machine learning model to obtain the recognition map, wherein the second machine learning model is trained using the human-body perspective image library as input and a recognition perspective image library as output, the recognition perspective image library being obtained by marking each human-body perspective image in the human-body perspective image library according to the position, shape, and size of a suspect object.
  14. The suspect object identification apparatus according to any one of claims 8 to 13, wherein the regions are regions in which organs of the human body are located, and the information includes the position, shape, and size of the suspect object.
PCT/CN2020/090973 2019-08-08 2020-05-19 Suspect object identification method, apparatus and system WO2021022865A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910733193.4A CN112347822B (zh) 2019-08-08 2019-08-08 Suspect object identification method, apparatus and system
CN201910733193.4 2019-08-08

Publications (1)

Publication Number Publication Date
WO2021022865A1 true WO2021022865A1 (zh) 2021-02-11

Family

ID=74367529

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/090973 WO2021022865A1 (zh) 2019-08-08 2020-05-19 Suspect object identification method, apparatus and system

Country Status (2)

Country Link
CN (1) CN112347822B (zh)
WO (1) WO2021022865A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021175126A (ja) * 2020-04-28 2021-11-01 キヤノン株式会社 分割パターン決定装置、分割パターン決定方法、学習装置、学習方法およびプログラム

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202693812U (zh) * 2011-12-30 2013-01-23 北京华航无线电测量研究所 一种基于人体轮廓显示的隐藏危险物品微波安检系统
US20170337447A1 (en) * 2016-05-17 2017-11-23 Steven Winn Smith Body Scanner with Automated Target Recognition
CN108549898A (zh) * 2018-03-20 2018-09-18 特斯联(北京)科技有限公司 一种用于安检透视的特定目标识别与增强的方法和系统

Also Published As

Publication number Publication date
CN112347822B (zh) 2022-07-19
CN112347822A (zh) 2021-02-09

Similar Documents

Publication Publication Date Title
US10878569B2 (en) Systems and methods for automatic detection of an indication of abnormality in an anatomical image
Tsehay et al. Convolutional neural network based deep-learning architecture for prostate cancer detection on multiparametric magnetic resonance images
Ciompi et al. Automatic classification of pulmonary peri-fissural nodules in computed tomography using an ensemble of 2D views and a convolutional neural network out-of-the-box
US9383347B2 (en) Pathological diagnosis results assessment system, pathological diagnosis results assessment method, and pathological diagnosis results assessment device
Qiu et al. Reproducibility and non-redundancy of radiomic features extracted from arterial phase CT scans in hepatocellular carcinoma patients: impact of tumor segmentation variability
Echegaray et al. Core samples for radiomics features that are insensitive to tumor segmentation: method and pilot study using CT images of hepatocellular carcinoma
Ruan et al. MB-FSGAN: Joint segmentation and quantification of kidney tumor on CT by the multi-branch feature sharing generative adversarial network
US10062161B2 (en) Endoscopic image diagnosis support system for computing average values of identification probabilities of pathological types
US20190385307A1 (en) System and method for structures detection and multi-class image categorization in medical imaging
WO2015010531A1 (zh) Human body security inspection method and human body security inspection system
JP2017509903A (ja) Cargo inspection method and system therefor
CN111242083B (zh) Artificial-intelligence-based text processing method, apparatus, device, and medium
Kendi et al. Head and neck PET/CT therapy response interpretation criteria (Hopkins criteria)-external validation study
WO2015010619A1 (zh) Privacy protection method for human body security inspection and human body security inspection system
Yang et al. Intelligent crack extraction based on terrestrial laser scanning measurement
WO2021022865A1 (zh) Suspect object identification method, apparatus and system
Zhang et al. Modeling false positive error making patterns in radiology trainees for improved mammography education
US20170367677A1 (en) Analysis method for breast image and electronic apparatus using the same
Mattikalli et al. Universal lesion detection in CT scans using neural network ensembles
Dubosclard et al. Automated visual grading of grain kernels by machine vision
CN203535244U (zh) Human body security inspection equipment
US9201901B2 (en) Techniques for generating a representative image and radiographic interpretation information for a case
Torkzadeh et al. Automatic visual inspection system for quality control of the sandwich panel and detecting the dipping and buckling of the surfaces
Karargyros et al. Saliency U-Net: A regional saliency map-driven hybrid deep learning network for anomaly segmentation
Moroianu et al. Detecting invasive breast carcinoma on dynamic contrast-enhanced MRI

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20850500

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20850500

Country of ref document: EP

Kind code of ref document: A1