CN117496175A - Detection object risk early warning method, radiation inspection method, device and equipment - Google Patents

Detection object risk early warning method, radiation inspection method, device and equipment

Info

Publication number
CN117496175A
Authority
CN
China
Prior art keywords
image
information
target frame
early warning
detection object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210874255.5A
Other languages
Chinese (zh)
Inventor
张丽
孙运达
孟凡华
傅罡
李强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nuctech Co Ltd
Original Assignee
Nuctech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nuctech Co Ltd filed Critical Nuctech Co Ltd
Priority to CN202210874255.5A
Publication of CN117496175A
Legal status: Pending (current)


Abstract

The disclosure provides a risk early warning method for a detection object, which can be applied to the fields of artificial intelligence and security inspection. The risk early warning method for the detection object comprises the following steps: extracting image features from an image to be detected to obtain target frame image feature information, wherein the image to be detected comprises a perspective image of the detection object; processing the target frame image feature information by using an image complexity evaluation model to obtain image complexity information for the image to be detected; and determining a risk early warning result for the detection object according to the image complexity information. The present disclosure also provides a radiation inspection method, apparatus, device, storage medium, and program product.

Description

Detection object risk early warning method, radiation inspection method, device and equipment
Technical Field
The present disclosure relates to the field of artificial intelligence and security technologies, and more particularly, to a detection object risk early warning method, a radiation inspection method, an apparatus, a device, a medium, and a program product.
Background
With the rapid development of international trade, transporting goods in containers has become a widely used logistics mode. During customs clearance of a loaded container, staff inspect the goods inside it, for example by examining perspective images of the container or by opening it, to confirm whether articles prohibited by relevant regulations are present, thereby safeguarding the life and property of the people involved.
In implementing the inventive concepts of the present disclosure, the inventors found that the inspection of prohibited articles in containers is inefficient, making it difficult to meet the ever-increasing demands of customs container inspection.
Disclosure of Invention
In view of the foregoing, the present disclosure provides a detection object risk early warning method, a radiation inspection method, an apparatus, a device, a medium, and a program product.
According to a first aspect of the present disclosure, there is provided a detection object risk early warning method, including:
extracting image features from an image to be detected to obtain target frame image feature information, wherein the image to be detected comprises a perspective image of a detection object;
processing the target frame image feature information by using an image complexity evaluation model to obtain image complexity information for the image to be detected; and
determining a risk early warning result for the detection object according to the image complexity information.
According to an embodiment of the present disclosure, performing image feature extraction on an image to be detected to obtain target frame image feature information includes:
extracting image features from the image to be detected by using a first feature extraction layer to obtain image feature information;
performing semantic division on the image feature information by using an image region division layer to obtain target frame information for the image to be detected; and
determining the target frame image feature information according to the target frame information and the image feature information.
According to an embodiment of the present disclosure, the first feature extraction layer includes a convolutional neural network layer, and the image region division layer includes a region candidate network layer.
According to an embodiment of the present disclosure, performing image feature extraction on an image to be detected to obtain target frame image feature information includes:
dividing the image to be detected by using a preset sliding window to obtain a target frame image; and
extracting image features from the target frame image to obtain the target frame image feature information.
According to an embodiment of the present disclosure, the image complexity evaluation model includes a feature encoding embedding layer, a target encoder layer constructed based on an attention mechanism, and an evaluation result output layer, which are sequentially connected;
processing the target frame image feature information by using the image complexity evaluation model to obtain the image complexity information for the image to be detected comprises:
inputting the target frame image feature information into the feature encoding embedding layer, and outputting target frame image feature encoding information;
processing the target frame image feature encoding information by using the target encoder layer to obtain an image complexity prediction intermediate value for the target frame image feature information; and
inputting the image complexity prediction intermediate value into the evaluation result output layer to obtain the image complexity information.
According to an embodiment of the present disclosure, the target encoder layer includes a neural network layer constructed based on a Transformer encoder; and
the evaluation result output layer comprises a neural network layer constructed based on a multi-layer perceptron.
According to an embodiment of the present disclosure, determining a risk early warning result for the detection object according to the image complexity information includes:
determining the risk early warning result as a first risk level early warning in the case that the image complexity information is greater than or equal to a preset complexity threshold.
According to an embodiment of the present disclosure, after the target encoder layer processes the target frame image feature encoding information, a predicted target frame image feature encoding for the target frame image feature information is also obtained;
The risk early warning method for the detection object further comprises the following steps:
processing the predicted target frame image feature encodings according to a preset clustering algorithm to obtain a clustering detection result, wherein the clustering detection result characterizes the kinds and numbers of articles in the space enclosed by the outer wall of the detection object;
determining the risk early warning result for the detection object according to the image complexity information further comprises:
determining the risk early warning result as a second risk level early warning in the case that the image complexity information is smaller than the preset complexity threshold and the clustering detection result is greater than or equal to a preset clustering detection threshold.
According to an embodiment of the present disclosure, the preset clustering algorithm includes at least one of:
a hierarchical clustering algorithm, a mean-shift clustering algorithm, a K-means clustering algorithm.
According to an embodiment of the present disclosure, after the target encoder layer processes the target frame image feature encoding information, predicted target frame image feature encodings for the target frame image feature information are obtained, where there are N predicted target frame image feature encodings and N is greater than or equal to 2;
the risk early warning method for the detection object further comprises the following steps:
determining risk area information of the image to be detected according to the difference distances between each of the N predicted target frame image feature encodings and the other predicted target frame image feature encodings;
determining the risk early warning result for the detection object according to the image complexity information further comprises:
determining the risk early warning result as a third risk level early warning in the case that the image complexity information is smaller than the preset complexity threshold and the amount of risk area information is greater than or equal to a preset risk area threshold.
According to an embodiment of the present disclosure, the difference distance includes at least one of:
Manhattan distance, Euclidean distance, Chebyshev distance, Mahalanobis distance, Hamming distance.
According to an embodiment of the present disclosure, the detection object includes a container.
According to an embodiment of the present disclosure, the image complexity information includes complexity score information for texture information of the image to be detected.
A second aspect of the present disclosure provides a radiation inspection method, comprising:
acquiring a perspective image of a detection object;
determining, for the perspective image of the detection object, a risk early warning result of the detection object by using the above detection object risk early warning method;
determining the image to be detected as a risk image in response to detecting a risk early warning result for the image to be detected; and
performing a secondary inspection on the risk image.
A third aspect of the present disclosure provides a detection object risk early warning apparatus, including:
an extraction module, configured to perform image feature extraction on an image to be detected to obtain target frame image feature information, wherein the image to be detected comprises a perspective image of a detection object;
an evaluation module, configured to process the target frame image feature information by using an image complexity evaluation model to obtain image complexity information for the image to be detected; and
a first determining module, configured to determine a risk early warning result for the detection object according to the image complexity information.
A fourth aspect of the present disclosure provides a radiation inspection device, comprising:
an acquisition module, configured to acquire a perspective image of a detection object;
an early warning module, configured to determine, for the perspective image of the detection object, a risk early warning result of the detection object by using the above detection object risk early warning method;
a risk image determining module, configured to determine the image to be detected as a risk image in response to detecting a risk early warning result for the image to be detected; and
an inspection module, configured to perform a secondary inspection on the risk image.
A fifth aspect of the present disclosure provides an electronic device, comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the above-described detection object risk early warning method or radiation inspection method.
The sixth aspect of the present disclosure also provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the above-described detection object risk early warning method or radiation inspection method.
A seventh aspect of the present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the above-described detection object risk early warning method or radiation inspection method.
Drawings
The foregoing and other objects, features and advantages of the disclosure will be more apparent from the following description of embodiments of the disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario diagram of a detection object risk early warning method and apparatus according to an embodiment of the disclosure;
FIG. 2 schematically illustrates a flow chart of a method of detecting object risk early warning according to an embodiment of the disclosure;
FIG. 3A schematically illustrates a schematic diagram of target frame image feature information according to an embodiment of the present disclosure;
FIG. 3B schematically illustrates a schematic diagram of target frame image feature information according to another embodiment of the present disclosure;
fig. 4 schematically illustrates an application scenario diagram of extracting image features from an image to be detected to obtain target frame image feature information according to an embodiment of the disclosure;
FIG. 5 schematically illustrates a flow chart of processing target frame image feature information using an image complexity assessment model to obtain image complexity information for an image to be detected, in accordance with an embodiment of the present disclosure;
FIG. 6 schematically illustrates an application scenario diagram of processing target frame image feature information using an image complexity evaluation model to obtain image complexity information for an image to be detected, according to an embodiment of the present disclosure;
fig. 7 schematically illustrates a block diagram of a detection object risk early-warning apparatus according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a block diagram of a radiation inspection device according to an embodiment of the present disclosure; and
fig. 9 schematically illustrates a block diagram of an electronic device adapted to implement a method of risk early warning of a detection object, a method of radiation inspection, according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where expressions like "at least one of A, B and C, etc." are used, the expressions should generally be interpreted in accordance with the meaning as commonly understood by those skilled in the art (e.g., "a system having at least one of A, B and C" shall include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and application of users' personal information all comply with the provisions of relevant laws and regulations, necessary security measures are taken, and public order and good morals are not violated.
In the technical scheme of the disclosure, the authorization or consent of the user is obtained before the personal information of the user is obtained or acquired.
In application scenarios related to import and export customs clearance, staff need to carry out daily inspection of import and export goods. Since a large amount of cargo is stored in a container, the relevant prohibited and restricted articles stored in the container need to be found quickly and accurately during the cargo inspection of the container.
At present, cargo in containers is mainly inspected by staff comprehensively examining perspective images of the containers. However, as global trade volumes continue to grow, the number of containers passing through customs increases, making it difficult to inspect them one by one, so some cargo in containers cannot be examined in detail. Moreover, such inspection relies heavily on the working experience of the staff, so inspection efficiency is low and inspection accuracy is difficult to keep stable.
The embodiments of the present disclosure provide a risk early warning method for a detection object, which comprises the following steps: extracting image features from an image to be detected to obtain target frame image feature information, wherein the image to be detected comprises a perspective image of the detection object; processing the target frame image feature information by using an image complexity evaluation model to obtain image complexity information for the image to be detected; and determining a risk early warning result for the detection object according to the image complexity information.
According to the embodiments of the present disclosure, by extracting image features from the image to be detected in which the detection object is recorded, the obtained one or more pieces of target frame image feature information can reflect the feature information of the detection object at a fine granularity. Processing the target frame image feature information with the image complexity evaluation model yields image complexity information that can at least partially reflect the compositional complexity of the detection object in the image to be detected. Determining the risk early warning result for the detection object according to the image complexity information can therefore improve the accuracy of identifying detection objects that contain prohibited or restricted articles, and inspecting the detection objects flagged as risky according to the early warning result can improve inspection efficiency and meet the practical demands of customs container inspection in related application scenarios.
Fig. 1 schematically illustrates an application scenario diagram of a detection object risk early warning method and apparatus according to an embodiment of the disclosure.
As shown in fig. 1, an application scenario 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that, the detection object risk early warning method provided in the embodiments of the present disclosure may be generally executed by the server 105. Accordingly, the detection object risk early warning device provided in the embodiments of the present disclosure may be generally disposed in the server 105. The detection object risk early warning method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the detection object risk early warning apparatus provided by the embodiments of the present disclosure may also be provided in a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The detection object risk early warning method of the disclosed embodiment will be described in detail with reference to fig. 2 to 6 based on the scenario described in fig. 1.
Fig. 2 schematically illustrates a flowchart of a detection object risk early warning method according to an embodiment of the present disclosure.
As shown in fig. 2, the detection object risk early warning method of this embodiment includes operations S210 to S230.
In operation S210, image feature extraction is performed on an image to be detected, so as to obtain target frame image feature information, where the image to be detected includes a perspective image of the detection object.
According to an embodiment of the present disclosure, the detection object may include a device with an enclosed compartment, such as a container or a piece of luggage, and the perspective image of the detection object may be an image of the interior space of the detection object acquired by a related perspective imaging device; for example, a perspective image of the interior of a container or a piece of luggage may be acquired by a related CT imaging device, thereby obtaining the image to be detected.
According to the embodiment of the disclosure, the image feature extraction can be performed on the image to be detected by using the target frame, so that an image in the target frame area is obtained, and the target frame image feature information can comprise image feature information capable of representing the image to be detected in the target frame area.
In operation S220, the image complexity evaluation model is used to process the feature information of the image of the target frame, so as to obtain the image complexity information for the image to be detected.
In operation S230, a risk early warning result for the detection object is determined according to the image complexity information.
According to embodiments of the present disclosure, the image complexity information may include information capable of characterizing the image complexity of the image to be detected, such as image complexity score information. The image complexity may include the complexity of the image texture information of the image to be detected, and the image complexity information may reflect the texture complexity of the entire image to be detected. The presence of prohibited or restricted articles in the detection object can then be predicted from the image complexity information, so determining the risk early warning result of the detection object according to the image complexity information can improve the accuracy of the related risk early warning.
According to the embodiments of the present disclosure, by extracting image features from the image to be detected in which the detection object is recorded, the obtained one or more pieces of target frame image feature information can reflect the feature information of the detection object at a fine granularity. Processing the target frame image feature information with the image complexity evaluation model yields image complexity information that can at least partially reflect the compositional complexity of the detection object in the image to be detected. Determining the risk early warning result for the detection object according to the image complexity information can therefore improve the accuracy of identifying detection objects that contain prohibited or restricted articles, and inspecting the detection objects flagged as risky according to the early warning result can improve inspection efficiency and meet the practical demands of customs container inspection in related application scenarios.
According to an embodiment of the present disclosure, the detection object includes a container.
According to an embodiment of the present disclosure, in the case where the detection object is a container, the image to be detected may be a perspective image of the container, that is, the articles stored inside the container may be recorded in the image to be detected.
According to an embodiment of the present disclosure, operation S210, performing image feature extraction on the image to be detected to obtain the target frame image feature information, may include the following operations.
Image features are extracted from the image to be detected by using a first feature extraction layer to obtain image feature information; semantic division is performed on the image feature information by using an image region division layer to obtain target frame information for the image to be detected; and the target frame image feature information is determined according to the target frame information and the image feature information.
According to an embodiment of the present disclosure, the first feature extraction layer may include a neural network layer, such as a convolutional neural network layer, or the like, constructed based on a neural network model. The first feature extraction layer may extract image feature information of the whole image to be detected.
According to the embodiment of the disclosure, the image region division layer can divide local regions of the image to be detected according to semantic information of the image, so that the image feature information can be divided into local regions by utilizing the image region division layer, and target frame information corresponding to the local regions is obtained. It should be understood that the target frame information output by the image area dividing layer has a correspondence relationship with the local area of the image to be detected.
According to an embodiment of the present disclosure, the target frame image feature information may include feature information characterizing the local region image corresponding to the target frame information in the image to be detected. A region of interest pooling (ROI Pooling) layer may be used to process the target frame information and the image feature information so as to obtain the target frame image feature information.
It should be appreciated that, in the case where the target frame information includes a plurality of pieces of target frame information, their respective sizes may differ; by processing the plurality of pieces of target frame information together with the image feature information using the region of interest pooling layer, the resulting pieces of target frame image feature information can have the same dimension, which facilitates their subsequent processing.
According to an embodiment of the present disclosure, the first feature extraction layer comprises a convolutional neural network layer, and the image region division layer comprises a region candidate network layer.
According to embodiments of the present disclosure, the region candidate network layer may include an RPN (Region Proposal Network) neural network layer. By processing the image feature information with the region candidate network layer, the obtained target frame information can fully capture the local image features related to the articles stored in the image to be detected, which in turn improves the accuracy with which the subsequent image complexity evaluation model evaluates the image complexity of the image to be detected.
It should be noted that, in the embodiment of the present disclosure, the number of hidden layers in the convolutional neural network layer and the size of the convolutional kernel are not limited, and those skilled in the art may select according to actual situations, so long as the image features of the image to be detected can be extracted.
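For illustration, the sketch below shows how such a first feature extraction path could be assembled in PyTorch: a small convolutional backbone produces the image feature information, the target frame information (which the described method would obtain from a region candidate network) is represented here by fixed boxes, and a region of interest pooling layer maps each target frame to a fixed-size target frame feature. The network, box values, and dimensions are illustrative assumptions, not the exact configuration of the disclosure.

```python
# A minimal sketch (not the disclosure's exact network) of the first feature
# extraction path: CNN backbone -> target frames -> ROI pooling.
import torch
import torch.nn as nn
from torchvision.ops import roi_pool

class SimpleBackbone(nn.Module):
    """Stand-in for the convolutional first feature extraction layer."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)

backbone = SimpleBackbone()
image = torch.randn(1, 1, 512, 1024)           # grayscale perspective image (placeholder)
feature_map = backbone(image)                  # image feature information

# Target frame information; in the described method this would come from a
# region proposal network (RPN), here replaced by fixed boxes for brevity.
# Each box is (batch_index, x1, y1, x2, y2) in input-image coordinates.
boxes = torch.tensor([[0, 10, 20, 200, 180],
                      [0, 250, 40, 480, 220]], dtype=torch.float32)

# ROI pooling maps differently sized frames to the same output size (7x7 here),
# so every target frame yields feature information of identical dimension.
frame_features = roi_pool(feature_map, boxes, output_size=(7, 7),
                          spatial_scale=feature_map.shape[-1] / image.shape[-1])
print(frame_features.shape)                    # (num_frames, 64, 7, 7)
```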
Fig. 3A schematically illustrates a schematic diagram of target frame image feature information according to an embodiment of the present disclosure.
As shown in fig. 3A, the image 300 to be detected may be a perspective image of a container, in which the articles stored in the container are recorded. The image feature information may be divided into regions by the image region division layer, so that the target frame information 3111 for the image 300 to be detected is obtained, and thus the local region image 311 of the image 300 to be detected corresponding to the target frame information 3111 is obtained.
For example, the target frame information 3111 and the image feature information may be input to the region-of-interest pooling layer, and the target frame image feature information corresponding to the local region image 311 may be obtained.
It should be appreciated that, by the same or similar method, it is also possible to obtain target frame information associated with each item stored in the container, and further obtain target frame image feature information corresponding to each target frame information.
According to an embodiment of the present disclosure, operation S210, performing image feature extraction on the image to be detected to obtain the target frame image feature information, may include the following operations.
The image to be detected is divided by using a preset sliding window to obtain a target frame image; and image features are extracted from the target frame image to obtain the target frame image feature information.
According to embodiments of the present disclosure, image feature extraction may be performed on the target frame image using a neural network model constructed based on a neural network, which may include, for example, a convolutional neural network model, a BERT model constructed based on an attention mechanism, and the like. The embodiment of the present disclosure does not limit the network structure of the neural network model for extracting the image features of the target frame image, and a person skilled in the art may select according to the actual situation.
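As a concrete illustration of the sliding-window variant, the sketch below divides an image into target frame images with an assumed window size and stride; the window, stride, and image sizes are placeholders, and each crop could then be fed to a feature extractor of the kind mentioned above.

```python
# A minimal sketch, under assumed window and stride sizes, of dividing the
# image to be detected with a preset sliding window to obtain target frame images.
import numpy as np

def sliding_window_crops(image, window=(128, 128), stride=(64, 64)):
    """Yield (y, x, crop) tuples for a 2-D image array."""
    h, w = image.shape[:2]
    win_h, win_w = window
    step_y, step_x = stride
    for y in range(0, h - win_h + 1, step_y):
        for x in range(0, w - win_w + 1, step_x):
            yield y, x, image[y:y + win_h, x:x + win_w]

perspective_image = np.random.rand(512, 1024)   # placeholder for a real scan
target_frame_images = [crop for _, _, crop in sliding_window_crops(perspective_image)]
print(len(target_frame_images))                 # number of target frame images
```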
Fig. 3B schematically illustrates a schematic diagram of target frame image feature information according to another embodiment of the present disclosure.
As shown in fig. 3B, the image 300 to be detected may be a perspective image of a container, and the objects stored in the container are recorded in the image 300 to be detected. The preset sliding window 320 may slide on the image 300 to be detected according to a preset sliding track, so that a sliding target frame 321 corresponding to the sliding track of the preset sliding window 320 may be obtained, and further, a local area image 3211 corresponding to the sliding target frame 321 may be obtained, where the local area image 3211 may be used as a target frame image.
It should be understood that other target frame images in the image to be detected may also be obtained by the same or similar method, which is not described herein.
Fig. 4 schematically illustrates an application scenario diagram of extracting image features from an image to be detected to obtain target frame image feature information according to an embodiment of the disclosure.
As shown in fig. 4, in this application scenario, the image to be detected 410 may include a perspective image for a container. The image to be detected is input to the first feature extraction layer 451, and the image features of the whole image to be detected 410 can be extracted, so as to obtain the image feature information 420.
The image feature information 420 is input to the image region division layer 452, and the image feature information 420 can be processed by utilizing the image region division layer 452, so that classification prediction of local regions is performed on the image to be detected corresponding to the image feature information, and target frame information 431, 432, 433 and 434 corresponding to the local regions is obtained.
The target frame information 431, 432, 433, 434 and the image feature information 420 are input to the region of interest pooling layer 453, and target frame image feature information 441, 442, 443, 444 having the same dimension can be obtained.
In this embodiment, the first feature extraction layer may include a convolutional neural network layer, and the image region division layer may include a region candidate network layer.
According to an embodiment of the present disclosure, the image complexity evaluation model includes a feature encoding embedding layer, a target encoder layer constructed based on an attention mechanism, and an evaluation result output layer, which are sequentially connected.
fig. 5 schematically illustrates a flowchart of processing target frame image feature information using an image complexity assessment model to obtain image complexity information for an image to be detected, according to an embodiment of the present disclosure.
As shown in fig. 5, in operation S220, processing the target frame image feature information by using the image complexity evaluation model to obtain the image complexity information for the image to be detected may include operations S510 to S530.
In operation S510, the target frame image feature information is input into the feature encoding embedding layer, and target frame image feature encoding information is output.
In operation S520, the target frame image feature encoding information is processed by the target encoder layer to obtain an image complexity prediction intermediate value for the target frame image feature information.
In operation S530, the image complexity prediction intermediate value is input to the evaluation result output layer to obtain image complexity information.
According to embodiments of the present disclosure, the feature encoding embedding layer may include, for example, a layer constructed based on an Embedding sub-layer and a Positional Encoding (position encoding) sub-layer. Processing the target frame image feature information with the feature encoding embedding layer allows the target frame image feature information to form a correspondence with a preset classification bit, which facilitates the subsequent processing of the target frame image feature encoding information by the target encoder layer.
According to an embodiment of the present disclosure, the target encoder layer includes a neural network layer built based on a Transformer encoder; the evaluation result output layer may include a neural network layer constructed based on a multi-layer perceptron.
According to embodiments of the present disclosure, the target encoder layer may include an attention sub-layer constructed based on an attention mechanism, for example a multi-head attention sub-layer, a multi-head self-attention sub-layer, and the like. By processing the target frame image feature encoding information with the attention mechanism of the target encoder layer, an image complexity prediction intermediate value corresponding to each piece of target frame image feature information can be obtained; this intermediate value can be used to preliminarily judge the texture complexity of the local region image corresponding to the target frame information in the image to be detected. The image complexity prediction intermediate values are then input into the evaluation result output layer, which comprehensively judges the texture complexity of the image to be detected according to the intermediate values, yielding the image complexity information of the image to be detected.
It should be noted that, each neural network model and each neural network layer in the embodiments of the present disclosure may be obtained by training using a supervised training method, an unsupervised training method, or a semi-supervised training method in the related art. Sample labels required in the training process can be obtained by manual labeling, and the sample labels can form corresponding relations with sample data.
Fig. 6 schematically illustrates an application scenario diagram of processing target frame image feature information using an image complexity assessment model to obtain image complexity information for an image to be detected according to an embodiment of the present disclosure.
As shown in fig. 6, the image complexity evaluation model 610 may include a feature encoding embedding layer 611, a target encoder layer 612, and an evaluation result output layer 613, which are sequentially connected.
In an embodiment of the present disclosure, the feature encoding embedding layer 611 may include a position-encoding neural network layer. The feature encoding embedding layer 611 may position-encode each piece of input target frame image feature information 621, 622, 623, 624 and embed a classification bit, used for distinguishing the image complexity, into the target frame image feature encoding information.
The specific position encoding process may be as follows: the obtained 4 pieces of target frame image feature information are input into the feature encoding embedding layer 611 to obtain corresponding target frame image feature encoding information, where each piece of target frame image feature encoding information may be a 1×1024-dimensional encoding vector. Meanwhile, the classification bit may also be represented by a 1×1024-dimensional encoding vector; the encoding vector representing the classification bit and the 4 encoding vectors representing the target frame image feature encoding information are combined into a 5×1024-dimensional encoding vector, in which the first dimension represents the classification bit and the other dimensions respectively represent the target frame image feature information. Position encoding information, which may itself be a 5×1024-dimensional encoding vector, is then added to this 5×1024-dimensional encoding vector.
The target frame image feature encoding information is input into the target encoder layer 612, which may output an image complexity prediction intermediate value corresponding to each piece of target frame image feature information. The image complexity prediction intermediate value may be encoding information having the same dimension as the encoding information input into the target encoder layer 612, that is, the target frame image feature encoding information and the predicted target frame image feature encodings may both be 1×1024-dimensional encoding vectors. The evaluation result output layer 613 then processes the one or more image complexity prediction intermediate values, so that the image complexity information of the image to be detected is obtained.
In embodiments of the present disclosure, the target encoder layer may include a neural network layer built based on a Transformer encoder, which may include a multi-head attention sub-layer and a multi-layer perceptron sub-layer connected in sequence. The number of heads of the multi-head attention sub-layer may be selected according to actual requirements and may, for example, be 8.
It should be noted that, in the neural network model and the neural network layer in the embodiments of the present disclosure, the relevant pooling layer and/or full-connection layer may be set according to actual requirements, and those skilled in the art may select according to actual requirements, which is not described in detail in the embodiments of the present disclosure.
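To make the pipeline concrete, the sketch below is one possible PyTorch reading of the image complexity evaluation model described above: a learned classification bit and positional encodings are added to the target frame image feature encodings, a Transformer encoder with 8 heads processes the resulting 5×1024 sequence, and a multi-layer perceptron head maps the classification-bit output to a complexity score, while the remaining positions serve as the predicted target frame image feature encodings. The layer count, the MLP width, and the use of the classification bit as the score carrier are assumptions rather than details stated in the disclosure.

```python
# A minimal sketch of an image complexity evaluation model:
# embedding with a classification bit + positional encodings -> Transformer
# encoder -> MLP evaluation result output layer.
import torch
import torch.nn as nn

class ComplexityEvaluator(nn.Module):
    def __init__(self, dim=1024, num_frames=4, heads=8, layers=2):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))        # classification bit
        self.pos_embed = nn.Parameter(torch.zeros(1, num_frames + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.head = nn.Sequential(                                    # multi-layer perceptron
            nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, frame_features):
        # frame_features: (batch, num_frames, dim) target frame image feature encodings
        b = frame_features.shape[0]
        tokens = torch.cat([self.cls_token.expand(b, -1, -1), frame_features], dim=1)
        tokens = tokens + self.pos_embed                              # add position encoding
        encoded = self.encoder(tokens)                                # 5 x 1024 per sample
        complexity_intermediate = encoded[:, 0]                       # classification-bit output
        frame_encodings = encoded[:, 1:]                              # predicted frame encodings
        complexity_score = self.head(complexity_intermediate)         # image complexity info
        return complexity_score, frame_encodings

model = ComplexityEvaluator()
scores, frame_encodings = model(torch.randn(1, 4, 1024))
print(scores.shape, frame_encodings.shape)      # (1, 1) and (1, 4, 1024)
```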
According to an embodiment of the present disclosure, determining a risk early warning result for a detection object according to image complexity information may include the following operations.
The risk early warning result is determined as a first risk level early warning in the case that the image complexity information is greater than or equal to a preset complexity threshold.
According to the embodiments of the present disclosure, the image complexity information may include a scoring result for the texture complexity of the image to be detected. When this scoring result is greater than or equal to the preset complexity threshold, it can be predicted that the articles stored in the detection object (for example, a container) are cluttered and that prohibited or restricted articles can easily be concealed among them; determining the corresponding detection object as a first risk level early warning therefore improves the accuracy of identifying detection objects in which prohibited or restricted articles are stored. In the application scenario of customs inspection, detection objects suspected of being risky can be quickly identified according to the first risk level early warning, thereby improving inspection efficiency.
It should be noted that, a person skilled in the art may set the preset complexity threshold according to an actual situation, and the specific setting result of the preset complexity threshold in the embodiment of the present disclosure is not limited.
According to an embodiment of the present disclosure, after processing the target frame image feature encoding information with the target encoder layer, a predicted target frame image feature encoding for the target frame image feature information is also obtained.
The detection object risk early warning method may further include the following operations.
The predicted target frame image feature encodings are processed according to a preset clustering algorithm to obtain a clustering detection result, wherein the clustering detection result characterizes the kinds and numbers of articles in the space enclosed by the outer wall of the detection object.
Determining the risk early warning result for the detection object according to the image complexity information may further include the following operations.
The risk early warning result is determined as a second risk level early warning in the case that the image complexity information is smaller than the preset complexity threshold and the clustering detection result is greater than or equal to a preset clustering detection threshold.
According to an embodiment of the present disclosure, the predicted target frame image feature encodings may be the predicted image features obtained by processing the target frame feature information with the target encoder layer in the above embodiment. By processing the predicted target frame image feature encodings with the preset clustering algorithm, the articles stored in the detection object can be clustered, so that the kinds and numbers of articles can be obtained from the clustering detection result. When the image complexity information is smaller than the preset complexity threshold, the probability that prohibited or restricted articles are stored in the detection object can be predicted from the clustering detection result: when the clustering detection result is greater than or equal to the preset clustering detection threshold, it can be predicted that many kinds of articles are stored in the detection object and hence that the probability of prohibited or restricted articles being stored is higher, so a second risk level early warning can be issued for the detection object, further improving the accuracy of identifying detection objects in which prohibited or restricted articles are stored.
It should be noted that, a person skilled in the art may set the preset cluster detection threshold according to an actual situation, and the embodiment of the present disclosure does not limit a specific setting result of the preset cluster detection threshold.
According to an embodiment of the present disclosure, the preset clustering algorithm includes at least one of: a hierarchical clustering algorithm, a mean-shift clustering algorithm, a K-means clustering algorithm.
In one embodiment of the disclosure, a hierarchical clustering algorithm may be used as a preset clustering algorithm to improve accuracy of a clustering detection result.
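A minimal sketch of this clustering step, assuming scikit-learn's agglomerative (hierarchical) clustering, is shown below; the distance threshold and linkage are placeholders, and the number of clusters found stands in for the clustering detection result that is compared against the preset clustering detection threshold.

```python
# A minimal sketch of clustering predicted target frame image feature
# encodings with a hierarchical (agglomerative) clustering algorithm.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

frame_encodings = np.random.rand(12, 1024)      # N predicted target frame encodings (placeholder)

clustering = AgglomerativeClustering(n_clusters=None, distance_threshold=5.0,
                                     linkage="average")
labels = clustering.fit_predict(frame_encodings)

cluster_detection_result = len(set(labels))     # estimated number of article kinds
counts = {c: int((labels == c).sum()) for c in set(labels)}  # articles per kind
print(cluster_detection_result, counts)
```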
According to the embodiments of the present disclosure, after the target frame image feature encoding information is processed by the target encoder layer, predicted target frame image feature encodings for the target frame image feature information are also obtained, where there are N predicted target frame image feature encodings and N is greater than or equal to 2.
The detection object risk early warning method may further include the following operations.
Risk area information of the image to be detected is determined according to the difference distances between each of the N predicted target frame image feature encodings and the other predicted target frame image feature encodings.
According to the image complexity information, determining the risk early warning result for the detection object further comprises the following operation.
The risk early warning result is determined as a third risk level early warning in the case that the image complexity information is smaller than the preset complexity threshold and the amount of risk area information is greater than or equal to a preset risk area threshold.
According to the embodiments of the present disclosure, the difference distance between each predicted target frame image feature encoding and the other predicted target frame image feature encodings can be obtained by a related difference distance algorithm, and an average difference distance corresponding to each predicted target frame image feature encoding is obtained from its associated difference distances. When the average difference distance is greater than or equal to a preset difference distance threshold, the predicted target frame image feature encoding corresponding to that average difference distance may be determined as risk image feature information. The risk area information of the image to be detected is then determined according to the number of pieces of risk image feature information.
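The sketch below illustrates this computation under the assumption that the Euclidean distance is used as the difference distance; the threshold value and array sizes are placeholders.

```python
# A minimal sketch of determining risk area information: for each predicted
# target frame image feature encoding, compute its mean Euclidean distance to
# the other encodings; encodings whose mean distance reaches an assumed
# difference-distance threshold are flagged as risk image feature information.
import numpy as np

def risk_area_indices(frame_encodings, distance_threshold):
    n = len(frame_encodings)
    risky = []
    for i in range(n):
        dists = [np.linalg.norm(frame_encodings[i] - frame_encodings[j])
                 for j in range(n) if j != i]    # Euclidean difference distances
        if np.mean(dists) >= distance_threshold:
            risky.append(i)                      # index of a risk area
    return risky

encodings = np.random.rand(8, 1024)              # placeholder predicted encodings
risk_areas = risk_area_indices(encodings, distance_threshold=12.0)
print(len(risk_areas))                           # amount of risk area information
```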
When the image complexity information is smaller than the preset complexity threshold and the amount of risk area information is greater than or equal to the preset risk area threshold, it can be predicted that the articles in the detection object differ from one another to a large degree; determining the corresponding risk early warning result as a third risk level early warning therefore further improves the accuracy of identifying detection objects in which prohibited or restricted articles are stored.
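Putting the three conditions together, the sketch below shows one way the risk early warning result could be decided; the threshold values are placeholders and the ordering of the checks follows the conditions described above.

```python
# A minimal sketch combining the three early warning conditions; thresholds
# are illustrative placeholders, not values prescribed by the disclosure.
def risk_early_warning(complexity, num_kinds, num_risk_areas,
                       complexity_th=0.8, cluster_th=6, risk_area_th=3):
    if complexity >= complexity_th:
        return "first risk level"                # complex, cluttered image
    if num_kinds >= cluster_th:
        return "second risk level"               # many kinds of articles
    if num_risk_areas >= risk_area_th:
        return "third risk level"                # many mutually dissimilar regions
    return "no early warning"

print(risk_early_warning(complexity=0.42, num_kinds=7, num_risk_areas=1))
```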
According to the embodiments of the present disclosure, the risky local region images can be determined from the correspondence between the risk image feature information and the local region images in the image to be detected. By determining the risky local region images, the relevant personnel can quickly locate the regions of the detection object that may contain prohibited or restricted articles, which comprehensively improves the efficiency of customs clearance inspection of containers in related application scenarios.
According to an embodiment of the present disclosure, the difference distance comprises at least one of:
manhattan distance, euclidean distance, chebyshev distance, mahalanobis distance, hamming distance.
In an embodiment of the present disclosure, the Euclidean distance may be selected as the difference distance, so as to improve the computation speed of the device executing the detection object risk early warning method.
Embodiments of the present disclosure also provide a radiation inspection method that may include the following operations.
A perspective image of a detection object is acquired; for the perspective image of the detection object, a risk early warning result of the detection object is determined by using the above detection object risk early warning method; the image to be detected is determined as a risk image in response to detecting a risk early warning result for the image to be detected; and a secondary inspection is performed on the risk image.
According to embodiments of the present disclosure, the detection object may include a container, a piece of luggage, or the like. In related application scenarios, the radiation inspection method provided by the embodiments of the present disclosure can quickly determine a risk image, and a detection object at risk of storing prohibited articles can be determined from the risk image. This improves the inspection accuracy for detection objects such as containers and luggage, and performing a secondary inspection on the risk image improves inspection efficiency.
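As an overview, the sketch below chains these steps into a single runnable flow; all function names are hypothetical placeholders standing in for scanner I/O, the early warning method above, and the secondary inspection step.

```python
# A minimal, runnable sketch of the radiation inspection flow: acquire a
# perspective image, run the risk early warning, mark risk images, and flag
# them for secondary inspection. All functions are hypothetical stand-ins.
def acquire_perspective_image(detection_object):
    return {"object": detection_object, "pixels": None}     # scanner output placeholder

def detection_object_risk_early_warning(image):
    return "first risk level"                                # placeholder result

def radiation_inspection(detection_object):
    image = acquire_perspective_image(detection_object)
    warning = detection_object_risk_early_warning(image)
    if warning != "no early warning":
        image["is_risk_image"] = True                        # mark as risk image
        return f"secondary inspection required: {warning}"
    return "released"

print(radiation_inspection("container-0001"))
```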
Based on the detection object risk early warning method, the disclosure also provides a detection object risk early warning device. The device will be described in detail below in connection with fig. 7.
Fig. 7 schematically illustrates a block diagram of a detection object risk early-warning device according to an embodiment of the present disclosure.
As shown in fig. 7, the detection object risk early-warning device 700 of this embodiment includes an extraction module 710, an evaluation module 720, and a first determination module 730.
The extraction module 710 is configured to perform image feature extraction on an image to be detected to obtain target frame image feature information, where the image to be detected includes a perspective image of the detection object.
The evaluation module 720 is configured to process the target frame image feature information by using the image complexity evaluation model to obtain image complexity information for the image to be detected.
The first determining module 730 is configured to determine a risk early warning result for the detection object according to the image complexity information.
According to an embodiment of the present disclosure, the extraction module includes: the device comprises a first extraction unit, a first classification unit and a first determination unit.
The first extraction unit is used for extracting image features of the image to be detected by using the first feature extraction layer to obtain image feature information.
The first classification unit is configured to perform semantic division on the image feature information by using the image region division layer to obtain target frame information for the image to be detected.
The first determining unit is used for determining target frame image characteristic information according to the target frame information and the image characteristic information.
According to an embodiment of the present disclosure, the first feature extraction layer comprises a convolutional neural network layer, and the image region division layer comprises a region candidate network layer.
According to an embodiment of the present disclosure, the extraction module includes: a dividing unit and a second extracting unit.
The dividing unit is used for dividing the image to be detected by utilizing a preset sliding window to obtain a target frame image.
The second extraction unit is used for extracting image features of the target frame image to obtain target frame image feature information.
According to an embodiment of the present disclosure, the image complexity evaluation model includes a feature encoding embedding layer, a target encoder layer constructed based on an attention mechanism, and an evaluation result output layer connected in sequence.
The evaluation module comprises: a feature encoding unit, an image classifying unit and an evaluating unit.
The feature encoding unit is configured to input the target frame image feature information into the feature encoding embedding layer and output the target frame image feature encoding information.
The image classification unit is configured to process the target frame image feature encoding information by using the target encoder layer to obtain an image complexity prediction intermediate value for the target frame image feature information.
The evaluation unit is used for inputting the image complexity prediction intermediate value to the evaluation result output layer to obtain the image complexity information.
According to an embodiment of the present disclosure, the target encoder layer includes a neural network layer built based on a Transformer encoder; and the evaluation result output layer comprises a neural network layer constructed based on a multi-layer perceptron.
According to an embodiment of the disclosure, the first determination module comprises a first determining unit.
The first determining unit is used for determining the risk early warning result as a first risk level early warning when the image complexity information is greater than or equal to a preset complexity threshold value.
According to an embodiment of the present disclosure, after processing the target frame image feature encoding information with the target encoder layer, a predicted target frame image feature encoding for the target frame image feature information is also obtained.
The detection object risk early warning device further comprises a clustering module.
The clustering module is used for processing the predicted target frame image feature codes according to a preset clustering algorithm to obtain a clustering detection result, wherein the clustering detection result represents the types and numbers of objects in the interior space enclosed by the outer wall of the detection object.
The first determination module further includes a second determination unit.
The second determining unit is configured to determine the risk early warning result as a second risk level early warning when the image complexity information is smaller than a preset complexity threshold and the clustering detection result is greater than or equal to a preset clustering detection threshold.
According to an embodiment of the present disclosure, the preset clustering algorithm includes at least one of: a hierarchical clustering algorithm, a mean shift clustering algorithm, and a K-means clustering algorithm.
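By way of non-limiting illustration, the following sketch applies K-means, one of the listed options, to hypothetical predicted target frame image feature codes to derive a clustering detection result; the number of clusters and the random stand-in data are assumptions of this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans


def clustering_detection_result(frame_encodings: np.ndarray, n_clusters: int = 4):
    """Cluster predicted target frame feature codes; return (#categories, frames per category)."""
    n_clusters = min(n_clusters, len(frame_encodings))
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(frame_encodings)
    counts = np.bincount(labels, minlength=n_clusters)
    return int((counts > 0).sum()), counts


# Usage with random stand-ins for 20 predicted frame encodings of dimension 128.
kinds, per_kind = clustering_detection_result(np.random.rand(20, 128))
print(kinds, per_kind)  # e.g. 4 object categories and the number of frames in each
```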
According to the embodiment of the disclosure, after the target frame image feature encoding information is processed by the target encoder layer, predicted target frame image feature encodings for the target frame image feature information are also obtained, wherein there are N predicted target frame image feature encodings and N is greater than or equal to 2.
The detection object risk early warning device further comprises a third determining module.
The third determining module is used for determining risk area information of the image to be detected according to difference distances between each predicted target frame image feature code and other predicted target frame image feature codes in the N predicted target frame image feature codes.
The first determination module further includes a third determination unit.
The third determining unit is configured to determine the risk early warning result as a third risk level early warning when the image complexity information is smaller than a preset complexity threshold and the number of pieces of risk area information is greater than or equal to a preset risk area threshold.
According to an embodiment of the present disclosure, the differential distance comprises at least one of: manhattan distance, euclidean distance, chebyshev distance, mahalanobis distance, hamming distance.
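By way of non-limiting illustration, the following sketch computes pairwise Euclidean difference distances, one of the listed options, between hypothetical predicted target frame image feature codes and flags frames whose average distance to the others is unusually large as risk area information; the mean-plus-two-standard-deviations rule is an assumption of this sketch, not a criterion stated in the disclosure.

```python
import numpy as np
from scipy.spatial.distance import cdist


def risk_area_indices(frame_encodings: np.ndarray) -> np.ndarray:
    """Return indices of target frames whose encoding is far from all the others."""
    d = cdist(frame_encodings, frame_encodings, metric="euclidean")  # (N, N) difference distances
    np.fill_diagonal(d, np.nan)                      # ignore each frame's distance to itself
    mean_dist = np.nanmean(d, axis=1)                # average distance to the other frames
    threshold = mean_dist.mean() + 2.0 * mean_dist.std()
    return np.where(mean_dist >= threshold)[0]       # frames treated as risk area information


# Usage with random stand-ins for 20 predicted frame encodings of dimension 128.
print(risk_area_indices(np.random.rand(20, 128)))
```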
According to an embodiment of the present disclosure, the detection object includes a container.
According to an embodiment of the present disclosure, the image complexity information includes complexity score information for texture information of the image to be detected.
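By way of non-limiting illustration, the following sketch combines the complexity score, the clustering detection result, and the number of risk areas into the tiered early warning described above; the threshold values are placeholders rather than values fixed by the disclosure.

```python
def risk_early_warning(complexity: float,
                       clustering_result: int,
                       risk_area_count: int,
                       complexity_thr: float = 0.7,
                       clustering_thr: int = 3,
                       area_thr: int = 2) -> str:
    """Map the three intermediate quantities to a tiered risk early warning result."""
    if complexity >= complexity_thr:
        return "first risk level early warning"
    if clustering_result >= clustering_thr:
        return "second risk level early warning"
    if risk_area_count >= area_thr:
        return "third risk level early warning"
    return "no early warning"


print(risk_early_warning(0.45, 5, 1))  # -> "second risk level early warning"
```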
Fig. 8 schematically illustrates a block diagram of a radiation inspection device according to an embodiment of the present disclosure.
As shown in fig. 8, the radiation inspection device 800 of this embodiment includes an acquisition module 810, an early warning module 820, a risk image determination module 830, and an inspection module 840.
The acquisition module 810 is configured to acquire a perspective image of the detection object.
The early warning module 820 is configured to determine a risk early warning result of the detection object from the perspective image of the detection object by using the detection object risk early warning method described above.
The risk image determining module 830 is configured to determine, in response to detecting a risk early warning result for an image to be detected, the image to be detected as a risk image.
The inspection module 840 is used to perform a secondary inspection of the risk image.
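By way of non-limiting illustration, the following sketch strings the radiation inspection flow together: run the early warning on a perspective image, treat the image as a risk image when a warning is raised, and hand it to a secondary inspection; the helper callables are hypothetical stand-ins for the modules described above.

```python
def radiation_inspect(perspective_image, warn_fn, secondary_inspect_fn) -> bool:
    """Run the risk early warning on one perspective image and route risk images."""
    result = warn_fn(perspective_image)                   # detection object risk early warning
    if result != "no early warning":
        secondary_inspect_fn(perspective_image, result)   # the image is handled as a risk image
        return True                                       # sent for secondary inspection
    return False


# Usage with trivial stand-ins for the early warning and secondary inspection steps.
flagged = radiation_inspect(
    object(),
    warn_fn=lambda img: "second risk level early warning",
    secondary_inspect_fn=lambda img, res: print("secondary inspection:", res),
)
print(flagged)  # True
```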
According to an embodiment of the present disclosure, any of the plurality of modules of the extraction module 710, the evaluation module 720, the first determination module 730, the acquisition module 810, the early warning module 820, the risk image determination module 830, and the inspection module 840 may be combined in one module to be implemented, or any of the plurality of modules may be split into a plurality of modules. Alternatively, at least some of the functionality of one or more of the modules may be combined with at least some of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the extraction module 710, the evaluation module 720, the first determination module 730, the acquisition module 810, the pre-warning module 820, the risk image determination module 830, and the inspection module 840 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, an Application Specific Integrated Circuit (ASIC), or as hardware or firmware in any other reasonable manner of integrating or packaging the circuitry, or as any one of or a suitable combination of any of the three implementations of software, hardware, and firmware. Alternatively, at least one of the extraction module 710, the evaluation module 720, the first determination module 730, the acquisition module 810, the pre-warning module 820, the risk image determination module 830, and the inspection module 840 may be at least partially implemented as a computer program module, which when executed, may perform the corresponding functions.
Fig. 9 schematically illustrates a block diagram of an electronic device adapted to implement a method of risk early warning of a detection object, a method of radiation inspection, according to an embodiment of the disclosure.
As shown in fig. 9, an electronic device 900 according to an embodiment of the present disclosure includes a processor 901 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage portion 908 into a Random Access Memory (RAM) 903. The processor 901 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 901 may also include on-board memory for caching purposes. Processor 901 may include a single processing unit or multiple processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 903, various programs and data necessary for the operation of the electronic device 900 are stored. The processor 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. The processor 901 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 902 and/or the RAM 903. Note that the program may be stored in one or more memories other than the ROM 902 and the RAM 903. The processor 901 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the disclosure, the electronic device 900 may also include an input/output (I/O) interface 905, the input/output (I/O) interface 905 also being connected to the bus 904. The electronic device 900 may also include one or more of the following components connected to the I/O interface 905: an input section 906 including a keyboard, a mouse, and the like; an output portion 907 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 908 including a hard disk or the like; and a communication section 909 including a network interface card such as a LAN card, a modem, or the like. The communication section 909 performs communication processing via a network such as the internet. The drive 910 is also connected to the I/O interface 905 as needed. A removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on the drive 910 so that a computer program read out therefrom is installed into the storage section 908 as needed.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 902 and/or RAM 903 and/or one or more memories other than ROM 902 and RAM 903 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. The program code, when executed in a computer system, causes the computer system to perform the methods provided by embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 901. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed over a network medium in the form of a signal, downloaded and installed through the communication section 909, and/or installed from the removable medium 911. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to wireless or wired media, or any suitable combination of the foregoing.
According to embodiments of the present disclosure, program code for carrying out the computer programs provided by the embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or in assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, C, or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be provided in a variety of combinations and/or combinations, even if such combinations or combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be variously combined and/or combined without departing from the spirit and teachings of the present disclosure. All such combinations and/or combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (19)

1. A method of detecting object risk early warning, comprising:
extracting image features of an image to be detected to obtain target frame image feature information, wherein the image to be detected comprises a perspective image of a detection object;
processing the target frame image feature information by using an image complexity evaluation model to obtain image complexity information for the image to be detected; and
determining a risk early warning result for the detection object according to the image complexity information.
2. The method of claim 1, wherein performing image feature extraction on the image to be detected to obtain the target frame image feature information comprises:
extracting image features of the image to be detected by using a first feature extraction layer to obtain image feature information;
performing semantic segmentation on the image feature information by using an image region segmentation layer to obtain target frame information for the image to be detected; and
determining the target frame image feature information according to the target frame information and the image feature information.
3. The method of claim 2, wherein the first feature extraction layer comprises a convolutional neural network layer and the image region segmentation layer comprises a region candidate network layer.
4. The method of claim 1, wherein performing image feature extraction on the image to be detected to obtain the target frame image feature information comprises:
dividing the image to be detected by using a preset sliding window to obtain a target frame image; and
extracting image features of the target frame image to obtain the target frame image feature information.
5. The method of claim 1, wherein the image complexity evaluation model comprises a feature encoding embedding layer, a target encoder layer constructed based on an attention mechanism, and an evaluation result output layer connected in sequence;
wherein processing the target frame image feature information by using the image complexity evaluation model to obtain the image complexity information for the image to be detected comprises:
inputting the target frame image feature information into the feature encoding embedding layer and outputting target frame image feature encoding information;
processing the target frame image feature encoding information by using the target encoder layer to obtain an image complexity prediction intermediate value of the target frame image feature information; and
inputting the image complexity prediction intermediate value into the evaluation result output layer to obtain the image complexity information.
6. The method of claim 5, wherein,
the target encoder layer comprises a neural network layer constructed based on a Transformer encoder; and
the evaluation result output layer comprises a neural network layer constructed based on a multi-layer perceptron.
7. The method of any one of claims 1 to 5, wherein determining a risk early warning result for the detection object according to the image complexity information comprises:
determining the risk early warning result as a first risk level early warning under the condition that the image complexity information is greater than or equal to a preset complexity threshold value.
8. The method of claim 5, wherein, after the target frame image feature encoding information is processed by using the target encoder layer, a predicted target frame image feature encoding for the target frame image feature information is also obtained;
the detection object risk early warning method further comprises the following steps:
processing the predicted target frame image feature encoding according to a preset clustering algorithm to obtain a clustering detection result, wherein the clustering detection result represents the types and numbers of objects in the interior space enclosed by the outer wall of the detection object;
according to the image complexity information, determining the risk early warning result for the detection object further comprises:
determining the risk early warning result as a second risk level early warning under the condition that the image complexity information is smaller than a preset complexity threshold value and the clustering detection result is greater than or equal to a preset clustering detection threshold value.
9. The method of claim 8, wherein the preset clustering algorithm comprises at least one of:
a hierarchical clustering algorithm, a mean shift clustering algorithm, and a K-means clustering algorithm.
10. The method of claim 5, wherein, after the target frame image feature encoding information is processed by using the target encoder layer, predicted target frame image feature encodings for the target frame image feature information are also obtained, wherein there are N predicted target frame image feature encodings and N is greater than or equal to 2;
the detection object risk early warning method further comprises the following steps:
determining risk area information of the image to be detected according to difference distances between each predicted target frame image feature encoding and the other predicted target frame image feature encodings among the N predicted target frame image feature encodings;
according to the image complexity information, determining the risk early warning result for the detection object further comprises:
determining the risk early warning result as a third risk level early warning under the condition that the image complexity information is smaller than a preset complexity threshold value and the number of pieces of risk area information is greater than or equal to a preset risk area threshold value.
11. The method of claim 10, wherein the differential distance comprises at least one of:
Manhattan distance, Euclidean distance, Chebyshev distance, Mahalanobis distance, and Hamming distance.
12. The method of claim 1, wherein the detection object comprises a container.
13. The method of claim 1, wherein the image complexity information comprises complexity scoring information for texture information of the image to be detected.
14. A radiation inspection method, comprising:
Acquiring a perspective image of a detection object;
determining a risk early warning result of the detection object for the perspective image of the detection object by using the detection object risk early warning method according to any one of claims 1 to 13;
in response to detecting a risk early warning result for an image to be detected, determining the image to be detected as a risk image; and
performing a secondary inspection on the risk image.
15. A detection object risk early warning device, comprising:
the extraction module is used for extracting image features of an image to be detected to obtain target frame image feature information, wherein the image to be detected comprises a perspective image of a detection object;
the evaluation module is used for processing the target frame image feature information by using an image complexity evaluation model to obtain image complexity information for the image to be detected; and
the first determining module is used for determining a risk early warning result for the detection object according to the image complexity information.
16. A radiation inspection device, comprising:
the acquisition module is used for acquiring a perspective image of the detection object;
the early warning module is used for determining a risk early warning result of the detection object for the perspective image of the detection object by using the detection object risk early warning method according to any one of claims 1 to 13;
the risk image determining module is used for determining the image to be detected as a risk image in response to detecting a risk early warning result for the image to be detected; and
the inspection module is used for performing a secondary inspection on the risk image.
17. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-14.
18. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method according to any of claims 1 to 14.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 14.
CN202210874255.5A 2022-07-22 2022-07-22 Detection object risk early warning method, radiation inspection method, device and equipment Pending CN117496175A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210874255.5A CN117496175A (en) 2022-07-22 2022-07-22 Detection object risk early warning method, radiation inspection method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210874255.5A CN117496175A (en) 2022-07-22 2022-07-22 Detection object risk early warning method, radiation inspection method, device and equipment

Publications (1)

Publication Number Publication Date
CN117496175A 2024-02-02

Family

ID=89667767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210874255.5A Pending CN117496175A (en) 2022-07-22 2022-07-22 Detection object risk early warning method, radiation inspection method, device and equipment

Country Status (1)

Country Link
CN (1) CN117496175A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination