CN109886970B - Detection segmentation method for target object in terahertz image and computer storage medium - Google Patents
Detection segmentation method for target object in terahertz image and computer storage medium
- Publication number
- CN109886970B CN201910048648.9A CN201910048648A
- Authority
- CN
- China
- Prior art keywords
- generator
- segmentation
- target object
- detection
- terahertz image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 230000011218 segmentation Effects 0.000 title claims abstract description 55
- 238000000034 method Methods 0.000 title claims abstract description 47
- 238000001514 detection method Methods 0.000 title claims abstract description 32
- 238000003860 storage Methods 0.000 title claims abstract description 14
- 238000012549 training Methods 0.000 claims abstract description 20
- 230000006870 function Effects 0.000 claims description 28
- 230000008569 process Effects 0.000 claims description 17
- 238000004590 computer program Methods 0.000 claims description 12
- 238000009826 distribution Methods 0.000 claims description 10
- 238000007781 pre-processing Methods 0.000 claims description 6
- 238000013528 artificial neural network Methods 0.000 claims description 4
- 230000004913 activation Effects 0.000 claims description 3
- 238000006243 chemical reaction Methods 0.000 claims description 3
- 238000002372 labelling Methods 0.000 claims description 2
- 238000012545 processing Methods 0.000 abstract description 9
- 238000007689 inspection Methods 0.000 abstract description 7
- 238000010586 diagram Methods 0.000 description 7
- 238000012360 testing method Methods 0.000 description 6
- 238000003384 imaging method Methods 0.000 description 4
- 238000013527 convolutional neural network Methods 0.000 description 3
- 230000008859 change Effects 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000013507 mapping Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000000052 comparative effect Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000009792 diffusion process Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000003709 image segmentation Methods 0.000 description 1
- 238000012804 iterative process Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 238000013139 quantization Methods 0.000 description 1
- 230000005855 radiation Effects 0.000 description 1
- 238000013179 statistical model Methods 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
Images
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a detection and segmentation method for a target object in a terahertz image and a computer storage medium. A conditional generative adversarial network is used to organically integrate the two tasks of detection and segmentation: a segmentation mask is generated for samples containing a weapon, and an all-zero picture is generated for samples not containing a weapon. With a suitable loss function and an efficient network structure, the trained model can detect whether an input terahertz image contains the target object and segment the target object from the image, even with a small training set and a low image signal-to-noise ratio. The invention can train an accurate detection and segmentation model with few training samples and low picture signal-to-noise ratio; the model has high processing speed and good timeliness and can be applied to a real-time security inspection system, thereby reducing the labor intensity of manual security inspection and improving its efficiency.
Description
Technical Field
The present invention relates to a method for detecting and segmenting a target object and a computer storage medium, and more particularly to a method for detecting and segmenting a target object in a terahertz image and a computer storage medium.
Background
In security inspection, detecting objects hidden under clothing is a critical task. However, existing manual security inspection is quite inefficient and has a high miss rate. Terahertz imaging technology provides a non-contact solution. Terahertz waves are electromagnetic waves with frequencies between 0.1×10^12 Hz and 10×10^12 Hz; terahertz imaging obtains images of objects hidden under clothing by measuring the radiant heat of different parts of the human body, and such electromagnetic waves are harmless to the human body compared with X-rays.
In the case of limited labeled data, we need not only to determine whether an image contains a hidden object but also to accurately segment the detected object. However, due to physical factors inherent in terahertz imaging, the contrast and signal-to-noise ratio of the images tend to be low. The difference in pixel values between the hidden object and the human body is not obvious, the edges of the object almost merge with the body and cannot be distinguished, and the pictures contain considerable background noise. Under such poor conditions, standard object segmentation and saliency detection algorithms do not solve the problem well. Early work focused on statistical models. Shen et al. (Shen, X., Dietlein, C., Grossman, E.N., Popovic, Z., Meyer, F.G.: Detection and segmentation of concealed objects in terahertz images. IEEE Transactions on Image Processing (2008) 2465-2475) proposed a multi-level threshold segmentation algorithm that models the radiation temperature with a Gaussian mixture model; the segmentation task is accomplished by first removing noise with an anisotropic diffusion algorithm and then tracking the boundaries that vary as the threshold changes. Lee et al. (Lee, D., Yeom, S., Son, J., Kim, S.: Automatic image segmentation for concealed object detection using the expectation-maximization algorithm. Optics Express 18 (2010) 10659-10667) also use a Gaussian mixture model, obtaining a Bayesian boundary by applying the expectation-maximization algorithm twice to separate the human body and the objects it carries. Similarly, Yeom et al. (Lee, D.S., Son, J.Y., Jung, M.K., Jung, S.W., Lee, S.J., Yeom, S., Jang, Y.S.: Real-time outdoor concealed-object detection with passive millimeter wave imaging. Optics Express 19 (2011) 2530-2536) use a multi-scale segmentation algorithm based on the Gaussian mixture model and, for real-time performance, apply vector quantization before the expectation-maximization step. However, none of these conventional algorithms has detection capability, i.e. they perform segmentation on the premise that the current picture contains an object, and at the same time they do not give good segmentation results under such poor imaging conditions.
Current instance segmentation algorithms based on deep convolutional neural networks have great potential to address this difficulty. These methods can be broadly divided into two classes. One class is based on R-CNN region proposals: they use a bottom-up pipeline in which the segmentation results rely on the regions proposed by R-CNN, and these results are then classified. The other class is based on semantic segmentation: instance segmentation is achieved on top of the semantic segmentation result, i.e. the semantic segmentation result is obtained first and its pixels are then separated into different instances. A recent instance segmentation algorithm, Mask R-CNN (He, K., Gkioxari, G., Dollár, P., Girshick, R.B.: Mask R-CNN. International Conference on Computer Vision (2017) 2980-2988), shares convolutional features and performs detection, classification and segmentation simultaneously. However, these methods generally treat segmentation and detection as two independent units, which makes the model structure complex, the processing time long, and the model not robust. Moreover, they need a large amount of data to train a usable model, and are therefore clearly unsuitable for the target detection and segmentation task in terahertz scenes.
Disclosure of Invention
The invention aims to: solve the technical problem of providing a detection and segmentation method for a target object in a terahertz image and a computer storage medium, overcoming the shortcomings of the prior art, which either requires high contrast between the target object and the background or requires collecting and annotating a large number of high-quality samples. The method achieves accurate detection and segmentation even when the signal-to-noise ratio is very low, the target object and the background are very close in pixel intensity, and the number of training samples is small, while keeping a high processing speed, so that it can be applied to a real-time security inspection system.
The technical scheme is as follows: the invention discloses a detection and segmentation method for a target object in a terahertz image, comprising the following steps:
(1) Taking an original terahertz image x in the training set as a training sample, manually annotating the target object area in x to obtain a real segmentation mask y, preprocessing x and y, and converting all pixel values in the images to between -1 and 1;
(2) Inputting the original terahertz image x and the random noise z into a generator, and generating a false segmentation mask by the generator;
(3) The true segmentation mask and the corresponding original terahertz image form a true sample pair, and the false segmentation mask and the corresponding original terahertz image form a false sample pair which are input into a discriminator together for judgment;
(4) Calculating the loss functions of the generator and the discriminator respectively, optimizing the generator with a gradient descent algorithm so that its loss function is as small as possible and optimizing the discriminator with a gradient ascent algorithm so that its loss function is as large as possible until the model converges, then returning to step (1); after the training set has been traversed, proceeding to step (5);
(5) Returning to step (1) until the set number of iterations is reached;
(6) Storing the model trained in the above steps, and inputting a new picture to be detected into the generator to obtain the corresponding detection and segmentation result.
In order to preserve the target area while rejecting other useless details and to obtain a simple and efficient network structure, selected layers of the encoder and decoder are connected. The generator is a two-dimensional neural network G comprising 8 convolution layers and 8 transposed convolution layers. The filter size of all convolution and transposed convolution layers is 5×5 and the stride is 2×2; the depths of the 8 convolution layers are, in order, 64, 128, 256, 512, 512, 512, 512 and 1024, and the depths of the 8 transposed convolution layers are, in order, 512, 512, 512, 512, 256, 128, 64 and 1.
To make the output of the generator more diverse and to increase model robustness, the random noise z is introduced by dropout operations in the generator, applied to the first three transposed convolution layers, with each node retained with probability 50%.
Further, the discriminator is a two-dimensional neural network D comprising 3 convolution layers and 1 fully connected layer. The filter size of the 3 convolution layers is 5×5, the stride is 2×2, and the filter depths are 64, 128 and 256, respectively; the fully connected layer uses sigmoid as the activation function and outputs a one-dimensional scalar.
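As an illustration only, a minimal sketch of such a discriminator is given below. The framework (PyTorch), the two-channel input (terahertz image stacked with a mask to form a sample pair), the padding choice and the LeakyReLU activations between convolution layers are assumptions not specified by the invention; the layer counts, 5×5 filters, 2×2 strides, filter depths and the sigmoid scalar output follow the description above.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Network D: 3 conv layers (5x5, stride 2, depths 64/128/256) followed by
    one fully connected layer with a sigmoid, producing a one-dimensional scalar."""
    def __init__(self, in_channels: int = 2, image_size: int = 256):
        # in_channels = 2 assumes the image x and a mask are concatenated
        # along the channel axis to form a (real or fake) sample pair.
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=5, stride=2, padding=2),
            nn.LeakyReLU(0.2),                     # activation choice: assumption
            nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, kernel_size=5, stride=2, padding=2),
            nn.LeakyReLU(0.2),
        )
        feat_size = image_size // 8                # three stride-2 layers: 256 -> 32
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * feat_size * feat_size, 1),
            nn.Sigmoid(),                          # scalar score in (0, 1)
        )

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([x, mask], dim=1)         # sample pair (x, y) or (x, G(x, z))
        return self.classifier(self.features(pair))
```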
In order to make a good trade-off between recall and false alarm rate, the loss functions of the discriminator and the generator are, respectively:

L_D = E_(x,y)~p_data(x,y)[log D(x,y)] + E_x~p_data(x),z~p_z(z)[log(1 - D(x,G(x,z)))]

L_G = E_x~p_data(x),z~p_z(z)[log(1 - D(x,G(x,z)))] + λL_1 + βL_s

wherein L_1 = E_x,y,z[‖y - G(x,z)‖_1], L_s = ‖G(x,z)‖_1, and ‖·‖_1 denotes the L_1 norm; D(x,y) is the output of the discriminator, G(x,z) is the output of the generator, E_(x,y)~p_data is the expectation over the probability distribution of real sample pairs, E_x,z~p_z is the expectation over the probability distribution of false sample pairs, and λ and β are the weights of L_1 and L_s, respectively, in the overall loss function.
Further, the manual labeling in step (1) is to annotate the target area in each terahertz image, marking the pixel value of the target area as 255 and all other positions as 0; the preprocessing converts all pixel values to between -1 and 1 using the conversion formula p/127.5 - 1, where p is a specific pixel value.
The computer storage medium of the present invention has stored thereon a computer program which, when executed by a computer processor, implements the method of any of the above.
The beneficial effects are that: the invention can train out an accurate detection and segmentation model under the conditions of few training samples and low signal-to-noise ratio of pictures, and the model has high processing speed and high timeliness and can be applied to a real-time security inspection system, thereby reducing the labor intensity of manual security inspection and improving the efficiency.
Drawings
FIG. 1 is a schematic view of a selected terahertz image;
FIG. 2 is a schematic diagram of the model training and testing process of the present method;
FIG. 3 is a schematic comparison of the model structure used with other schemes;
FIG. 4 shows the detailed structure of the model of the present invention.
Detailed Description
A schematic view of a terahertz image is shown in FIG. 1, in which the target object is enclosed by a rectangular frame. The overall process of the method is shown in FIG. 2 and is divided into a training process and a testing process: the training process aims to establish a trained model, and the testing process inputs the picture to be tested into the model to verify the effect of the method. The generative adversarial network adopted by the invention is a generative model that can realize the transformation from one picture to another through an adversarial process. A standard generative adversarial network comprises a generator and a discriminator. The task of the generator is to generate, based on the input information, false samples realistic enough to fool the discriminator; the discriminator, on the other hand, receives both samples actually drawn from the dataset and false samples produced by the generator, and then decides which samples are real and which are fake. Eventually the generator can produce samples that the discriminator cannot distinguish, meaning the generator has captured the distribution of the training samples. A conditional generative adversarial network constrains the generator with additional information and can produce good results from the user's labeled data even with a small sample size, which is the design concept adopted by the present invention.
During training, a correct detection and segmentation result is provided for each sample: each picture corresponds to a picture of the same size with all pixel values equal to 0, and for samples containing a weapon the pixels in the area where the weapon appears are marked as 255; for samples containing no weapon, no modification is made. Specifically, the target area in each terahertz image is annotated manually using the free labeling tool Colabeler, marking the pixel value of the target area as 255 and all other positions as 0, which yields the real segmentation mask. The original terahertz image and the annotated image are scaled to 256 pixels in length and width, and image preprocessing then begins. The preprocessing sets all pixel values to between -1 and 1 using the conversion formula p/127.5 - 1 (p is a specific pixel value). Random noise z is input into the generator, and the scaled original image to be detected is also input into the generator as the condition x. The generator learns a mapping function from the input picture to the detection and segmentation result, and outputs the corresponding result G(x, z). The input to the discriminator consists of two parts. One is a real sample pair: the original terahertz image x and the true segmentation result, namely the true segmentation mask y, denoted (x, y); the other is a false sample pair: the original terahertz image x and the pseudo segmentation mask G(x, z) output by the generator, denoted (x, G(x, z)). The discriminator needs to give a higher score to the real sample pair and a lower score to the fake sample pair, whereas the generator needs to produce increasingly realistic samples G(x, z) to fool the discriminator so that (x, G(x, z)) receives a higher score; finally the discriminator cannot tell which of the two input pairs is fake, and the generator has learned the implicit mapping from terahertz images to detection and segmentation images. In the testing process, the picture to be detected is input into the trained generator, the model discards redundant information such as the human body and background noise, and the hidden object is segmented. Detection and segmentation are completed simultaneously.
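A minimal sketch of this labeling and preprocessing step is shown below, assuming Pillow and NumPy for image I/O (the invention does not specify libraries); the 256×256 scaling, the 255/0 mask values and the p/127.5 - 1 conversion follow the description above.

```python
import numpy as np
from PIL import Image

def preprocess(image_path, mask_path=None):
    """Scale a terahertz image (and its annotated mask, if any) to 256x256 and
    map all pixel values from [0, 255] to [-1, 1] via p / 127.5 - 1."""
    img = Image.open(image_path).convert("L").resize((256, 256))
    x = np.asarray(img, dtype=np.float32) / 127.5 - 1.0

    if mask_path is not None:
        # Sample containing a weapon: annotated mask, 255 inside the target area.
        mask = Image.open(mask_path).convert("L").resize((256, 256))
        y = np.asarray(mask, dtype=np.float32)
    else:
        # Sample without a weapon: an all-zero picture of the same size
        # (every pixel becomes -1 after the conversion below).
        y = np.zeros((256, 256), dtype=np.float32)
    y = y / 127.5 - 1.0                 # same conversion as the image
    return x[None, ...], y[None, ...]   # add a channel axis: (1, 256, 256)
```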
To achieve such an objective, the present invention first defines a suitable loss function.
The loss function of the arbiter is:
L s =‖G(x,z)‖ 1 (3)
the loss function of the generator is:
wherein ,‖·‖1 Is L 1 An expression of a norm; d (x, y) is the output of the arbiter, the value is between 0 and 1, G (x, z) is the output of the generator,for the expectation of the probability distribution of real sample pairs, +.>For the expectation of the probability distribution of the false sample pairs, λ and β are respectively +.> and Ls Weight to the overall loss function. x, y are all sampled at the true sample distribution, the arbiter minimizes the expectation by letting D (x, y) be as close to 1 as possible, while D (x, G (x, z)) is as close to 0 as possible, to maximize, i.e., equation (1), whereas the generator tries to get D (x, G (x, z)) as close to 1 as possible. Noise z is represented as Dropout operation in the generator (weights of part of nodes are randomly given 0 value in the training process), so that the output of the generator is more diversified, and more false samples are generated, so that the generator can obtain original data distribution more easily, and model robustness is improved. At the same time, x is taken as a condition to be respectively added into two networks to form a condition generation countermeasure network, and the output and the input of the constraint generator are paired one by one, whetherThe result of the detection and segmentation will be random.
Since equation (1) only considers the loss of the generative adversarial network, its goal is merely to generate a picture whose authenticity the discriminator cannot judge. However, a realistic-looking picture does not necessarily represent a good segmentation result. Equation (2) uses the Manhattan distance to measure the reconstruction error. Adding this constraint requires the generated picture to be not only sufficiently realistic but also sufficiently close to the correct, i.e. accurate, segmentation result. In addition, the Manhattan distance is used instead of the usual L_2 norm, because an L_2 constraint tends to cause blurring of the generated picture.
As in many object detection and segmentation tasks, the hidden weapon occupies only a small part of the whole picture, so equation (3) requires that the generated picture should also be sparse. The invention uses the L_1 norm as the sparsity constraint rather than the theoretically ideal L_0 norm or the commonly used L_2 norm, because the former requires solving an NP-hard problem, while the latter can only make each element small and cannot guarantee that most elements are 0.
The loss function of the generator is equation (4). λ and β are the weights of L_1 and L_s, respectively, in the overall loss function; in practice λ = 800 and β = 5. The condition x and the reconstruction error supervise the generation process and ensure that the generated picture corresponds one-to-one to the input terahertz picture; otherwise the output would be random and would not satisfy the target detection and segmentation task. Minimizing the Manhattan distance maintains accurate segmentation results, and, as prior knowledge, the sparsity constraint reduces the false alarm rate.
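A sketch of these loss terms is given below, assuming PyTorch tensors; whether the L_1 terms are averaged or summed over pixels is not specified in the description and averaging is an assumption here. λ = 800 and β = 5 as stated above.

```python
import torch

EPS = 1e-8                     # numerical safety inside the logarithms
LAMBDA, BETA = 800.0, 5.0      # weights of the L_1 and L_s terms

def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Negative of equation (1): minimizing this value is equivalent to the
    gradient-ascent maximization of log D(x,y) + log(1 - D(x, G(x,z)))."""
    return -(torch.log(d_real + EPS).mean() + torch.log(1.0 - d_fake + EPS).mean())

def generator_loss(d_fake: torch.Tensor, fake_mask: torch.Tensor,
                   real_mask: torch.Tensor) -> torch.Tensor:
    """Equation (4): adversarial term plus weighted reconstruction (eq. 2)
    and sparsity (eq. 3) terms."""
    adversarial = torch.log(1.0 - d_fake + EPS).mean()
    l1_recon = (real_mask - fake_mask).abs().mean()   # Manhattan distance, eq. (2)
    l_sparse = fake_mask.abs().mean()                 # L_1 sparsity, eq. (3)
    return adversarial + LAMBDA * l1_recon + BETA * l_sparse
```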
Training of the model uses a gradient ascent algorithm to optimize the discriminator and a gradient descent algorithm to optimize the generator. When training the generator, the parameters of the discriminator are kept fixed and equation (4) is minimized; when training the discriminator, the current parameters of the generator are kept fixed and equation (1) is maximized. The discriminator and the generator are trained alternately in an iterative process until the model converges. In the specific implementation, the learning rate is set to 0.00012 and the minibatch size to 8; one traversal of the whole training set counts as one iteration, and 200 iterations are performed in total to obtain the final model.
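A sketch of this alternating training procedure, reusing the loss helpers sketched above, could look as follows. Adam is assumed as the concrete optimizer, which the description does not specify; the learning rate 0.00012, minibatch size 8 and 200 iterations follow the text.

```python
import torch

def train(generator, discriminator, loader, epochs: int = 200,
          lr: float = 0.00012, device: str = "cuda"):
    """Alternately update D (maximizing eq. 1 via its negated loss) and G (minimizing eq. 4)."""
    generator.to(device)
    discriminator.to(device)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr)
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr)

    for epoch in range(epochs):              # one traversal of the training set = one iteration
        for x, y in loader:                  # minibatch size 8
            x, y = x.to(device), y.to(device)

            # --- discriminator step: generator parameters are kept fixed ---
            with torch.no_grad():
                fake = generator(x)          # noise z comes from the dropout inside G
            d_loss = discriminator_loss(discriminator(x, y), discriminator(x, fake))
            opt_d.zero_grad()
            d_loss.backward()
            opt_d.step()

            # --- generator step: discriminator parameters are kept fixed ---
            fake = generator(x)
            g_loss = generator_loss(discriminator(x, fake), fake, y)
            opt_g.zero_grad()
            g_loss.backward()
            opt_g.step()
```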
Compared with the traditional "Encoder-Decoder" structure, the invention adds connections between some decoding layers and encoding layers, as shown in FIG. 3; the specific generator structure is shown in FIG. 4. The low-level features thus obtained improve the detection capability of the model for small targets. Compared with "U-Net" (Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer Assisted Intervention (2015) 234-241), the new model removes the connections very close to the output layer. As shown in FIG. 3, the outputs of the first few layers of the U-Net encoder contain a lot of edge and background information; connecting these layers to decoding layers very close to the output layer would result in a higher false alarm rate, so the present invention removes these connections and finds a balance between recall and false alarm rate. With this loss function and the corresponding model structure, the optimization result is solved iteratively by a gradient descent algorithm to obtain the final model parameters. The discriminator comprises 3 convolution layers and a fully connected layer; the filter size of the 3 convolution layers is 5×5, the stride is 2×2, and the filter depths are 64, 128 and 256, respectively. The fully connected layer uses sigmoid as the activation function and outputs a one-dimensional scalar. As shown in FIG. 4, the generator comprises 8 convolution layers and 8 transposed convolution layers. The filter size of all convolution and transposed convolution layers is 5×5 and the stride is 2×2; the depths of the 8 convolution layers and the 8 transposed convolution layers are, in order: 64, 128, 256, 512, 512, 512, 512, 1024 and 512, 512, 512, 512, 256, 128, 64, 1. The noise z is generated by dropout operations applied to the first three transposed convolution layers, with each node retained with probability 50%.
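A sketch of such a generator follows. The layer counts, 5×5 filters, 2×2 strides, channel depths, dropout on the first three transposed convolution layers and the removal of the skip connections closest to the output follow the description above; the exact set of retained skip connections, the single-channel input, the batch normalization, the LeakyReLU/ReLU activations and the tanh output (matching the [-1, 1] pixel range) are assumptions.

```python
import torch
import torch.nn as nn

def down(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 5, stride=2, padding=2),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

def up(cin, cout, dropout=False):
    layers = [nn.ConvTranspose2d(cin, cout, 5, stride=2, padding=2, output_padding=1),
              nn.BatchNorm2d(cout), nn.ReLU()]
    if dropout:
        layers.append(nn.Dropout(0.5))        # realizes the noise z, nodes kept with p = 0.5
    return nn.Sequential(*layers)

class Generator(nn.Module):
    """Encoder-decoder G with selective skip connections (none near the output layer)."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        # 8 convolution layers, depths 64, 128, 256, 512, 512, 512, 512, 1024
        self.e1, self.e2, self.e3, self.e4 = down(in_channels, 64), down(64, 128), down(128, 256), down(256, 512)
        self.e5, self.e6, self.e7, self.e8 = down(512, 512), down(512, 512), down(512, 512), down(512, 1024)
        # 8 transposed convolution layers, depths 512, 512, 512, 512, 256, 128, 64, 1
        self.d1 = up(1024, 512, dropout=True)   # dropout on the first three transposed layers
        self.d2 = up(1024, 512, dropout=True)   # input doubled by the concatenated skip feature
        self.d3 = up(1024, 512, dropout=True)
        self.d4 = up(1024, 512)
        self.d5 = up(1024, 256)
        self.d6 = up(512, 128)                  # skips to the layers near the output are removed
        self.d7 = up(128, 64)
        self.d8 = nn.Sequential(
            nn.ConvTranspose2d(64, 1, 5, stride=2, padding=2, output_padding=1),
            nn.Tanh())                          # output in [-1, 1], matching the preprocessing

    def forward(self, x):
        e1 = self.e1(x); e2 = self.e2(e1); e3 = self.e3(e2); e4 = self.e4(e3)
        e5 = self.e5(e4); e6 = self.e6(e5); e7 = self.e7(e6); e8 = self.e8(e7)
        d = torch.cat([self.d1(e8), e7], dim=1)   # which skips are kept is an assumption
        d = torch.cat([self.d2(d), e6], dim=1)
        d = torch.cat([self.d3(d), e5], dim=1)
        d = torch.cat([self.d4(d), e4], dim=1)
        d = torch.cat([self.d5(d), e3], dim=1)
        d = self.d6(d)
        d = self.d7(d)
        return self.d8(d)
```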
Using the model trained in the above steps, a new picture to be detected is input into the generator, and the corresponding detection and segmentation result is obtained. In actual tests, the model of the invention controls the false alarm rate to 11.35% while raising the recall rate to 88.46%, which is significantly better than existing detection and segmentation algorithms.
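At test time this amounts to a single forward pass through the trained generator; a sketch is given below. The zero threshold in the [-1, 1] output space, the "any foreground pixel" detection rule and the use of eval mode (i.e. deterministic output without dropout noise at test time) are illustrative assumptions not fixed by the description.

```python
import torch

def detect_and_segment(generator, x, threshold: float = 0.0):
    """Run one preprocessed terahertz image of shape (1, 1, 256, 256) through the
    trained generator; return a binary segmentation mask and a detection flag."""
    generator.eval()                    # assumption: dropout noise disabled at test time
    with torch.no_grad():
        out = generator(x)              # values in [-1, 1]
    mask = out > threshold              # threshold choice is an assumption
    detected = bool(mask.any())         # any segmented pixel -> target object present
    pixels = ((out + 1.0) * 127.5).clamp(0, 255)   # map back to [0, 255] for display
    return mask, detected, pixels
```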
Embodiments of the present invention also provide a computer storage medium having a computer program stored thereon. When the computer program is executed by a processor, the aforementioned method can be implemented. The computer storage medium is, for example, a computer-readable storage medium.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Claims (5)
1. A detection and segmentation method for a target object in a terahertz image, characterized by comprising the following steps:
(1) Taking an original terahertz image x in the training set as a training sample, manually annotating the target object area in x to obtain a real segmentation mask y, preprocessing x and y, and converting all pixel values in the images to between -1 and 1;
(2) Inputting the original terahertz image x into a generator, generating random noise z by dropout operations in the generator, and generating a false segmentation mask with the generator;
(3) The true segmentation mask and the corresponding original terahertz image form a true sample pair, and the false segmentation mask and the corresponding original terahertz image form a false sample pair which are input into a discriminator together for judgment;
(4) Calculating the loss functions of the generator and the discriminator respectively, optimizing the generator with a gradient descent algorithm so that its loss function is as small as possible and optimizing the discriminator with a gradient ascent algorithm so that its loss function is as large as possible until the model converges, then returning to step (1); after the training set has been traversed, proceeding to step (5);
(5) Returning to step (1) until the set number of iterations is reached;
(6) Storing the model trained in the above steps, and inputting a new picture to be detected into the generator to obtain the corresponding detection and segmentation result,
the generator is a two-dimensional neural network G, comprising 8 convolution layers and 8 transposed convolution layers, wherein the filter sizes and the moving steps of all convolution layers and the transposed convolution layers are 5*5 and the depths of 2 x 2,8 convolution layers are 64, 128, respectively the depths of 256, 512,1024,8 transposed convolutional layers are, in order, 512 512,256,128,64,1,
the random noise z is generated by a dropout operation in the pair generator and is respectively performed on the first three layers of the transposed convolution layer, and the probability that each node is reserved is 50%.
2. The detection and segmentation method for the target object in the terahertz image according to claim 1, wherein: the discriminator is a two-dimensional neural network D comprising 3 convolution layers and 1 fully connected layer; the filter size of the 3 convolution layers is 5×5, the stride is 2×2, and the filter depths are 64, 128 and 256, respectively; the fully connected layer uses sigmoid as the activation function, and the output is a one-dimensional scalar.
3. The method for detecting and segmenting the target object in the terahertz image according to claim 1, wherein the loss functions of the discriminator and the generator are, respectively:

L_D = E_(x,y)~p_data(x,y)[log D(x,y)] + E_x~p_data(x),z~p_z(z)[log(1 - D(x,G(x,z)))]

L_G = E_x~p_data(x),z~p_z(z)[log(1 - D(x,G(x,z)))] + λL_1 + βL_s

wherein L_1 = E_x,y,z[||y - G(x,z)||_1], L_s = ||G(x,z)||_1, and ||·||_1 denotes the L_1 norm; D(x,y) is the output of the discriminator, G(x,z) is the output of the generator, E_(x,y)~p_data is the expectation over the probability distribution of real sample pairs, E_x,z~p_z is the expectation over the probability distribution of false sample pairs, and λ and β are the weights of L_1 and L_s, respectively, in the overall loss function.
4. The detection and segmentation method for the target object in the terahertz image according to claim 1, wherein: the manual labeling in step (1) is to annotate the target area in each terahertz image, marking the pixel value of the target area as 255 and all other positions as 0, and the preprocessing converts all pixel values to between -1 and 1 using the conversion formula p/127.5 - 1, where p is a specific pixel value.
5. A computer storage medium having a computer program stored thereon, characterized by: the computer program, when executed by a computer processor, implements the method of any of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910048648.9A CN109886970B (en) | 2019-01-18 | 2019-01-18 | Detection segmentation method for target object in terahertz image and computer storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910048648.9A CN109886970B (en) | 2019-01-18 | 2019-01-18 | Detection segmentation method for target object in terahertz image and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109886970A CN109886970A (en) | 2019-06-14 |
CN109886970B true CN109886970B (en) | 2023-06-09 |
Family
ID=66926216
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910048648.9A Active CN109886970B (en) | 2019-01-18 | 2019-01-18 | Detection segmentation method for target object in terahertz image and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109886970B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110428476A (en) * | 2019-07-05 | 2019-11-08 | 广东省人民医院(广东省医学科学院) | A kind of image conversion method and device based on multi-cycle production confrontation network |
CN110619632B (en) * | 2019-09-18 | 2022-01-11 | 华南农业大学 | Mango example confrontation segmentation method based on Mask R-CNN |
CN111275692B (en) * | 2020-01-26 | 2022-09-13 | 重庆邮电大学 | Infrared small target detection method based on generation countermeasure network |
CN111462060A (en) * | 2020-03-24 | 2020-07-28 | 湖南大学 | Method and device for detecting standard section image in fetal ultrasonic image |
CN111583276B (en) * | 2020-05-06 | 2022-04-19 | 西安电子科技大学 | CGAN-based space target ISAR image component segmentation method |
CN113838076A (en) * | 2020-06-24 | 2021-12-24 | 深圳市中兴微电子技术有限公司 | Method and device for labeling object contour in target image and storage medium |
CN112001846A (en) * | 2020-08-21 | 2020-11-27 | 深圳职业技术学院 | Multi-target X-ray security inspection image synthesis method with higher resolution |
CN112287938B (en) * | 2020-10-29 | 2022-12-06 | 苏州浪潮智能科技有限公司 | Text segmentation method, system, device and medium |
CN112508239A (en) * | 2020-11-22 | 2021-03-16 | 国网河南省电力公司电力科学研究院 | Energy storage output prediction method based on VAE-CGAN |
CN114820429B (en) * | 2022-02-18 | 2023-06-16 | 湖南珞佳智能科技有限公司 | Chip glue climbing height identification method and system based on generation type countermeasure network |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105758865A (en) * | 2016-04-26 | 2016-07-13 | 河南工业大学 | Method for detecting foreign matter in grain packing material based on terahertz waves and detecting system |
CN107092040A (en) * | 2017-06-01 | 2017-08-25 | 上海理工大学 | Terahertz imaging rays safety detection apparatus and video procession method |
CN108764328A (en) * | 2018-05-24 | 2018-11-06 | 广东工业大学 | The recognition methods of Terahertz image dangerous material, device, equipment and readable storage medium storing program for executing |
CN108830225A (en) * | 2018-06-13 | 2018-11-16 | 广东工业大学 | The detection method of target object, device, equipment and medium in terahertz image |
CN109001833A (en) * | 2018-06-22 | 2018-12-14 | 天和防务技术(北京)有限公司 | A kind of Terahertz hazardous material detection method based on deep learning |
-
2019
- 2019-01-18 CN CN201910048648.9A patent/CN109886970B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105758865A (en) * | 2016-04-26 | 2016-07-13 | 河南工业大学 | Method for detecting foreign matter in grain packing material based on terahertz waves and detecting system |
CN107092040A (en) * | 2017-06-01 | 2017-08-25 | 上海理工大学 | Terahertz imaging rays safety detection apparatus and video procession method |
CN108764328A (en) * | 2018-05-24 | 2018-11-06 | 广东工业大学 | The recognition methods of Terahertz image dangerous material, device, equipment and readable storage medium storing program for executing |
CN108830225A (en) * | 2018-06-13 | 2018-11-16 | 广东工业大学 | The detection method of target object, device, equipment and medium in terahertz image |
CN109001833A (en) * | 2018-06-22 | 2018-12-14 | 天和防务技术(北京)有限公司 | A kind of Terahertz hazardous material detection method based on deep learning |
Non-Patent Citations (1)
Title |
---|
Segmentation of Concealed Objects in Active Terahertz Images; Zhan Ou et al.; 2018-10-18; pp. 1-5 *
Also Published As
Publication number | Publication date |
---|---|
CN109886970A (en) | 2019-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109886970B (en) | Detection segmentation method for target object in terahertz image and computer storage medium | |
Liu et al. | Attribute-aware face aging with wavelet-based generative adversarial networks | |
Erler et al. | Points2surf learning implicit surfaces from point clouds | |
Nandhini Abirami et al. | Deep CNN and Deep GAN in Computational Visual Perception‐Driven Image Analysis | |
Shakya et al. | Deep learning algorithm for satellite imaging based cyclone detection | |
Wei et al. | Tensor voting guided mesh denoising | |
Tu et al. | Hyperspectral image classification via weighted joint nearest neighbor and sparse representation | |
CN112800964A (en) | Remote sensing image target detection method and system based on multi-module fusion | |
Guo et al. | Deep line drawing vectorization via line subdivision and topology reconstruction | |
CN113095333B (en) | Unsupervised feature point detection method and unsupervised feature point detection device | |
CN112598053A (en) | Active significance target detection method based on semi-supervised learning | |
Zhao et al. | Self-generated defocus blur detection via dual adversarial discriminators | |
CN113361646A (en) | Generalized zero sample image identification method and model based on semantic information retention | |
Chen et al. | Image splicing localization using residual image and residual-based fully convolutional network | |
CN115661459A (en) | 2D mean teacher model using difference information | |
CN116258877A (en) | Land utilization scene similarity change detection method, device, medium and equipment | |
Luo et al. | New deep learning method for efficient extraction of small water from remote sensing images | |
Yang et al. | Research on digital camouflage pattern generation algorithm based on adversarial autoencoder network | |
Yu | Accurate recognition method of human body movement blurred image gait features using graph neural network | |
Bode et al. | Bounded: Neural boundary and edge detection in 3d point clouds via local neighborhood statistics | |
Han et al. | Progressive Feature Interleaved Fusion Network for Remote-Sensing Image Salient Object Detection | |
Pakulich et al. | Age recognition from facial images using convolutional neural networks | |
Sulaiman et al. | Building Precision: Efficient Encoder-Decoder Networks for Remote Sensing based on Aerial RGB and LiDAR data | |
CN108154107B (en) | Method for determining scene category to which remote sensing image belongs | |
Vig et al. | Entropy-based multilevel 2D histogram image segmentation using DEWO optimization algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||