CN109886970A - Detection and segmentation method for target object in terahertz image and computer storage medium - Google Patents
Detection and segmentation method for target object in terahertz image and computer storage medium
- Publication number
- CN109886970A (application CN201910048648.9A)
- Authority
- CN
- China
- Prior art keywords
- generator
- target object
- terahertz image
- discriminator
- terahertz
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses a method for detecting and segmenting a target object in a terahertz image, and a computer storage medium. A conditional generative adversarial network is used to fuse the detection and segmentation tasks into one: for a sample containing a weapon the network generates its segmentation mask, while for a sample containing no weapon it generates an image in which all pixel values are zero. With a suitable loss function and an efficient network structure, the trained model can determine whether an input terahertz image contains a target object and segment the target object from the image, even when the training set is small and the image signal-to-noise ratio is low. The invention can train an accurate detection and segmentation model with very few training samples and very low image signal-to-noise ratio; the model has a fast processing speed and high timeliness and can be applied in a real-time security inspection system, reducing the labor intensity of manual security inspection and improving efficiency.
Description
Technical Field
The present invention relates to a method for detecting and segmenting a target object and a computer storage medium, and more particularly, to a method for detecting and segmenting a target object in a terahertz image and a computer storage medium.
Background
In security inspection, detecting objects hidden under clothing is a critical task. However, manual security inspection is inefficient and has a high miss rate. Terahertz imaging technology provides a non-contact solution. Terahertz waves are electromagnetic waves with frequencies between 0.1 × 10^12 and 10 × 10^12 Hz; they yield images of objects hidden under clothing by measuring the radiant heat of different parts of the human body. Compared with X-rays, such electromagnetic waves are harmless to the human body.
In the case of limited labelled data, we need to determine whether an image contains a hidden object and to accurately segment the detected target. However, due to physical factors inherent in terahertz imaging, the contrast and signal-to-noise ratio of the images tend to be low. The difference in pixel values between the hidden object and the human body is not obvious, and the edges of the object almost blend into the body and cannot be distinguished. Meanwhile, the pictures contain considerable background noise. Under such poor conditions, standard object segmentation and saliency detection algorithms do not solve the problem well. Early work focused on statistical models. Shen (Shen, X., Dietlein, C., Grossman, E.N., Popovic, Z., Meyer, F.G.: Detection and segmentation of concealed objects in terahertz images. IEEE Transactions on Image Processing 17 (2008) 2465-2475) proposed a multi-level thresholding segmentation algorithm that models radiometric temperature with a Gaussian mixture model: noise is first removed with an anisotropic diffusion algorithm, and the boundary that changes with the threshold is then tracked to complete the segmentation. Lee (Lee, D., Yeom, S., Son, J., Kim, S.: Automatic image segmentation for concealed object detection using the expectation-maximization algorithm. Optics Express 18 (2010) 10659-10667) also used a Gaussian mixture model, applying the expectation-maximization algorithm twice to solve for a Bayesian boundary and segment the human body and the objects it conceals. Similarly, Yeom (Lee, D.S., Son, J.Y., Jung, M.K., Jung, S.W., Lee, S.J., Yeom, S., Jang, Y.S.: Real-time outdoor concealed-object detection with passive millimeter wave imaging. Optics Express 19 (2011) 2530-) followed a related approach. However, these conventional algorithms have no detection capability, i.e. they perform the segmentation operation on the premise that the current picture contains an object, and they do not give good segmentation results under such poor imaging conditions.
Current instance segmentation algorithms based on deep convolutional neural networks have good potential to solve this problem. Such methods can roughly be divided into two classes. One class comprises proposal-based methods built on R-CNN: they follow a bottom-up procedure in which the segmentation results depend on the region proposals produced by the R-CNN, and these results are then classified. The other class is based on semantic segmentation: instance segmentation relies on the semantic segmentation result, i.e. a semantic segmentation is obtained first and the pixels are then assigned to different instances. A recent instance segmentation algorithm, Mask R-CNN (He, K., Gkioxari, G., Dollar, P., Girshick, R.B.: Mask R-CNN. International Conference on Computer Vision (2017) 2980-2988), shares convolutional features and can complete detection, classification and segmentation simultaneously. However, these methods generally treat segmentation and detection as two independent units, which makes the model structure complicated, the processing time long, and the results not robust. They also need a large amount of data to train a usable model, so they are clearly unsuitable for the target detection and segmentation task in the terahertz scenario.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to provide a method for detecting and segmenting a target object in a terahertz image and a computer storage medium, overcoming defects of the prior art such as requiring a large contrast between the target object and the background or requiring a large number of high-quality samples to be acquired and labeled. The method obtains accurate detection and segmentation results under conditions of low signal-to-noise ratio, extremely similar pixel intensities of the target object and the background, and few training samples, while having a high processing speed, so that it can be applied to a real-time security inspection system.
The technical scheme is as follows: the method for detecting and segmenting the target object in the terahertz image comprises the following steps:
(1) taking an original terahertz image x in a training set as a training sample, marking the target object region in x manually to obtain a real segmentation mask y, preprocessing x and y at the same time, and converting all pixel values in the images to between -1 and 1;
(2) inputting the original terahertz image x and the random noise z into a generator, and generating a false segmentation mask by the generator;
(3) forming a real sample pair from the real segmentation mask and the corresponding original terahertz image, forming a false sample pair from the false segmentation mask and the corresponding original terahertz image, and inputting the real sample pair and the false sample pair into a discriminator for evaluation;
(4) respectively calculating loss functions of a generator and a discriminator, optimizing the generator by using a gradient descent algorithm to make the loss function of the generator as small as possible, optimizing the discriminator by using a gradient ascent algorithm to make the loss function of the discriminator as large as possible until the model converges, returning to the step (1), and entering the step (5) after traversing the training set;
(5) returning to the step (1) until the set iteration number is reached;
(6) storing the model trained in the above steps, and inputting a new picture to be detected into the generator to obtain the detection and segmentation result corresponding to the new picture to be detected.
In order to keep the target area while discarding other useless details and to obtain a simple and efficient network structure, selected layers of the encoder and decoder can be connected. The generator is a two-dimensional neural network G comprising 8 convolutional layers and 8 transposed convolutional layers, wherein the filter size and stride of all convolutional and transposed convolutional layers are 5 × 5 and 2 × 2, the depths of the 8 convolutional layers are 64, 128, 256, 512, 512, 512, 512, 1024 in sequence, and the depths of the 8 transposed convolutional layers are 512, 512, 512, 512, 256, 128, 64, 1 in sequence.
In order to diversify the output of the generator and increase the robustness of the model, the random noise z is realised by dropout operations in the generator, applied to the first three transposed convolutional layers, with each node retained with a probability of 50%.
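As an illustration only, a minimal PyTorch sketch of such a generator is given below. The framework, the activation functions, the padding, and the exact placement of the encoder-decoder skip connections (only decoder layers away from the output receive one here) are assumptions of the sketch, not details fixed by this disclosure; the layer depths, 5 × 5 filters, stride 2 and dropout in the first three transposed layers follow the text above.

```python
import torch
import torch.nn as nn

ENC_DEPTHS = [64, 128, 256, 512, 512, 512, 512, 1024]   # 8 convolutional layers
DEC_DEPTHS = [512, 512, 512, 512, 256, 128, 64, 1]      # 8 transposed convolutional layers
SKIP_INTO = {1, 2, 3}   # decoder layers receiving an encoder skip (assumed; layers near the output get none)

class Generator(nn.Module):
    """Encoder-decoder generator G(x, z); the noise z is realised as dropout."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.enc = nn.ModuleList()
        c = in_ch
        for depth in ENC_DEPTHS:                         # 5x5 filters, stride 2
            self.enc.append(nn.Sequential(
                nn.Conv2d(c, depth, kernel_size=5, stride=2, padding=2),
                nn.LeakyReLU(0.2)))
            c = depth
        self.dec = nn.ModuleList()
        for i, depth in enumerate(DEC_DEPTHS):
            if i in SKIP_INTO:                           # concatenated encoder features widen the input
                c += ENC_DEPTHS[-(i + 1)]
            layers = [nn.ConvTranspose2d(c, depth, kernel_size=5, stride=2,
                                         padding=2, output_padding=1)]
            if i < 3:                                    # noise z: dropout in the first three layers
                layers.append(nn.Dropout(p=0.5))
            layers.append(nn.Tanh() if i == len(DEC_DEPTHS) - 1 else nn.ReLU())
            self.dec.append(nn.Sequential(*layers))
            c = depth

    def forward(self, x):
        feats, h = [], x
        for layer in self.enc:                           # encoder pass, keep features for skips
            h = layer(h)
            feats.append(h)
        for i, layer in enumerate(self.dec):             # decoder pass
            if i in SKIP_INTO:
                h = torch.cat([h, feats[-(i + 1)]], dim=1)
            h = layer(h)
        return h                                         # same spatial size as the input, values in [-1, 1]

# e.g. Generator()(torch.randn(1, 1, 256, 256)).shape -> torch.Size([1, 1, 256, 256])
```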
Furthermore, the discriminator is a two-dimensional neural network D comprising 3 convolutional layers and 1 fully connected layer; the filter size of each of the 3 convolutional layers is 5 × 5, the stride is 2 × 2, and the filter depths are 64, 128 and 256 respectively. The fully connected layer uses sigmoid as its activation function and outputs a one-dimensional scalar.
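A matching PyTorch sketch of the discriminator follows; the channel-wise concatenation of the condition image with the mask, the LeakyReLU activations, the padding and the 256 × 256 input size are assumptions, while the three 5 × 5 stride-2 convolutions with depths 64/128/256 and the sigmoid fully connected output follow the text.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Scores a (terahertz image, segmentation mask) pair with a scalar in (0, 1)."""
    def __init__(self, in_ch=2, img_size=256):
        super().__init__()
        layers, c = [], in_ch
        for depth in (64, 128, 256):                     # 3 conv layers, 5x5 filters, stride 2
            layers += [nn.Conv2d(c, depth, kernel_size=5, stride=2, padding=2),
                       nn.LeakyReLU(0.2)]
            c = depth
        self.features = nn.Sequential(*layers)
        flat = 256 * (img_size // 8) ** 2                # spatial size halved three times
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(flat, 1),    # fully connected layer
                                  nn.Sigmoid())          # one-dimensional scalar output

    def forward(self, x, mask):
        # the condition x and the (real or generated) mask are concatenated along channels
        return self.head(self.features(torch.cat([x, mask], dim=1)))
```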
In order to make a good trade-off between the recall rate and the false alarm rate, the loss function of the discriminator is:

L_D = E_(x,y)[log D(x, y)] + E_(x,z)[log(1 - D(x, G(x, z)))]

and the loss function of the generator is:

L_G = E_(x,z)[log(1 - D(x, G(x, z)))] + λ‖y - G(x, z)‖1 + β·L_s

where L_s = ‖G(x, z)‖1 and ‖·‖1 is the L1 norm; D(x, y) is the output of the discriminator and G(x, z) is the output of the generator; E_(x,y) denotes the expectation over the probability distribution of the real sample pairs and E_(x,z) the expectation over the probability distribution of the false sample pairs; λ and β are the weights of the reconstruction term and of L_s in the overall loss function.
Further, the manual labelling in step (1) marks the target region in each terahertz image, setting the pixel values of the target region to 255 and the remaining positions to 0; the preprocessing converts all pixel values to between -1 and 1 with the conversion formula p / 127.5 - 1, where p is a specific pixel value.
The computer storage medium of the present invention, on which a computer program is stored, the computer program implementing any of the methods described above when executed by a computer processor.
Advantageous effects: the invention can train an accurate detection and segmentation model with few training samples and a low image signal-to-noise ratio, and the model has a high processing speed and high timeliness, so it can be applied in a real-time security inspection system, thereby reducing the labor intensity of manual security inspection and improving efficiency.
Drawings
FIG. 1 is a schematic diagram of a selected terahertz image;
FIG. 2 is a schematic diagram of the method model training and testing process;
FIG. 3 is a schematic diagram comparing the model structure used with other schemes;
FIG. 4 shows the detailed structure of the model of the present invention.
Detailed Description
A schematic diagram of a terahertz image is shown in fig. 1, in which the target object is surrounded by a rectangular frame. The specific process of the method is shown in fig. 2 and is divided into a training process and a testing process: the model is first trained, and in the testing process the picture to be tested is input into the trained model to evaluate the method. The generative adversarial network adopted by the invention is a generative model that can learn to convert one picture into another through an adversarial process. A standard generative adversarial network consists of a generator and a discriminator. The task of the generator is to generate, from the input information, false samples realistic enough to fool the discriminator; the discriminator, on the other hand, receives both real samples from the data set and false samples produced by the generator and then determines which samples are real and which are false. Eventually the generator can produce samples that the discriminator cannot distinguish, i.e. the generator has captured the distribution of the training samples. Conditional generation constrains the generator with extra information, so that good results can be produced from the user's labelled data even with a small sample size; this is the design idea adopted by the invention.
During training, a correct detection and segmentation result is provided for each sample: each picture corresponds to a picture of the same size whose pixels are all 0, and for a sample containing a weapon, the pixels in the region where the weapon appears are marked as 255; samples that do not contain any weapon are left unmodified. Specifically, the target region in each terahertz image is marked manually using the free labelling tool Colabeler, the pixel values of the target region are set to 255 and the remaining positions to 0, giving the real segmentation mask. The original terahertz image and the annotation picture are scaled to 256 pixels in length and width, and then image preprocessing is performed. The preprocessing maps all pixel values to between -1 and 1 with the conversion formula p / 127.5 - 1, where p is a specific pixel value. The random noise z is input into the generator, and the scaled original image to be detected is also input into the generator as the condition x. The generator learns the mapping from the input picture to the detection and segmentation result and outputs the corresponding result G(x, z). The input of the discriminator contains two parts. One is the real sample pair: an original terahertz image x and its real segmentation and detection result, i.e. the real segmentation mask y, denoted (x, y). The other is the false sample pair: the original terahertz image x and the output of the generator, the false segmentation mask G(x, z), denoted (x, G(x, z)). The discriminator needs to give real sample pairs a high score and false sample pairs a low score, while the generator needs to produce increasingly realistic samples G(x, z) to fool the discriminator so that (x, G(x, z)) receives a high score; eventually the discriminator cannot tell which of the two input pairs is false, and the generator has thus learned an implicit mapping from the terahertz image to the detection and segmentation image. In the testing process, the picture to be detected is input into the trained generator, the model discards redundant information such as the human body and background noise, and finally the hidden object is segmented. Detection and segmentation are completed simultaneously.
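A small sketch of this data-preparation step is shown below; the use of OpenCV/NumPy and the file-path interface are illustrative assumptions, while the 256-pixel scaling and the p / 127.5 - 1 mapping come from the text.

```python
import cv2
import numpy as np

def load_pair(image_path, mask_path, size=256):
    """Load a terahertz image and its annotated mask, scale to 256x256, map to [-1, 1]."""
    x = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)   # raw terahertz image
    y = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)    # 255 inside the target region, 0 elsewhere
    x = cv2.resize(x, (size, size)).astype(np.float32)
    y = cv2.resize(y, (size, size)).astype(np.float32)
    return x / 127.5 - 1.0, y / 127.5 - 1.0            # p / 127.5 - 1 maps [0, 255] to [-1, 1]
```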
To achieve such an objective, the present invention first defines a suitable loss function.
The loss function of the discriminator is:

L_D = E_(x,y)[log D(x, y)] + E_(x,z)[log(1 - D(x, G(x, z)))]    (1)

The reconstruction error is measured by the Manhattan distance between the generated mask and the real segmentation mask:

L_1 = ‖y - G(x, z)‖1    (2)

The sparsity term is:

L_s = ‖G(x, z)‖1    (3)

The loss function of the generator is:

L_G = E_(x,z)[log(1 - D(x, G(x, z)))] + λ·L_1 + β·L_s    (4)

where ‖·‖1 is the L1 norm; D(x, y) is the output of the discriminator and takes a value between 0 and 1, and G(x, z) is the output of the generator; E_(x,y) denotes the expectation over the probability distribution of the real sample pairs and E_(x,z) the expectation over the probability distribution of the false sample pairs; λ and β are the weights of L_1 and L_s in the overall loss function. Both x and y are sampled from the real sample distribution. The discriminator maximizes the expectation in equation (1) by making D(x, y) as close to 1 as possible while making D(x, G(x, z)) as close to 0 as possible; conversely, the generator tries to make D(x, G(x, z)) as close to 1 as possible so as to minimize the expectation. The noise z is realised as a dropout operation in the generator (the weights of some nodes are randomly set to 0 during training), which makes the output of the generator more diversified and yields more varied false samples, so that the generator captures the original data distribution more easily and the robustness of the model is improved. At the same time, x is fed as a condition into both networks, forming a conditional generative adversarial network that constrains the output of the generator to correspond one-to-one with its input; otherwise the detection and segmentation results would be random.
Equation (1) considers only the adversarial loss of the generative adversarial network, whose objective is to generate a picture that the discriminator cannot distinguish from a real one. However, a realistic-looking picture does not necessarily represent a good segmentation result. Equation (2) uses the Manhattan distance to measure the reconstruction error. Adding this constraint requires that the generated picture not only be sufficiently realistic but also be close enough to the correct segmentation result, i.e. an accurate segmentation. In addition, the Manhattan distance is used instead of the usual L2 norm, because an L2 constraint tends to blur the generated picture.
As in many object detection and segmentation tasks, a hidden weapon occupies only a small portion of the whole picture, so equation (3) requires the generated picture to be sparse. The invention uses the L1 norm as the sparsity constraint rather than the theoretical L0 norm or the commonly used L2 norm, because the former requires solving an NP-hard problem, while the latter can only push each element toward a very small value and cannot guarantee that most elements are 0.
The loss function of the generator is given by equation (4), where λ and β are the weights of the reconstruction term L_1 and the sparsity term L_s in the overall loss; in practice λ and β are set to 800 and 5 respectively. The condition x and the reconstruction error supervise the generation process and ensure that the generated picture corresponds one-to-one with the input terahertz picture; otherwise the generated picture would be random and would not satisfy the target detection and segmentation task. The sparsity constraint L_s reduces the false alarm rate.
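The loss terms of equations (1)-(4) can be sketched compactly as follows; PyTorch is assumed, D and G refer to the discriminator and generator sketches given earlier, and the clamping constants exist only for numerical stability.

```python
import torch
import torch.nn.functional as F

LAMBDA, BETA = 800.0, 5.0      # weights of the reconstruction and sparsity terms

def d_loss(D, x, y, fake):
    """Equation (1): the discriminator maximises this, pushing D(x, y) -> 1 and D(x, fake) -> 0."""
    real_score = D(x, y).clamp(1e-7, 1 - 1e-7)
    fake_score = D(x, fake).clamp(1e-7, 1 - 1e-7)
    return torch.log(real_score).mean() + torch.log(1 - fake_score).mean()

def g_loss(D, x, y, fake):
    """Equation (4): adversarial term plus weighted reconstruction (2) and sparsity (3) terms."""
    fake_score = D(x, fake).clamp(1e-7, 1 - 1e-7)
    adv = torch.log(1 - fake_score).mean()     # minimised when D(x, fake) -> 1
    recon = F.l1_loss(fake, y)                 # mean-per-pixel Manhattan distance to the real mask
    sparse = fake.abs().mean()                 # scaled L1 norm of the generated mask
    return adv + LAMBDA * recon + BETA * sparse
```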
The model is trained with a gradient ascent algorithm to optimize the discriminator and a gradient descent algorithm to optimize the generator. When the generator is trained, the parameters of the discriminator are kept fixed and equation (4) is minimized; when the discriminator is trained, the current parameters of the generator are kept fixed and equation (1) is maximized. The discriminator and the generator are trained alternately in an iterative process until the model converges. In the specific implementation, the learning rate is set to 0.00012 and the minibatch size to 8; one iteration traverses the whole training set, and 200 iterations are performed in total to obtain the final model.
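A minimal sketch of this alternating optimisation, building on the Generator, Discriminator, d_loss and g_loss sketches above, is given below; the Adam optimizer and the placeholder data are assumptions, while the learning rate of 0.00012, the minibatch size of 8 and the 200 iterations come from the text.

```python
import torch

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=0.00012)
opt_d = torch.optim.Adam(D.parameters(), lr=0.00012)

# placeholder loader: in practice, minibatches of 8 preprocessed (image, mask) pairs
loader = [(torch.randn(8, 1, 256, 256), torch.randn(8, 1, 256, 256))]

for epoch in range(200):                                # 200 passes over the training set
    for x, y in loader:
        fake = G(x)                                     # noise z enters through dropout inside G
        # discriminator step: gradient ascent on equation (1), generator held fixed
        opt_d.zero_grad()
        (-d_loss(D, x, y, fake.detach())).backward()    # maximise by minimising the negative
        opt_d.step()
        # generator step: gradient descent on equation (4), discriminator held fixed
        opt_g.zero_grad()
        g_loss(D, x, y, G(x)).backward()
        opt_g.step()
```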
An improvement is also made to the network structure. As shown in fig. 3, compared with the traditional "Encoder-Decoder" structure, the invention adds connections between some decoding layers and encoding layers (the specific generator structure is shown in fig. 4), thereby obtaining low-level features that improve the model's ability to detect small targets. Compared with "U-Net" (Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention (2015) 234-241), the new model removes the connections that are very close to the output layer. As shown in fig. 3, the outputs of the first layers of the U-Net encoder contain a lot of edge and background information; connecting these layers to decoding layers close to the output layer would result in a higher false alarm rate, so the invention removes this part of the connections and finds a balance between the recall rate and the false alarm rate. Based on the loss function and the corresponding model structure, the optimization is solved iteratively with a gradient descent algorithm to obtain the final model parameters. The discriminator comprises 3 convolutional layers and a fully connected layer; the filter size of each convolutional layer is 5 × 5, the stride is 2 × 2, and the filter depths are 64, 128 and 256 respectively. The fully connected layer uses sigmoid as its activation function and outputs a one-dimensional scalar. As shown in fig. 4, the generator contains 8 convolutional layers and 8 transposed convolutional layers. The filter size and stride of all convolutional and transposed convolutional layers are 5 × 5 and 2 × 2; the depths of the 8 convolutional layers are, in sequence, 64, 128, 256, 512, 512, 512, 512, 1024, and the depths of the 8 transposed convolutional layers are, in sequence, 512, 512, 512, 512, 256, 128, 64, 1. The noise z is generated by dropout operations applied to the first three transposed convolutional layers, with each node retained with a probability of 50%.
Using the model trained in the above steps, a new picture to be detected is input into the generator to obtain its corresponding detection and segmentation result. In practical tests, the model keeps the false alarm rate at 11.35% while raising the recall rate to 88.46%, clearly superior to existing detection and segmentation algorithms.
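At test time the pipeline reduces to a single forward pass through the trained generator, as in the sketch below; binarising the output at 0 (the midpoint of the [-1, 1] range) and deriving the detection flag from the mask are assumptions of the sketch, not values given in the text.

```python
import torch

@torch.no_grad()
def detect_and_segment(G, image):
    """image: tensor of shape (1, 1, 256, 256) already mapped to [-1, 1]."""
    G.eval()                                   # disable dropout at test time
    mask = G(image)                            # generated segmentation mask
    binary = (mask > 0.0).float()              # pixels above the midpoint are treated as target
    contains_target = bool(binary.sum() > 0)   # detection: is any target pixel present?
    return contains_target, binary
```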
Embodiments of the present invention also provide a computer storage medium having a computer program stored thereon. The computer program, when executed by a processor, implements the method described above. For example, the computer storage medium is a computer-readable storage medium.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Claims (7)
1. A method for detecting and segmenting a target object in a terahertz image is characterized by comprising the following steps:
(1) taking an original terahertz image x in a training set as a training sample, marking the target object region in x manually to obtain a real segmentation mask y, preprocessing x and y at the same time, and converting all pixel values in the images to between -1 and 1;
(2) inputting the original terahertz image x and the random noise z into a generator, and generating a false segmentation mask by the generator;
(3) forming a real sample pair from the real segmentation mask and the corresponding original terahertz image, forming a false sample pair from the false segmentation mask and the corresponding original terahertz image, and inputting the real sample pair and the false sample pair into a discriminator for evaluation;
(4) respectively calculating loss functions of a generator and a discriminator, optimizing the generator by using a gradient descent algorithm to make the loss function of the generator as small as possible, optimizing the discriminator by using a gradient ascent algorithm to make the loss function of the discriminator as large as possible until the model converges, returning to the step (1), and entering the step (5) after traversing the training set;
(5) returning to the step (1) until the set iteration number is reached;
(6) storing the model trained in the above steps, and inputting a new picture to be detected into the generator to obtain the detection and segmentation result corresponding to the new picture to be detected.
2. The method for detecting and segmenting the target object in the terahertz image according to claim 1, wherein: the generator is a two-dimensional neural network G comprising 8 convolutional layers and 8 transposed convolutional layers, wherein the filter size and stride of all the convolutional layers and the transposed convolutional layers are 5 × 5 and 2 × 2, the depths of the 8 convolutional layers are 64, 128, 256, 512, 512, 512, 512, 1024 in sequence, and the depths of the 8 transposed convolutional layers are 512, 512, 512, 512, 256, 128, 64, 1 in sequence.
3. The method for detecting and segmenting the target object in the terahertz image according to claim 2, wherein: the random noise z is generated by dropout operations in the generator, applied to the first three transposed convolutional layers, and the probability of each node being retained is 50%.
4. The method for detecting and segmenting the target object in the terahertz image according to claim 1, wherein: the discriminator is a two-dimensional neural network D comprising 3 convolutional layers and 1 fully connected layer, the filter size of each convolutional layer is 5 × 5, the stride is 2 × 2, the filter depths are 64, 128 and 256 respectively, the fully connected layer uses sigmoid as its activation function, and the output result is a one-dimensional scalar.
5. The method of detecting and segmenting a target object in a terahertz image according to claim 1, wherein the loss function of the discriminator is:

L_D = E_(x,y)[log D(x, y)] + E_(x,z)[log(1 - D(x, G(x, z)))]

and the loss function of the generator is:

L_G = E_(x,z)[log(1 - D(x, G(x, z)))] + λ‖y - G(x, z)‖1 + β·L_s

wherein L_s = ‖G(x, z)‖1 and ‖·‖1 is the L1 norm; D(x, y) is the output of the discriminator and G(x, z) is the output of the generator; E_(x,y) is the expectation over the probability distribution of the real sample pairs and E_(x,z) is the expectation over the probability distribution of the false sample pairs; and λ and β are the weights of the reconstruction term and of L_s in the overall loss function.
6. The method for detecting and segmenting the target object in the terahertz image according to claim 1, wherein: the manual labelling in step (1) marks the target region in each terahertz image, setting the pixel value of the target region to 255 and the remaining positions to 0, and the preprocessing converts all pixel values to between -1 and 1 with the conversion formula p / 127.5 - 1, where p is a specific pixel value.
7. A computer storage medium having a computer program stored thereon, characterized in that: the computer program, when executed by a computer processor, implements the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910048648.9A CN109886970B (en) | 2019-01-18 | 2019-01-18 | Detection segmentation method for target object in terahertz image and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109886970A true CN109886970A (en) | 2019-06-14 |
CN109886970B CN109886970B (en) | 2023-06-09 |
Family
ID=66926216
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109886970B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105758865A (en) * | 2016-04-26 | 2016-07-13 | 河南工业大学 | Method for detecting foreign matter in grain packing material based on terahertz waves and detecting system |
CN107092040A (en) * | 2017-06-01 | 2017-08-25 | 上海理工大学 | Terahertz imaging rays safety detection apparatus and video procession method |
CN108764328A (en) * | 2018-05-24 | 2018-11-06 | 广东工业大学 | The recognition methods of Terahertz image dangerous material, device, equipment and readable storage medium storing program for executing |
CN108830225A (en) * | 2018-06-13 | 2018-11-16 | 广东工业大学 | The detection method of target object, device, equipment and medium in terahertz image |
CN109001833A (en) * | 2018-06-22 | 2018-12-14 | 天和防务技术(北京)有限公司 | A kind of Terahertz hazardous material detection method based on deep learning |
Non-Patent Citations (1)
Title |
---|
ZHAN OU et al.: "Segmentation of Concealed Objects in Active Terahertz Images" * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110428476A (en) * | 2019-07-05 | 2019-11-08 | 广东省人民医院(广东省医学科学院) | A kind of image conversion method and device based on multi-cycle production confrontation network |
CN110619632B (en) * | 2019-09-18 | 2022-01-11 | 华南农业大学 | Mango example confrontation segmentation method based on Mask R-CNN |
CN110619632A (en) * | 2019-09-18 | 2019-12-27 | 华南农业大学 | Mango example confrontation segmentation method based on Mask R-CNN |
CN111275692A (en) * | 2020-01-26 | 2020-06-12 | 重庆邮电大学 | Infrared small target detection method based on generation countermeasure network |
CN111275692B (en) * | 2020-01-26 | 2022-09-13 | 重庆邮电大学 | Infrared small target detection method based on generation countermeasure network |
CN111462060A (en) * | 2020-03-24 | 2020-07-28 | 湖南大学 | Method and device for detecting standard section image in fetal ultrasonic image |
CN111583276B (en) * | 2020-05-06 | 2022-04-19 | 西安电子科技大学 | CGAN-based space target ISAR image component segmentation method |
CN111583276A (en) * | 2020-05-06 | 2020-08-25 | 西安电子科技大学 | CGAN-based space target ISAR image component segmentation method |
WO2021258955A1 (en) * | 2020-06-24 | 2021-12-30 | 中兴通讯股份有限公司 | Method and apparatus for marking object outline in target image, and storage medium and electronic apparatus |
CN112001846A (en) * | 2020-08-21 | 2020-11-27 | 深圳职业技术学院 | Multi-target X-ray security inspection image synthesis method with higher resolution |
CN112287938A (en) * | 2020-10-29 | 2021-01-29 | 苏州浪潮智能科技有限公司 | Text segmentation method, system, device and medium |
CN112287938B (en) * | 2020-10-29 | 2022-12-06 | 苏州浪潮智能科技有限公司 | Text segmentation method, system, device and medium |
CN112508239A (en) * | 2020-11-22 | 2021-03-16 | 国网河南省电力公司电力科学研究院 | Energy storage output prediction method based on VAE-CGAN |
CN114820429A (en) * | 2022-02-18 | 2022-07-29 | 岳阳珞佳智能科技有限公司 | Chip glue-climbing height identification method and system based on generative countermeasure network |
CN114820429B (en) * | 2022-02-18 | 2023-06-16 | 湖南珞佳智能科技有限公司 | Chip glue climbing height identification method and system based on generation type countermeasure network |
Also Published As
Publication number | Publication date |
---|---|
CN109886970B (en) | 2023-06-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |