CN109583454A - Image feature extraction method based on adversarial neural network - Google Patents

Image feature extraction method based on adversarial neural network

Info

Publication number
CN109583454A
Authority
CN
China
Prior art keywords
image, extracting, convolutional neural network, neural networks
Prior art date
2018-11-14
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811353813.3A
Other languages
Chinese (zh)
Inventor
史再峰
李晖
曹清洁
高静
王荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2018-11-14
Filing date
2018-11-14
Publication date
2019-04-05
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201811353813.3A priority Critical patent/CN109583454A/en
Publication of CN109583454A publication Critical patent/CN109583454A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the fields of deep learning and image processing, and aims to extract image feature points that carry semantic meaning from the image spatial features learned at different levels of a deep learning network. To this end, the technical solution adopted by the present invention is an image feature extraction method based on an adversarial neural network, with the following steps: 1) image preprocessing: the image data are centered and normalized, and the processed image data are used as the input of a convolutional neural network; 2) training the neural network that extracts feature points: the convolutional neural network used for extracting features is trained in the adversarial form of a generative adversarial network; 3) image semantic feature points are obtained with the trained convolutional neural network. The present invention is mainly applied to image processing scenarios.

Description

Image feature extraction method based on adversarial neural network
Technical field
The present invention relates to the fields of deep learning and images, and more particularly to extracting image semantic feature points, in image processing applications, from the image features learned by a deep learning convolutional neural network. It relates in particular to an image feature extraction method based on an adversarial neural network.
Background technique
Feature extraction is a key component of the computer vision field and one of the key technologies of digital image processing. It is the basis of other digital image processing tasks such as image stitching, panoramic video, and intelligent video surveillance, and achieving high-quality image feature extraction is vital to the whole system.
Feature extraction is the process of obtaining information by applying transformations to image data. Conventional methods such as the scale-invariant feature transform (SIFT) algorithm detect and describe local features in an image by finding extreme points in scale space and extracting their position, scale, and rotation invariants. The histogram of oriented gradients (HOG) method performs image feature extraction by computing and accumulating histograms of gradient directions over local regions of the image.
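For illustration only, a minimal sketch of the two conventional extractors mentioned above, using the OpenCV Python bindings; the image path is a placeholder and the HOG window size is OpenCV's default, not a value taken from this patent.

```python
import cv2

# Load a grayscale test image (placeholder path).
img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT: find scale-space extrema and describe the local region around each keypoint.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# HOG: histograms of gradient directions over local cells of a fixed-size window.
hog = cv2.HOGDescriptor()                            # default 64x128 detection window
hog_vector = hog.compute(cv2.resize(img, (64, 128)))

print(len(keypoints), hog_vector.shape)
```

Both extractors operate purely on spatial gradients, which is the limitation the following paragraph points out.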
Traditional image feature extraction methods all extract features from the spatial characteristics of the image. When facing tasks such as image classification and image segmentation, however, they do not take into account the meaning expressed by the picture, i.e. the semantic features of the image, so the results are unsatisfactory for completing such tasks.
As the field of computer vision continues to develop, a large number of deep-learning-based image algorithms have been studied and improved to follow this trend. The main work has revolved around the network structures used in deep learning: by improving the network structure, deep-learning image algorithms obtain better results.
Summary of the invention
In order to overcome the deficiencies of the prior art, the present invention aims to propose a method of image feature point extraction based on a deep learning convolutional neural network. The method extracts image feature points from the image spatial features and semantic features captured at different levels of the deep learning network. To this end, the technical solution adopted by the present invention is an image feature extraction method based on an adversarial neural network, with the following steps:
1) image preprocessing: the image data are centered and normalized, and the processed image data are used as the input of a convolutional neural network;
2) training the neural network that extracts feature points: the convolutional neural network used for extracting features is trained in the adversarial form of a generative adversarial network;
3) image semantic feature points are obtained with the trained convolutional neural network.
The specific steps of image preprocessing are as follows: first, the pixel values xi of all pixels of the picture are summed and divided by the total number of pixels N to obtain the image mean μ; then μ is subtracted from each pixel value in the image, the differences are squared, the squared differences of all pixels are summed, and the square root of the sum is taken to obtain σ; finally, for every pixel of the image, μ is subtracted from the pixel value and the result is divided by σ to obtain the preprocessed image, according to the following formula:
xi' = (xi - μ) / σ
where xi is the pixel value of the i-th pixel and xi' is the output for the i-th pixel after the image preprocessing step;
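A minimal sketch of this preprocessing step in NumPy, following the description above literally (σ is the square root of the summed squared deviations; dividing the sum by N before taking the root would give the usual standard deviation). The function name and floating-point conversion are illustrative.

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Center and normalize one image as described above."""
    x = image.astype(np.float64)
    mu = x.sum() / x.size                    # image mean μ over all N pixels
    sigma = np.sqrt(((x - mu) ** 2).sum())   # σ as described (no division by N)
    return (x - mu) / sigma                  # xi' = (xi - μ) / σ
```

For a Cifar10 image this would be applied to the 32x32x3 pixel array before the image is fed to the convolutional network.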
The specific process of step 2) is as follows: the output of the neural network G that extracts feature points is connected to the input of the convolutional neural network D that extracts features, and the parameters of the neural network G are fixed during training; the preprocessed data are used as the input of the convolutional neural network D with their label set to 0, and the following formula is used as the loss for training:
Loss = -(log(1 - D(G(z))) + log D(y))
where D is the convolutional neural network that extracts features, G is the neural network that extracts feature points, z is the input of the feature-point network, y is the input of the feature-extraction convolutional network, G(z) is the output of the feature-point network, and D(y) is the output of the feature-extraction convolutional network. The logarithm of the feature-extraction network's output D(y) is computed, the logarithm of 1 minus its output D(G(z)) is computed, and the two are added; this sum serves as the network error, and its negative is taken as the loss value used to train the network.
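A sketch of the loss formula above in PyTorch, under the assumptions that D outputs a probability in (0, 1) for each sample and that the per-sample losses are averaged over a batch; the function and argument names are illustrative, not taken from the patent.

```python
import torch

def adversarial_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Loss = -(log(1 - D(G(z))) + log D(y)), averaged over the batch.

    d_real holds D(y) for the preprocessed images;
    d_fake holds D(G(z)) for the output of the feature-point network G.
    """
    return -(torch.log(1.0 - d_fake) + torch.log(d_real)).mean()
```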
The features and beneficial effects of the present invention are:
1. The method extracts image features with a convolutional neural network, avoiding the dependence of traditional image stitching methods on image features such as corner points.
2. The method extracts image features with a convolutional neural network, so the selected feature point positions are more accurate than those of conventional methods.
Brief description of the drawings:
Fig. 1 is a schematic diagram of the convolutional neural network structure.
Fig. 2 is a diagram of the network training method of the image semantic feature point extraction method based on an adversarial neural network.
Fig. 3 is a flow chart of the image semantic feature point extraction method based on an adversarial neural network.
Specific embodiment
In the present invention, two deep learning networks that confront each other are used to extract image features, and the semantic feature points of the image are obtained from the extracted semantic features, so that the extracted feature points carry the semantic features of the picture. The invention is mainly divided into the following parts:
1. image preprocessing
To allow image features to be better extracted by the convolutional neural network and to improve training speed, the image data in the Cifar10 database are first centered and normalized, and the processed image data are used as the input of the convolutional neural network. The method follows the formula given above: first, the pixel values of all pixels of the picture are summed and divided by the total number of pixels N to obtain the image mean μ; then μ is subtracted from each pixel value, the differences are squared, the squared differences of all pixels are summed, and the square root of the sum is taken to obtain σ; finally, μ is subtracted from every pixel value and the result is divided by σ to obtain the preprocessed image.
2. Training the neural network that extracts feature points
This part consists of a convolutional neural network and is the main part of the image feature extraction process; the network structure is shown in Fig. 2. The network input is image data, and the network output is image data similar to the input. This network is trained adversarially against the convolutional neural network that extracts features. Specifically, the output of the neural network that extracts feature points is connected to the input of the convolutional neural network that extracts features, and the parameters of the feature-extraction convolutional neural network are fixed during training; the preprocessed data are used as the input of the network with their label set to 0, and the following formula is used as the loss for training.
Loss = -(log(1 - D(G(z))) + log D(y))
where D is the convolutional neural network that extracts features, G is the neural network that extracts feature points, z is the input of the feature-point network, y is the input of the feature-extraction convolutional network, G(z) is the output of the feature-point network, and D(y) is the output of the feature-extraction convolutional network. The logarithm of the feature-extraction network's output D(y) is computed, the logarithm of 1 minus its output D(G(z)) is computed, and the two are added; this sum serves as the network error, and its negative is taken as the loss value used to train the network.
After training, we obtain a network whose output substantially reduces the classification probability produced by the convolutional neural network that extracts features.
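A sketch of one training iteration as worded in this subsection, where the feature-extraction network D is held fixed and only the feature-point network G is updated (the Summary and claim 3 describe fixing G instead; the sketch follows the present paragraph). It also assumes the same preprocessed batch serves as both y and z; all names are illustrative.

```python
import torch

def train_step(G: torch.nn.Module, D: torch.nn.Module,
               optimizer_g: torch.optim.Optimizer,
               images: torch.Tensor) -> float:
    """One adversarial update of G against a frozen feature-extraction network D."""
    for p in D.parameters():            # fix the feature-extraction network
        p.requires_grad_(False)

    optimizer_g.zero_grad()
    d_real = D(images)                  # D(y): preprocessed images, labelled 0
    d_fake = D(G(images))               # D(G(z)): images modified by G
    loss = -(torch.log(1.0 - d_fake) + torch.log(d_real)).mean()
    loss.backward()                     # gradients flow only into G
    optimizer_g.step()
    return loss.item()
```

Minimizing this loss drives D(G(z)) downward, which corresponds to the reduced classification probability described above.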
3. Obtaining image semantic feature points
Once the neural network that extracts feature points has been trained, it can be considered that the minor modifications this network makes to a picture greatly affect the classification of the modified picture by the convolutional neural network that extracts features. The places modified by the feature-point network therefore strongly embody the semantic features of the image. Accordingly, when the output of the network trained in the previous step is compared with the original image, the changed points are the semantic feature points of the image.
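A sketch of step 3 under stated assumptions: the trained feature-point network G receives a preprocessed image tensor of shape (1, C, H, W), and pixels where its output differs from the input by more than an illustrative threshold are reported as semantic feature points. The threshold value and all names are assumptions rather than values from the patent.

```python
import numpy as np
import torch

def semantic_feature_points(G: torch.nn.Module, image: torch.Tensor,
                            threshold: float = 0.1) -> np.ndarray:
    """Return (x, y) coordinates of pixels that G changed noticeably."""
    with torch.no_grad():
        output = G(image)                           # image as modified by G
    diff = (output - image).abs().squeeze(0)        # drop the batch dimension
    diff = diff.max(dim=0).values.cpu().numpy()     # largest change over channels
    ys, xs = np.nonzero(diff > threshold)           # pixels changed by G
    return np.stack([xs, ys], axis=1)               # semantic feature points
```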
The image stitching method based on feature extraction with a deep learning convolutional neural network designed here identifies image features through the deep learning network. Before use, the convolutional neural network must be trained with a large amount of data. In actual use, a suitable training set can be chosen according to the circumstances, and the network structure can be adjusted accordingly. During actual training, difficulties such as the parameters failing to converge may arise, and the parameters then need to be fine-tuned manually.

Claims (3)

1. An image feature extraction method based on an adversarial neural network, characterized in that the steps are as follows:
1) image preprocessing: the image data are centered and normalized, and the processed image data are used as the input of a convolutional neural network;
2) training the neural network that extracts feature points: the convolutional neural network used for extracting features is trained in the adversarial form of a generative adversarial network;
3) image semantic feature points are obtained with the trained convolutional neural network.
2. The image feature extraction method based on an adversarial neural network according to claim 1, characterized in that the specific steps of image preprocessing are as follows: first, the pixel values xi of all pixels of the picture are summed and divided by the total number of pixels N to obtain the image mean μ; then μ is subtracted from each pixel value in the image, the differences are squared, the squared differences of all pixels are summed, and the square root of the sum is taken to obtain σ; finally, for every pixel of the image, μ is subtracted from the pixel value and the result is divided by σ to obtain the preprocessed image, according to the following formula:
xi' = (xi - μ) / σ
where xi is the pixel value of the i-th pixel and xi' is the output for the i-th pixel after the image preprocessing step.
3. The image feature extraction method based on an adversarial neural network according to claim 1, characterized in that the specific process of step 2) is as follows: the output of the neural network G that extracts feature points is connected to the input of the convolutional neural network D that extracts features, and the parameters of the neural network G are fixed during training; the preprocessed data are used as the input of the convolutional neural network D with their label set to 0, and the following formula is used as the loss for training:
Loss = -(log(1 - D(G(z))) + log D(y))
where D is the convolutional neural network that extracts features, G is the neural network that extracts feature points, z is the input of the feature-point network, y is the input of the feature-extraction convolutional network, G(z) is the output of the feature-point network, and D(y) is the output of the feature-extraction convolutional network; the logarithm of the feature-extraction network's output D(y) is computed, the logarithm of 1 minus its output D(G(z)) is computed, and the two are added; this sum serves as the network error, and its negative is taken as the loss value used to train the network.
CN201811353813.3A 2018-11-14 2018-11-14 Image characteristic extracting method based on confrontation neural network Pending CN109583454A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811353813.3A CN109583454A (en) 2018-11-14 2018-11-14 Image characteristic extracting method based on confrontation neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811353813.3A CN109583454A (en) 2018-11-14 2018-11-14 Image characteristic extracting method based on confrontation neural network

Publications (1)

Publication Number Publication Date
CN109583454A true CN109583454A (en) 2019-04-05

Family

ID=65922353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811353813.3A Pending CN109583454A (en) 2018-11-14 2018-11-14 Image characteristic extracting method based on confrontation neural network

Country Status (1)

Country Link
CN (1) CN109583454A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521603A (en) * 2011-11-17 2012-06-27 西安电子科技大学 Method for classifying hyperspectral images based on conditional random field
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network
CN108268870A (en) * 2018-01-29 2018-07-10 重庆理工大学 Multi-scale feature fusion ultrasonoscopy semantic segmentation method based on confrontation study
CN108460391A (en) * 2018-03-09 2018-08-28 西安电子科技大学 Based on the unsupervised feature extracting method of high spectrum image for generating confrontation network
CN108520202A (en) * 2018-03-15 2018-09-11 华南理工大学 Confrontation robustness image characteristic extracting method based on variation spherical projection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘海东 et al.: "Suspicious region labeling of breast cancer pathology images based on generative adversarial networks", 《科研信息化技术与应用》 *
王坤峰 et al.: "Research progress and prospects of the generative adversarial network (GAN)", 《自动化学报》 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211164A (en) * 2019-06-05 2019-09-06 中德(珠海)人工智能研究院有限公司 The image processing method of characteristic point operator based on neural network learning basic figure

Similar Documents

Publication Publication Date Title
Tu et al. RGBT salient object detection: A large-scale dataset and benchmark
CN109344701B (en) Kinect-based dynamic gesture recognition method
US11908244B2 (en) Human posture detection utilizing posture reference maps
Zhang et al. Supervised pixel-wise GAN for face super-resolution
Meng et al. Sample fusion network: An end-to-end data augmentation network for skeleton-based human action recognition
CN107316031A (en) The image characteristic extracting method recognized again for pedestrian
CN107194418B (en) Rice aphid detection method based on antagonistic characteristic learning
CN109871845B (en) Certificate image extraction method and terminal equipment
CN110930411B (en) Human body segmentation method and system based on depth camera
CN104063706A (en) Video fingerprint extraction method based on SURF algorithm
CN108257155B (en) Extended target stable tracking point extraction method based on local and global coupling
CN107862680B (en) Target tracking optimization method based on correlation filter
Wang et al. Multiscale deep alternative neural network for large-scale video classification
CN108021869A (en) A kind of convolutional neural networks tracking of combination gaussian kernel function
CN109977834B (en) Method and device for segmenting human hand and interactive object from depth image
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN110826534A (en) Face key point detection method and system based on local principal component analysis
CN107729863B (en) Human finger vein recognition method
CN109583454A (en) Image characteristic extracting method based on confrontation neural network
CN109165551B (en) Expression recognition method for adaptively weighting and fusing significance structure tensor and LBP characteristics
Gao et al. Recurrent calibration network for irregular text recognition
CN110443277A (en) A small amount of sample classification method based on attention model
Zhang et al. High-frequency attention residual GAN network for blind motion deblurring
CN115294424A (en) Sample data enhancement method based on generation countermeasure network
CN115393491A (en) Ink video generation method and device based on instance segmentation and reference frame

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-04-05