CN109948776A - LBP-based image label generation method for an adversarial network model - Google Patents

LBP-based image label generation method for an adversarial network model

Info

Publication number
CN109948776A
CN109948776A (application CN201910140998.8A)
Authority
CN
China
Prior art keywords
lbp
image
texture pattern
adversarial network
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910140998.8A
Other languages
Chinese (zh)
Inventor
岳学军
程子耀
岑振钊
王林惠
凌康杰
卢杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University filed Critical South China Agricultural University
Priority to CN201910140998.8A priority Critical patent/CN109948776A/en
Publication of CN109948776A publication Critical patent/CN109948776A/en
Pending legal-status Critical Current

Links

Abstract

Disclosed is an LBP-based image label generation method for an adversarial network model. The method comprises the following steps: for an input image, perform recognition-target extraction using a generative adversarial network model and update the model, obtaining an initial discrimination result set; extract the texture pattern of the original image based on the LBP algorithm, obtaining the complete texture pattern of the image; remove the many isolated small noise points and pseudo-targets in the texture pattern by computing the coefficient of variation, obtaining the denoised image texture pattern; input the initial result set and the denoised image texture pattern jointly into the adversarial network discriminator, and perform adversarial network texture-continuity label training based on the image texture pattern, obtaining the labels of the detection targets and effectively eliminating the randomness of the initial discrimination result set.

Description

LBP-based image label generation method for an adversarial network model
Technical field
The present invention relates to the field of computer vision research, and in particular to an LBP-based image label generation method for an adversarial network model.
Background technique
With the deepening of target recognition research based on adversarial networks and the ever-increasing demands on the target detection rate of network models, label generation methods for adversarial networks have received growing attention. A typical adversarial network is built by stacking deep convolutional layers; the convolution process of the discriminator inevitably loses semantic information of the image, and a convolution-based discriminator cannot resist adversarial-sample attacks, mainly because the model lacks rotation invariance. How to construct a robust discriminator model has therefore become a research hotspot.
Image label generation is an important component of the adversarial network model and directly affects the accuracy with which the adversarial network recognizes image targets. At present, one important reason why the generating function degrades the recognition accuracy of adversarial networks is that the generating-function model lacks rotation invariance and cannot autonomously adjust the weighting coefficients of label generation according to the texture features of objects in the image. An LBP-based image label generation method for adversarial network models is therefore needed.
The LBP algorithm has become a research focus because of its good texture-extraction performance and its rotation invariance. The prior art is strongly affected by factors such as image rotation and background change, which degrades the recognition accuracy of the adversarial network model during image label generation.
Summary of the invention
The main object of the present invention is to overcome the shortcomings and deficiencies of existing adversarial network label generation techniques and to provide an LBP-based image label generation method for an adversarial network model that can effectively improve the recognition accuracy of adversarial network models.
In order to achieve the above object, the present invention adopts the following technical scheme:
The LBP-based image label generation method for an adversarial network model of the present invention specifically comprises the following steps:
S1. For the input image, perform recognition-target extraction using a generative adversarial network model and update the model, obtaining an initial discrimination result set;
S2. Extract the texture pattern of the original image based on the LBP algorithm, obtaining the complete texture pattern of the image;
S3. Remove the many isolated small noise points or pseudo-targets in the texture pattern by computing the coefficient of variation, obtaining the denoised image texture pattern;
S4. Input the initial result set and the denoised image texture pattern jointly into the adversarial network discriminator, and perform adversarial network texture-continuity label training based on the image texture pattern, obtaining the labels of the detection targets and effectively eliminating the randomness of the initial discrimination result set.
As a preferred technical solution, step S1 is specifically:
S11. Construct a Wasserstein generative adversarial network (WGAN) model, the WGAN model comprising an image generator and an image discriminator, and using the cross-entropy function as the loss function of both;
S12. In the generative adversarial network WGAN model constructed in S11, the generator consists of an encoder and a decoder;
S13. In the generator constructed in S12, the encoder consists of eight convolutional layers;
S14. In the generator constructed in S12, the decoder consists of eight deconvolution layers;
S15. In the generator constructed in S12, the output of the encoder follows two paths: one continues the convolution process, and the other is merged by a Concatenate operation into the input of the corresponding deconvolution layer of the decoder;
S16. In the generative adversarial network WGAN model constructed in S11, the discriminator consists of four convolutional layers;
S17. In the generative adversarial network WGAN model constructed in S11, for a fixed generator G, the optimal discriminator D*_G has the following expression:
D*_G(x) = P_data(x) / (P_data(x) + P_g(x))
where P_data(x) is the real data distribution and P_g(x) is the generated data distribution;
S18. Input the original image data set and train the generative adversarial network WGAN model constructed in step S11, obtaining the initial discrimination result set.
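This optimum is not derived in the patent; for reference, it follows from standard GAN theory. Writing the discriminator's value function as

V(G, D) = ∫ [ P_data(x) · log D(x) + P_g(x) · log(1 − D(x)) ] dx,

the integrand a · log D + b · log(1 − D), with a = P_data(x) and b = P_g(x), is maximized pointwise at D = a / (a + b), which yields the expression for D*_G(x) above.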
As a preferred technical solution, step S2 is specifically:
Assume the pixel gray value at point c of the image target area is g_c, and the gray values of the corresponding eight neighborhood sample points are g_p, p = 0, ..., P−1. Binarize each neighborhood pixel against the center point, obtaining an eight-bit binary string. The image uniform pattern measure U(LBP_{P,R}) is obtained by neighborhood comparison; if U(LBP_{P,R}) < T, the pixel is judged to belong to the detection target and its texture pattern is LBP(g_c); otherwise the pixel belongs to the background area and its texture pattern is P+1, where T is the pattern threshold set manually according to the scene, P is the number of sample points around the center point g_c, and R is the distance between the sample points and the center point. The specific formula is as follows:
LBP_{P,R}(g_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p, with s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,
which represents the texture feature formed by the gray value g_c at the center pixel c and the gray values g_p, p = 0, ..., P−1, of its surrounding pixels.
As a preferred technical solution, the step of obtaining the image uniform pattern measure by neighborhood comparison is specifically: for the circular binary string formed by the neighborhood comparison, the number of signal transitions in the string is the uniform pattern measure of the pixel, and U(LBP_{P,R}) is defined as follows:
U(LBP_{P,R}) = |s(g_{P−1} − g_c) − s(g_0 − g_c)| + Σ_{p=1}^{P−1} |s(g_p − g_c) − s(g_{p−1} − g_c)|
where g_0 is the neighborhood pixel directly to the right of g_c, and R and P are respectively the radius of the sample points from the center point g_c and the number of sample points.
As a preferred technical solution, in step S3, the step of removing the many isolated small noise points or pseudo-targets in the texture pattern by computing the coefficient of variation is specifically:
Assume the texture extraction result set is R_lbp. Establish a Poisson-distribution noise model from the components of the current texture extraction result set and compute its histogram Hist[R_lbp]; compute the correlation variance Var[R_lbp] of the resulting histogram; finally, sort the resulting correlation variances and find the maximum value, which is the coefficient of variation λ. When the coefficient of variation λ is greater than 1, the discrimination result set is judged to have mutated;
λ = max(Var[R_lbp]) / C
where R_lbp is the initial area discrimination result set, Var[·] is the correlation variance, and C is a constant.
As a preferred technical solution, in step S3, the formula of the texture extraction result set histogram Hist[R_lbp] is as follows:
Hist[R_lbp](k) = Σ_{i=1}^{H} Σ_{j=1}^{W} δ(R_lbp(i, j), k)
where H and W are respectively the height and width of the image;
The formula for the correlation variance Var[R_lbp] of the resulting histogram is as follows:
Var[R_lbp] = E{[X_lbp − E(X_lbp)]²} = E(X_lbp²) − (E(X_lbp))²
As a preferred technical solution, in step S4, the process of performing adversarial network texture-continuity label training based on the image texture pattern and obtaining the detection target labels is as follows:
Assume the extracted, denoised texture pattern is x, and the real data is distributed as y ~ p_data. Then the input of the adversarial network discriminator is D(x, y) and the input of the adversarial network generator is G(x, z), and the loss function of the adversarial network is as follows:
L_cGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 − D(x, G(x, z)))]
where G is the generator function of the model, D is the discriminator function of the model, and L_cGAN(G, D) describes the loss function of the adversarial network model. If L_cGAN(G, D) < ε, the texture-continuity label training is complete, where ε is a manually set error coefficient.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
The present invention performs recognition-target extraction on the input image using a generative adversarial network model and updates the model, obtaining an initial discrimination result set. Exploiting the insensitivity of LBP texture features to rotation, the texture pattern of the original image is extracted, yielding the complete texture pattern of the image. The many isolated small noise points and pseudo-targets in the texture pattern are then removed by computing the coefficient of variation, yielding the denoised image texture pattern data. Finally, the initial result set and the denoised image texture pattern are input jointly into the adversarial network discriminator, and adversarial network texture-continuity label training is performed based on the image texture pattern, obtaining the labels of the detection targets. This method effectively overcomes the image semantic distortion caused by the convolutional layers of the adversarial network model and eliminates the randomness of the initial discrimination result set. Since processes such as the local binary pattern and the coefficient of variation run quickly, the algorithm also has good performance.
Detailed description of the invention
Fig. 1 is the flow chart of image label generation according to the invention.
Fig. 2 is the overall structure diagram of the generative adversarial network model of the invention.
Fig. 3 is the structure diagram of the generator and discriminator of the generative adversarial network of the invention.
Fig. 4(a)-Fig. 4(d) are experimental images of the adversarial network model of the invention.
Specific embodiment
The present invention will now be described in further detail with reference to the embodiments and the accompanying drawings, but embodiments of the present invention are not limited thereto.
Embodiment
The LBP-based image label generation method for an adversarial network model developed by the present invention was implemented in Python on a microcomputer under Ubuntu 16.04, using object-oriented design methods and software engineering standards.
Fig. 1 is the specific flow chart of the method of the invention. Taking Fig. 1 as an example, some specific implementation processes of the present invention are described below. The method of the invention is an LBP-based image label generation method for an adversarial network model, the specific steps of which are:
S1. For the input image, perform recognition-target extraction using the generative adversarial network model and update the model, obtaining the initial discrimination result set. The specific method used in step S1 is:
S11. Construct a Wasserstein generative adversarial network (WGAN) model comprising an image generator and an image discriminator, using the cross-entropy function as the loss function of both. Fig. 2 is the overall structure diagram of the constructed generative adversarial network model, and Fig. 3 shows the specific structure of the generator and discriminator of the constructed generative adversarial network model.
S12. In the generative adversarial network WGAN model constructed in S11, the generator consists of an encoder and a decoder.
S13. In the generator constructed in S12, the encoder consists of eight convolutional layers with the structure C64-C128-C256-C512-C512-C512-C512-C512.
S14. In the generator constructed in S12, the decoder consists of eight deconvolution layers with the structure CD512-CD512-CD512-CD512-CD256-CD128-CD64-CD3.
S15. In the generator constructed in S12, the output of the encoder follows two paths: one continues the convolution process, and the other is merged by a Concatenate operation into the input of the corresponding deconvolution layer of the decoder.
S16. In the generative adversarial network WGAN model constructed in S11, the discriminator consists of four convolutional layers with the structure C64-C128-C256-C512.
S17. In the generative adversarial network WGAN model constructed in S11, for a fixed generator G, the optimal discriminator D*_G has the following expression:
D*_G(x) = P_data(x) / (P_data(x) + P_g(x))
where P_data(x) is the real data distribution and P_g(x) is the generated data distribution.
S18. Input the original image data set and train the generative adversarial network WGAN model constructed in step S11, obtaining the initial discrimination result set. A code sketch of the network structure follows.
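For illustration, below is a minimal PyTorch sketch of the encoder-decoder generator with Concatenate skip connections (S13-S15) and the four-layer discriminator (S16). The patent specifies only the channel counts; the kernel size of 4, stride of 2, the activation choices, the omitted noise input z, and the discriminator's 6-channel input (image concatenated with its condition) plus a final 1-channel projection are assumptions borrowed from common pix2pix-style implementations.

import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Encoder C64-C128-C256-C512x5, decoder CD512x4-CD256-CD128-CD64-CD3,
    with Concatenate skip connections between mirrored layers (S13-S15)."""
    def __init__(self):
        super().__init__()
        enc_ch = [3, 64, 128, 256, 512, 512, 512, 512, 512]
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(enc_ch[i], enc_ch[i + 1], 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True))
            for i in range(8)])
        # Decoder input channels double (except the first) because of the skips.
        dec_in = [512, 1024, 1024, 1024, 1024, 512, 256, 128]
        dec_out = [512, 512, 512, 512, 256, 128, 64, 3]
        self.decoders = nn.ModuleList([
            nn.Sequential(
                nn.ConvTranspose2d(dec_in[i], dec_out[i], 4, stride=2, padding=1),
                nn.ReLU(inplace=True) if i < 7 else nn.Tanh())
            for i in range(8)])

    def forward(self, x):
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
        for i, dec in enumerate(self.decoders):
            if i > 0:  # Concatenate the mirrored encoder output (S15).
                x = torch.cat([x, skips[7 - i]], dim=1)
            x = dec(x)
        return x

class Discriminator(nn.Module):
    """Four convolutional layers C64-C128-C256-C512 (S16)."""
    def __init__(self, in_channels=6):  # image + condition, concatenated
        super().__init__()
        ch = [in_channels, 64, 128, 256, 512]
        layers = []
        for i in range(4):
            layers += [nn.Conv2d(ch[i], ch[i + 1], 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        layers += [nn.Conv2d(512, 1, 4, padding=1)]  # patch-level real/fake logits
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

With a 3 x 256 x 256 input, the encoder halves the spatial resolution eight times down to 1 x 1 and the decoder mirrors it back to 256 x 256, as in the U-Net arrangement of Fig. 3.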
S2. Extract the texture pattern of the original image based on the LBP algorithm, obtaining the complete texture pattern of the image. The texture pattern extraction of the original image based on the LBP algorithm is specifically:
Assume the pixel gray value at point c of the image target area is g_c, and the gray values of the corresponding eight neighborhood sample points are g_p, p = 0, ..., P−1. Binarize each neighborhood pixel against the center point, obtaining an eight-bit binary string. The image uniform pattern measure U(LBP_{P,R}) is obtained by neighborhood comparison; if U(LBP_{P,R}) < T, the pixel is judged to belong to the detection target and its texture pattern is LBP(g_c); otherwise the pixel belongs to the background area and its texture pattern is P+1, where T is the pattern threshold set manually according to the scene, P is the number of sample points around the center point g_c, and R is the distance between the sample points and the center point:
LBP_{P,R}(g_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p, with s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,
which represents the texture feature formed by the gray value g_c at the center pixel c and the gray values g_p, p = 0, ..., P−1, of its surrounding pixels.
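A minimal NumPy sketch of this step is given below; it computes the eight-neighbor binary string, the transition count U, and assigns pattern P+1 to non-uniform (background) pixels, following the formulas above. The fixed R = 1 sampling grid and leaving border pixels as background are assumptions, since the patent does not specify them.

import numpy as np

def lbp_texture_pattern(img, T=2, P=8):
    """Per-pixel LBP texture pattern (S2): uniform pixels (U < T) get their
    LBP code; non-uniform pixels are marked background with pattern P + 1."""
    H, W = img.shape
    out = np.full((H, W), P + 1, dtype=np.int32)
    # Eight neighbors at R = 1, starting directly right of the center (g_0),
    # ordered counter-clockwise around the circle.
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            gc = img[i, j]
            s = [1 if img[i + di, j + dj] >= gc else 0 for di, dj in offsets]
            # U: number of 0/1 transitions in the circular binary string;
            # s[-1] wraps around, giving the |s(g_{P-1}) - s(g_0)| term.
            U = sum(abs(s[p] - s[p - 1]) for p in range(P))
            if U < T:
                out[i, j] = sum(s[p] << p for p in range(P))  # LBP(g_c)
    return out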
S3. Remove the many isolated small noise points or pseudo-targets in the texture pattern by computing the coefficient of variation, obtaining the denoised image texture pattern. The specific steps are as follows:
Assume the texture extraction result set is R_lbp. Establish a Poisson-distribution noise model from the components of the current texture extraction result set and compute its histogram Hist[R_lbp]; compute the correlation variance Var[R_lbp] of the resulting histogram; finally, sort the resulting correlation variances and find the maximum value, which is the coefficient of variation λ. When the coefficient of variation λ is greater than 1, the discrimination result set is judged to have mutated;
λ = max(Var[R_lbp]) / C
where R_lbp is the initial area discrimination result set, Var[·] is the correlation variance, and C is a constant.
The formula of the texture extraction result set histogram Hist[R_lbp] is as follows:
Hist[R_lbp](k) = Σ_{i=1}^{H} Σ_{j=1}^{W} δ(R_lbp(i, j), k)
where H and W are respectively the height and width of the image.
The formula for the correlation variance Var[R_lbp] of the resulting histogram is as follows:
Var[R_lbp] = E{[X_lbp − E(X_lbp)]²} = E(X_lbp²) − (E(X_lbp))²
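The mutation decision of step S3 can be sketched as follows. Treating the result set as a list of per-component H x W pattern maps and the constant C are assumptions, since the patent leaves both unspecified; the variance is computed from the normalized histogram via E(X²) − (E X)², matching the formula above.

import numpy as np

def coefficient_of_variation(R_lbp, C=1.0):
    """S3: histogram the texture extraction result set, compute the variance
    of each component's histogram, and take lambda = max(Var) / C."""
    variances = []
    for component in R_lbp:  # each component is an H x W pattern map
        values, counts = np.unique(component, return_counts=True)
        probs = counts / counts.sum()                  # Hist[R_lbp], normalized
        mean = (values * probs).sum()                  # E(X_lbp)
        var = (values ** 2 * probs).sum() - mean ** 2  # E(X^2) - (E X)^2
        variances.append(var)
    lam = max(variances) / C
    return lam, lam > 1.0  # mutation judged when lambda > 1 (S3)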
S4. Input the initial result set and the denoised image texture pattern jointly into the adversarial network discriminator, and perform adversarial network texture-continuity label training based on the image texture pattern, obtaining the labels of the detection targets and effectively eliminating the randomness of the initial discrimination result set. The adversarial network texture-continuity label training based on the image texture pattern, which obtains the detection target labels, is specifically:
S41. Input the original image and the image texture pattern jointly into the adversarial network discriminator and perform adversarial network texture-continuity label training;
S42. For the adversarial network texture-continuity label training of S41, the adversarial network loss function is the following:
L_cGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 − D(x, G(x, z)))]
where D is the discriminator network fitting function, G is the generator network fitting function, x is the denoised texture pattern, and y ~ p_data, where p_data here is the original image distribution;
S43. For the adversarial network loss function, the final objective of the adversarial network training loss function is:
G* = arg min_G max_D L_cGAN(G, D)
The purpose of this formula is that the training objectives of the two are opposed. For a given generator G, the discriminator D is optimized to maximize the log-likelihood so as to discriminate the sources of G(z) and x, i.e. the discriminator D can correctly distinguish the generated image G(z) from the real image x; conversely, for a given discriminator D, the generator G is optimized to minimize the log-likelihood so that the image G(z) approaches the distribution of the real images x, i.e. the generator's output can pass for real. Fig. 4(b) and Fig. 4(d) are original images, Fig. 4(a) is the image label generated by the adversarial network model, and Fig. 4(c) is the image label generated by the present invention. As can be seen from Fig. 4(a) and (c), by combining the binarization characteristics of the LBP texture pattern, the method of the present invention eliminates the influence of image noise and pseudo-targets on the accuracy of model label generation. In addition, since processes such as the local binary pattern and the coefficient of variation run quickly, the algorithm also has good performance.
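Under the same assumptions as the earlier network sketch, the adversarial training of step S4 might look like the following PyTorch sketch, alternating discriminator and generator updates on the cGAN loss above. The Adam optimizer settings, the omission of the noise input z (as in common pix2pix practice), and the use of the generator loss for the ε stopping test are assumptions, not specified by the patent.

import torch
import torch.nn as nn

def train_texture_label_gan(G, D, loader, epochs=100, eps=1e-3, lr=2e-4):
    """S4: joint texture-continuity label training. Each batch carries the
    denoised texture pattern x and the corresponding real label image y."""
    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=lr)
    opt_d = torch.optim.Adam(D.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            fake = G(x)
            # Discriminator step: maximize log D(x, y) + log(1 - D(x, G(x))).
            d_real = D(torch.cat([x, y], dim=1))
            d_fake = D(torch.cat([x, fake.detach()], dim=1))
            loss_d = bce(d_real, torch.ones_like(d_real)) + \
                     bce(d_fake, torch.zeros_like(d_fake))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()
            # Generator step: make D label the generated image as real.
            d_fake = D(torch.cat([x, fake], dim=1))
            loss_g = bce(d_fake, torch.ones_like(d_fake))
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        if loss_g.item() < eps:  # epsilon stopping criterion for label training
            break
    return G, D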
The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited by the above embodiment. Any other changes, modifications, substitutions, combinations, and simplifications made without departing from the spirit and principles of the present invention shall be equivalent substitutions and are included within the protection scope of the present invention.

Claims (7)

1. An LBP-based image label generation method for an adversarial network model, characterized in that it specifically comprises the following steps:
S1. For the input image, perform recognition-target extraction using a generative adversarial network model and update the model, obtaining an initial discrimination result set;
S2. Extract the texture pattern of the original image based on the LBP algorithm, obtaining the complete texture pattern of the image;
S3. Remove the many isolated small noise points or pseudo-targets in the texture pattern by computing the coefficient of variation, obtaining the denoised image texture pattern;
S4. Input the initial result set and the denoised image texture pattern jointly into the adversarial network discriminator, and perform adversarial network texture-continuity label training based on the image texture pattern, obtaining the labels of the detection targets and effectively eliminating the randomness of the initial discrimination result set.
2. The LBP-based image label generation method for an adversarial network model according to claim 1, characterized in that step S1 is specifically:
S11. Construct a Wasserstein generative adversarial network (WGAN) model, the WGAN model comprising an image generator and an image discriminator, and using the cross-entropy function as the loss function of both;
S12. In the generative adversarial network WGAN model constructed in S11, the generator consists of an encoder and a decoder;
S13. In the generator constructed in S12, the encoder consists of eight convolutional layers;
S14. In the generator constructed in S12, the decoder consists of eight deconvolution layers;
S15. In the generator constructed in S12, the output of the encoder follows two paths: one continues the convolution process, and the other is merged by a Concatenate operation into the input of the corresponding deconvolution layer of the decoder;
S16. In the generative adversarial network WGAN model constructed in S11, the discriminator consists of four convolutional layers;
S17. In the generative adversarial network WGAN model constructed in S11, for a fixed generator G, the optimal discriminator D*_G has the following expression:
D*_G(x) = P_data(x) / (P_data(x) + P_g(x))
where P_data(x) is the real data distribution and P_g(x) is the generated data distribution;
S18. Input the original image data set and train the generative adversarial network WGAN model constructed in step S11, obtaining the initial discrimination result set.
3. The LBP-based image label generation method for an adversarial network model according to claim 1, characterized in that step S2 is specifically:
Assume the pixel gray value at point c of the image target area is g_c, and the gray values of the corresponding eight neighborhood sample points are g_p, p = 0, ..., P−1. Binarize each neighborhood pixel against the center point, obtaining an eight-bit binary string. The image uniform pattern measure U(LBP_{P,R}) is obtained by neighborhood comparison; if U(LBP_{P,R}) < T, the pixel is judged to belong to the detection target and its texture pattern is LBP(g_c); otherwise the pixel belongs to the background area and its texture pattern is P+1, where T is the pattern threshold set manually according to the scene, P is the number of sample points around the center point g_c, and R is the distance between the sample points and the center point. The specific formula is as follows:
LBP_{P,R}(g_c) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p, with s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise,
which represents the texture feature formed by the gray value g_c at the center pixel c and the gray values g_p, p = 0, ..., P−1, of its surrounding pixels.
4. The LBP-based image label generation method for an adversarial network model according to claim 3, characterized in that the step of obtaining the image uniform pattern measure by neighborhood comparison is specifically: for the circular binary string formed by the neighborhood comparison, the number of signal transitions in the string is the uniform pattern measure of the pixel, and U(LBP_{P,R}) is defined as follows:
U(LBP_{P,R}) = |s(g_{P−1} − g_c) − s(g_0 − g_c)| + Σ_{p=1}^{P−1} |s(g_p − g_c) − s(g_{p−1} − g_c)|
where g_0 is the neighborhood pixel directly to the right of g_c, and R and P are respectively the radius of the sample points from the center point g_c and the number of sample points.
5. The LBP-based image label generation method for an adversarial network model according to claim 1, characterized in that, in step S3, the step of removing the many isolated small noise points or pseudo-targets in the texture pattern by computing the coefficient of variation is specifically:
Assume the texture extraction result set is R_lbp. Establish a Poisson-distribution noise model from the components of the current texture extraction result set and compute its histogram Hist[R_lbp]; compute the correlation variance Var[R_lbp] of the resulting histogram; finally, sort the resulting correlation variances and find the maximum value, which is the coefficient of variation λ. When the coefficient of variation λ is greater than 1, the discrimination result set is judged to have mutated;
λ = max(Var[R_lbp]) / C
where R_lbp is the initial area discrimination result set, Var[·] is the correlation variance, and C is a constant.
6. The LBP-based image label generation method for an adversarial network model according to claim 5, characterized in that, in step S3, the formula of the texture extraction result set histogram Hist[R_lbp] is as follows:
Hist[R_lbp](k) = Σ_{i=1}^{H} Σ_{j=1}^{W} δ(R_lbp(i, j), k)
where H and W are respectively the height and width of the image;
the formula for the correlation variance Var[R_lbp] of the resulting histogram is as follows:
Var[R_lbp] = E{[X_lbp − E(X_lbp)]²} = E(X_lbp²) − (E(X_lbp))²
7. The LBP-based image label generation method for an adversarial network model according to claim 1, characterized in that, in step S4, the process of performing adversarial network texture-continuity label training based on the image texture pattern and obtaining the detection target labels is as follows:
Assume the extracted, denoised texture pattern is x, and the real data is distributed as y ~ p_data. Then the input of the adversarial network discriminator is D(x, y) and the input of the adversarial network generator is G(x, z), and the loss function of the adversarial network is as follows:
L_cGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 − D(x, G(x, z)))]
where G is the generator function of the model, D is the discriminator function of the model, and L_cGAN(G, D) describes the loss function of the adversarial network model. If L_cGAN(G, D) < ε, the texture-continuity label training is complete, where ε is a manually set error coefficient.
CN201910140998.8A 2019-02-26 2019-02-26 LBP-based image label generation method for an adversarial network model Pending CN109948776A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910140998.8A CN109948776A (en) 2019-02-26 2019-02-26 LBP-based image label generation method for an adversarial network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910140998.8A CN109948776A (en) 2019-02-26 2019-02-26 LBP-based image label generation method for an adversarial network model

Publications (1)

Publication Number Publication Date
CN109948776A true CN109948776A (en) 2019-06-28

Family

ID=67006976

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910140998.8A Pending CN109948776A (en) 2019-02-26 2019-02-26 LBP-based image label generation method for an adversarial network model

Country Status (1)

Country Link
CN (1) CN109948776A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204586A (en) * 2016-07-08 2016-12-07 华南农业大学 A kind of based on the moving target detecting method under the complex scene followed the tracks of
CN108764005A (en) * 2018-01-31 2018-11-06 华侨大学 A kind of high-spectrum remote sensing atural object space Spectral Characteristic extracting method and system
CN108564611A (en) * 2018-03-09 2018-09-21 天津大学 A kind of monocular image depth estimation method generating confrontation network based on condition
CN108520503A (en) * 2018-04-13 2018-09-11 湘潭大学 A method of based on self-encoding encoder and generating confrontation network restoration face Incomplete image
CN109166126A (en) * 2018-08-13 2019-01-08 苏州比格威医疗科技有限公司 A method of paint crackle is divided on ICGA image based on condition production confrontation network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TIMO OJALA et al.: "Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE *
余思泉 et al.: "Texture synthesis method based on generative adversarial networks" (基于对抗生成网络的纹理合成方法), 《红外与激光工程》 (Infrared and Laser Engineering) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110401488A (en) * 2019-07-12 2019-11-01 北京邮电大学 A kind of demodulation method and device
CN110401488B (en) * 2019-07-12 2021-02-05 北京邮电大学 Demodulation method and device
CN110717960A (en) * 2019-10-22 2020-01-21 北京建筑大学 Method for generating building rubbish remote sensing image sample
CN111798409A (en) * 2020-05-19 2020-10-20 佛山市南海区广工大数控装备协同创新研究院 Deep learning-based PCB defect data generation method
WO2022120532A1 (en) * 2020-12-07 2022-06-16 Huawei Technologies Co., Ltd. Presentation attack detection
CN112801297A (en) * 2021-01-20 2021-05-14 哈尔滨工业大学 Machine learning model adversity sample generation method based on conditional variation self-encoder
CN112801297B (en) * 2021-01-20 2021-11-16 哈尔滨工业大学 Machine learning model adversity sample generation method based on conditional variation self-encoder
CN112818159A (en) * 2021-02-24 2021-05-18 上海交通大学 Image description text generation method based on generation countermeasure network

Similar Documents

Publication Publication Date Title
CN109948776A (en) LBP-based image label generation method for an adversarial network model
Tu et al. Edge-guided non-local fully convolutional network for salient object detection
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
CN106997597B (en) Target tracking method based on supervised saliency detection
CN108346159A (en) Visual target tracking method based on tracking-learning-detection
CN108520530A (en) Target tracking method based on long short-term memory network
CN111242027B (en) Unsupervised learning scene feature rapid extraction method fusing semantic information
CN103996018A (en) Face recognition method based on 4DLBP
CN103413120A (en) Tracking method based on integral and partial recognition of object
CN107564035B (en) Video tracking method based on important area identification and matching
CN109598684A (en) Correlation filtering tracking method combined with a Siamese network
CN108268823A (en) Target re-identification method and device
CN110569855A (en) Long-time target tracking algorithm based on correlation filtering and feature point matching fusion
CN102945553A (en) Remote sensing image segmentation method based on automatic difference clustering algorithm
CN106203373B (en) Face liveness detection method based on deep visual bag-of-words
CN110059730A (en) Thyroid nodule ultrasound image classification method based on capsule network
CN109409227A (en) Finger vein image quality assessment method and device based on multi-channel CNN
CN105373810A (en) Method and system for building action recognition model
CN108830882A (en) Real-time detection method for abnormal behavior in video
CN106127766A (en) Target tracking method based on spatial coupling relation and historical models
CN108846845B (en) SAR image segmentation method based on thumbnail and hierarchical fuzzy clustering
CN109902581A (en) Single-sample partial-block face recognition method based on multi-step weighting
CN113033345A (en) V2V video face recognition method based on public feature subspace
CN104517300A (en) Visual judgment tracking method based on statistical characteristics
Qin et al. Multi-scaling detection of singular points based on fully convolutional networks in fingerprint images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190628