CN109190707A - Domain-adaptive image semantic segmentation method based on adversarial learning - Google Patents

Domain-adaptive image semantic segmentation method based on adversarial learning Download PDF

Info

Publication number
CN109190707A
CN109190707A (application CN201811059300.1A / CN201811059300A)
Authority
CN
China
Prior art keywords
segmentation
adversarial
source
network
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201811059300.1A
Other languages
Chinese (zh)
Inventor
夏春秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd filed Critical Shenzhen Vision Technology Co Ltd
Priority to CN201811059300.1A priority Critical patent/CN109190707A/en
Publication of CN109190707A publication Critical patent/CN109190707A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/19Recognition using electronic means
    • G06V30/192Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
    • G06V30/194References adjustable by an adaptive method, e.g. learning

Abstract

The present invention proposes a domain-adaptive image semantic segmentation method based on adversarial learning. Its main components are domain adaptation, the network structure, and output space adaptation. The procedure is as follows: first, images from the source and target domains are fed to a segmentation network, which predicts a segmentation output for each domain; the source prediction is used to compute the segmentation loss on the source domain; the two segmentation outputs are then passed to a discriminator to generate an adversarial loss, which is propagated back to the segmentation network; finally, by minimizing the segmentation loss while maximizing the adversarial loss, a pixel-level semantic segmentation that meets the requirements is obtained. The invention develops a multi-level adversarial learning method that adapts in the output space and effectively aligns the scene layout and local context between source and target images; it is simple and convenient to operate and copes well with the complexity of high-dimensional features.

Description

Domain-adaptive image semantic segmentation method based on adversarial learning
Technical field
The present invention relates to the field of graphics and image processing, and more particularly to a domain-adaptive image semantic segmentation method based on adversarial learning.
Background art
Image semantic segmentation classifies each pixel of an image, recovering both the content of the image and the positions of the objects in it at the pixel level. Semantic segmentation is currently applied in fields such as salient object detection, geographic information systems, autonomous driving, medical image analysis, and robotics. By training a neural network, a machine can take satellite remote-sensing images as input and automatically identify roads, rivers, crops, buildings, and so on. In intelligent healthcare, semantic segmentation can be applied to tumor image segmentation, dental caries diagnosis, and similar tasks. It is also a core algorithmic technology of autonomous driving: images captured by on-board cameras or lidar are fed into a neural network, and the on-board computer automatically segments them into categories so that obstacles such as pedestrians and vehicles can be avoided. Methods based on convolutional neural networks have made significant progress in semantic segmentation and have been applied to autonomous driving and image editing, but such models do not generalize well to unseen images, especially when there is a domain gap between the training and test images. Another class of effective methods aligns features across the two domains so that the adapted features generalize to both; however, unlike image classification, feature adaptation for semantic segmentation is affected by the complexity of high-dimensional features, which must encode diverse visual cues including appearance, shape, and context. As a result, low-dimensional features cannot adapt well, and this approach falls short for pixel-level prediction tasks.
The invention proposes a domain-adaptive image semantic segmentation method based on adversarial learning. First, images from the source and target domains are fed to a segmentation network, which predicts a segmentation output for each domain. The source prediction is used to compute the segmentation loss on the source domain. The two segmentation outputs are then passed to a discriminator to generate an adversarial loss, which is propagated back to the segmentation network. Finally, by minimizing the segmentation loss while maximizing the adversarial loss, a pixel-level semantic segmentation that meets the requirements is obtained. The invention develops a multi-level adversarial learning method that adapts in the output space and effectively aligns the scene layout and local context between source and target images; it is simple and convenient to operate and also copes well with the complexity of high-dimensional features.
Summary of the invention
To address the problem that feature adaptation for semantic segmentation is easily affected by the complexity of high-dimensional features, so that low-dimensional features cannot adapt well, the purpose of the present invention is to provide a domain-adaptive image semantic segmentation method based on adversarial learning. First, images from the source and target domains are fed to the segmentation network to predict the segmentation outputs of both domains; the source prediction is used to compute the segmentation loss on the source domain; the two segmentation outputs are then passed to the discriminator to generate an adversarial loss, which is propagated back to the segmentation network; finally, the segmentation loss is minimized while the adversarial loss is maximized, yielding a pixel-level semantic segmentation image that meets the requirements.
To solve the above problems, the present invention provides a domain-adaptive image semantic segmentation method based on adversarial learning, whose main components include:
(1) domain adaptation;
(2) network structure;
(3) output space adaptation.
First, the images of the source and target domains are passed to the segmentation network to predict the segmentation outputs of both domains; the source prediction obtained from the source output generates the segmentation loss of the source domain; the two segmentation outputs are then used as the input of the discriminator, producing an adversarial loss that is propagated back to the segmentation network; finally, the segmentation loss is minimized while the adversarial loss is maximized, yielding a pixel-level semantic segmentation image that meets the requirements.
The domain adaptation component mainly involves the source- and target-domain images, denoted I_s and I_t, and an adaptation task with two loss functions, L_adv and L_seg: L_adv denotes the adversarial loss that adapts the predicted segmentation of the target domain to the predicted segmentation of the source domain, and L_seg denotes the segmentation loss computed on the source domain using the ground-truth annotations. Domain adaptation is used to resolve the domain shift between the source and target domains; annotations are contained only in the source-domain images.
The network structure mainly comprises a segmentation network G and discriminator networks D_i. The source- and target-domain images pass through the segmentation network to obtain features that have high similarity in the output space. Driven by the adversarial loss, the segmentation model aims to fool the discriminator, the purpose being to make the source and target images produce similar distributions in the output space.
Further, the segmentation network predicts the output of the source domain and the output of the target domain, i.e. the source prediction P_s and the target prediction P_t. Segmentation features at different levels, including high-level and low-level features, are obtained through the segmentation network, and the features are similar in the output space. A good baseline model is a prerequisite for high-quality segmentation results. The DeepLab-v2 framework with a ResNet-101 pre-trained on ImageNet is used as the baseline segmentation model: the final classification layer is removed, the strides of the last two convolution stages are changed from 2 to 1, dilated convolutions with rates 2 and 4 are used in Conv4 and Conv5 respectively, an atrous spatial pyramid pooling (ASPP) module is added as the final classifier, and finally an up-sampling layer with a softmax output is applied to match the size of the input image.
Further, for the source prediction P_s, the source-domain image I_s with its annotation Y_s is passed to the segmentation network to optimize the segmentation loss L_seg and generate P_s, where P_s = G(I_s) denotes the segmentation prediction from the source domain. L_seg is as follows:

\mathcal{L}_{seg}(I_s) = -\sum_{h \in H}\sum_{w \in W}\sum_{c \in C} Y_s^{(h,w,c)} \log P_s^{(h,w,c)}

where L_seg denotes the segmentation loss based on the source domain, w ∈ W and h ∈ H index the size of the output image, and c ∈ C indexes the categories.
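Assuming the loss above is the standard per-pixel cross-entropy over the softmax output (the symbols Y_s and P_s follow the text; the helper name is ours), a minimal NumPy sketch:

```python
import numpy as np

def segmentation_loss(pred_softmax, label_onehot, eps=1e-12):
    """L_seg: per-pixel cross-entropy over an H x W x C softmax output.

    pred_softmax : (H, W, C) softmax probabilities P_s = G(I_s)
    label_onehot : (H, W, C) one-hot ground-truth annotation Y_s
    """
    return float(-np.sum(label_onehot * np.log(pred_softmax + eps)))

# toy example: a 2 x 2 output with C = 3 classes
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 2, 3))
p = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
y = np.eye(3)[rng.integers(0, 3, size=(2, 2))]
loss = segmentation_loss(p, y)
```

A perfect prediction (p equal to the one-hot labels) drives the loss to essentially zero, which is the behavior the minimization targets.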
Further, for the target prediction P_t, an adversarial loss is computed on the target prediction and propagated back into the segmentation network. The target-domain image I_t is passed to the segmentation network to generate P_t, where P_t = G(I_t) denotes the segmentation prediction from the target domain. To bring the target prediction closer to the source prediction, the adversarial loss L_adv is optimized as follows:

\mathcal{L}_{adv}(I_t) = -\sum_{h \in H}\sum_{w \in W} \log D(P_t)^{(h,w,0)}

where index 0 denotes the source-domain class of the discriminator output.
Here the adversarial loss trains the segmentation network by maximizing the probability that the target prediction is regarded as a source prediction, thereby fooling the discriminator.
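A minimal NumPy sketch of this adversarial term, under the text's convention that discriminator output index 0 is the source class (the function name and shapes are illustrative):

```python
import numpy as np

def adversarial_loss(d_source_prob, eps=1e-12):
    """L_adv, computed on the target prediction P_t.

    d_source_prob : (H, W) discriminator output D(P_t)^(h,w,0), i.e. the
    probability that each pixel of the target prediction came from the
    *source* domain.  Minimizing this loss pushes target predictions to be
    classified as source, fooling the discriminator.
    """
    return float(-np.sum(np.log(d_source_prob + eps)))

# a maximally unsure discriminator assigns 0.5 everywhere on a 2 x 2 map
d = np.full((2, 2), 0.5)
loss = adversarial_loss(d)
```

As the discriminator is fooled more thoroughly (source probability closer to 1 on target pixels), the loss decreases toward zero.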
Further, for the discriminator networks D_i, where i denotes the level of the discriminator in multi-level adversarial learning, all layers are fully convolutional so as to preserve spatial information. The discriminator network consists of 5 convolution layers with stride 2; a leaky ReLU activation follows each of the first 4 convolution layers, and an up-sampling layer is added after the last layer to match the size of the input image. Given the segmentation softmax output P = G(I) ∈ R^{H×W×C}, a cross-entropy loss L_d over the two classes (source and target) is used: P is passed into the fully convolutional discriminator D, and L_d is optimized as follows:

\mathcal{L}_d(P) = -\sum_{h \in H}\sum_{w \in W} \left[ (1-z) \log D(P)^{(h,w,0)} + z \log D(P)^{(h,w,1)} \right]
where z is a constant: z = 1 indicates that the sample image is drawn from the target domain, and z = 0 indicates that it is drawn from the source domain.
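The discriminator described here (5 stride-2 convolutions, leaky ReLU after the first 4, final up-sampling) can be sketched in PyTorch; the channel widths, the 4x4 kernels, and the 0.2 leak slope are assumptions borrowed from common practice, since the patent fixes only the layer count and strides:

```python
import torch
import torch.nn as nn

class FCDiscriminator(nn.Module):
    """Fully convolutional discriminator: 5 stride-2 convolutions with leaky
    ReLU on the first 4, then up-sampling back to the input size."""
    def __init__(self, num_classes, ndf=64):
        super().__init__()
        layers, in_ch = [], num_classes
        for out_ch in (ndf, ndf * 2, ndf * 4, ndf * 8):
            layers += [nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            in_ch = out_ch
        # 5th convolution: a single-channel source/target domain logit
        layers.append(nn.Conv2d(in_ch, 1, 4, stride=2, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, seg_softmax):
        h, w = seg_softmax.shape[-2:]
        out = self.net(seg_softmax)
        # up-sampling layer to match the size of the input prediction
        return nn.functional.interpolate(out, size=(h, w), mode='bilinear',
                                         align_corners=False)

disc = FCDiscriminator(num_classes=19)
x = torch.randn(1, 19, 64, 64).softmax(dim=1)   # a segmentation softmax map P
out = disc(x)
```

Keeping the network fully convolutional preserves spatial information, so L_d can be summed per pixel exactly as in the formula above.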
The output space adaptation component (3) exploits the rich information contained in the segmentation output: through adversarial learning, the similarity of the low-dimensional softmax output is used to adapt the segmentation predictions, minimizing L_seg while maximizing L_adv. The adversarial learning includes single-level adversarial learning and multi-level adversarial learning.
Further, in multi-level adversarial learning, the multi-level adversarial network realizes domain adaptation of the output space at different feature levels. A segmentation output is predicted in each feature space, and adversarial learning is then carried out through a separate discriminator. In the multi-level adaptation model, low-level features are far from the output and cannot directly adapt the prediction when adversarial learning is performed only in the output space; therefore an additional adversarial module is attached to the low-level feature space: a feature map is extracted at Conv4, an ASPP module is added as an auxiliary classifier, and a discriminator with the same structure is added for adversarial learning. The domain adaptation objective L(I_s, I_t), based on L_seg and L_adv, is therefore as follows:

\mathcal{L}(I_s, I_t) = \sum_i \left[ \mathcal{L}^i_{seg}(I_s) + \lambda^i_{adv}\, \mathcal{L}^i_{adv}(I_t) \right]

where i denotes the level at which the segmentation output is predicted and λ_adv denotes a weight that balances the segmentation loss against the adversarial loss; λ_adv must be balanced when optimizing the segmentation model.
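A trivial sketch of combining the per-level terms with the λ_adv weights; the two-level split and the weight values below are illustrative assumptions, not values from the patent:

```python
def adaptation_objective(seg_losses, adv_losses, lambda_adv):
    """L(I_s, I_t) = sum_i [ L_seg^i(I_s) + lambda_adv^i * L_adv^i(I_t) ]
    over the adaptation levels i (two in the embodiment: the output space
    and the auxiliary Conv4/ASPP branch)."""
    return sum(l_seg + lam * l_adv
               for l_seg, l_adv, lam in zip(seg_losses, adv_losses, lambda_adv))

# two levels; small lambda_adv keeps the adversarial term from dominating
total = adaptation_objective(seg_losses=[1.20, 0.80],
                             adv_losses=[0.60, 0.40],
                             lambda_adv=[0.001, 0.0002])
```

Choosing λ_adv too large lets the adversarial signal overwhelm the supervised segmentation loss, which is why the text stresses that it must be balanced.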
Description of the drawings
Fig. 1 is a system architecture diagram of the domain-adaptive image semantic segmentation method based on adversarial learning of the present invention.
Fig. 2 is a flow diagram of the domain-adaptive image semantic segmentation method based on adversarial learning of the present invention.
Fig. 3 is a domain-gap comparison diagram of the domain-adaptive image semantic segmentation method based on adversarial learning of the present invention.
Fig. 4 shows output images of the domain-adaptive image semantic segmentation method based on adversarial learning of the present invention.
Specific embodiments
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The invention is further described in detail below with reference to the drawings and specific embodiments.
Fig. 1 is a system architecture diagram of the domain-adaptive image semantic segmentation method based on adversarial learning of the present invention. It mainly includes domain adaptation, the network structure, and output space adaptation.
In the domain-adaptive image semantic segmentation method, the source- and target-domain images are first passed to the segmentation network to predict the segmentation outputs of the source and target domains; the source prediction obtained from the source output generates the segmentation loss of the source domain; the outputs are then used as input to the discriminator network to generate an adversarial loss, which is passed back to the segmentation network; finally, by minimizing the segmentation loss and maximizing the adversarial loss, a pixel-level semantic segmentation image that meets the requirements is generated.
Fig. 3 is a domain-gap comparison diagram of the domain-adaptive image semantic segmentation method based on adversarial learning of the present invention. This figure illustrates the motivation for learning adaptation in the output space: although the images differ in appearance, their segmentation outputs are structured and share similarities, such as spatial layout and local context.
Domain adaptation mainly involves the source- and target-domain images, denoted I_s and I_t, and an adaptation task with two loss functions, L_adv and L_seg: L_adv denotes the adversarial loss that adapts the predicted segmentation of the target domain to the predicted segmentation of the source domain, and L_seg denotes the segmentation loss computed on the source domain using the ground-truth annotations. Domain adaptation is used to resolve the domain shift between the source and target domains; annotations are contained only in the source-domain images.
The network structure mainly comprises a segmentation network G and discriminator networks D_i. The source- and target-domain images pass through the segmentation network to obtain features that have high similarity in the output space. Driven by the adversarial loss, the segmentation model aims to fool the discriminator, the purpose being to make the source and target images produce similar distributions in the output space.
In output space adaptation, the segmentation output contains rich information; through adversarial learning, the similarity of the low-dimensional softmax output is used to adapt the segmentation predictions, minimizing L_seg while maximizing L_adv. The adversarial learning includes single-level adversarial learning and multi-level adversarial learning.
Fig. 2 is a flow diagram of the domain-adaptive image semantic segmentation method based on adversarial learning of the present invention. The figure shows that, given images of size W × H from the source and target domains, they are passed through the segmentation network to obtain output predictions; for the source prediction over C classes, a segmentation loss is computed against the source-domain ground truth; to make the target prediction close to the source prediction, a discriminator is used to distinguish whether the input comes from the source domain or the target domain; an adversarial loss is then computed on the target prediction and passed back to the segmentation network. This process is called an adaptation module, and the multi-level adversarial learning proposed in this application is illustrated by using two adaptation modules at two different levels.
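The adaptation-module iteration described above can be sketched as one training step, assuming standard alternating GAN-style updates; the toy 1×1-convolution networks and the λ_adv value below are stand-ins for illustration, not the patent's models:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adaptation_step(seg_net, disc, opt_seg, opt_disc,
                    img_s, label_s, img_t, lambda_adv=0.001):
    """One adaptation-module iteration at a single level."""
    SOURCE, TARGET = 0.0, 1.0        # z = 0 source, z = 1 target, as in the text

    # --- update the segmentation network G ---
    opt_seg.zero_grad()
    logits_s = seg_net(img_s)
    loss_seg = F.cross_entropy(logits_s, label_s)      # L_seg on the source
    p_t = seg_net(img_t).softmax(dim=1)
    d_t = disc(p_t)
    # L_adv: make target predictions look source-like to fool the discriminator
    loss_adv = F.binary_cross_entropy_with_logits(
        d_t, torch.full_like(d_t, SOURCE))
    (loss_seg + lambda_adv * loss_adv).backward()
    opt_seg.step()

    # --- update the discriminator D (segmentation outputs detached) ---
    opt_disc.zero_grad()
    d_s = disc(logits_s.softmax(dim=1).detach())
    d_t = disc(p_t.detach())
    loss_d = 0.5 * (
        F.binary_cross_entropy_with_logits(d_s, torch.full_like(d_s, SOURCE)) +
        F.binary_cross_entropy_with_logits(d_t, torch.full_like(d_t, TARGET)))
    loss_d.backward()
    opt_disc.step()
    return loss_seg.item(), loss_adv.item(), loss_d.item()

# toy 1x1-conv stand-ins for G and D, just to exercise the step
C = 4
seg_net, disc = nn.Conv2d(3, C, 1), nn.Conv2d(C, 1, 1)
opt_seg = torch.optim.SGD(seg_net.parameters(), lr=0.01)
opt_disc = torch.optim.SGD(disc.parameters(), lr=0.01)
img_s, img_t = torch.randn(2, 3, 8, 8), torch.randn(2, 3, 8, 8)
label_s = torch.randint(0, C, (2, 8, 8))
losses = adaptation_step(seg_net, disc, opt_seg, opt_disc, img_s, label_s, img_t)
```

The two-level variant of the patent simply runs a second copy of this module on the Conv4/ASPP auxiliary output with its own discriminator and λ_adv.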
The segmentation network predicts the output of the source domain and the output of the target domain, i.e. the source prediction P_s and the target prediction P_t. Segmentation features at different levels, including high-level and low-level features, are obtained through the segmentation network, and the features are similar in the output space. A good baseline model is the prerequisite for high-quality segmentation results: the DeepLab-v2 framework with a ResNet-101 pre-trained on ImageNet is used as the baseline segmentation model, the final classification layer is removed, the strides of the last two convolution stages are changed from 2 to 1, dilated convolutions with rates 2 and 4 are used in Conv4 and Conv5 respectively, an atrous spatial pyramid pooling (ASPP) module is added as the final classifier, and finally an up-sampling layer with a softmax output is applied to match the size of the input image.
For the source prediction P_s, the source-domain image I_s with its annotation Y_s is passed to the segmentation network to optimize the segmentation loss L_seg and generate P_s, where P_s = G(I_s) denotes the segmentation prediction from the source domain; L_seg is as follows:

\mathcal{L}_{seg}(I_s) = -\sum_{h \in H}\sum_{w \in W}\sum_{c \in C} Y_s^{(h,w,c)} \log P_s^{(h,w,c)}

where L_seg denotes the segmentation loss based on the source domain, w ∈ W and h ∈ H index the size of the output image, and c ∈ C indexes the categories.
For the target prediction P_t, an adversarial loss is computed on the target prediction and propagated back into the segmentation network. The target-domain image I_t is passed to the segmentation network to generate P_t, where P_t = G(I_t) denotes the segmentation prediction from the target domain; to bring the target prediction closer to the source prediction, the adversarial loss L_adv is optimized as follows:

\mathcal{L}_{adv}(I_t) = -\sum_{h \in H}\sum_{w \in W} \log D(P_t)^{(h,w,0)}

where index 0 denotes the source-domain class of the discriminator output. The adversarial loss trains the segmentation network by maximizing the probability that the target prediction is regarded as a source prediction, thereby fooling the discriminator.
For the discriminator networks D_i, i denotes the level of the discriminator in multi-level adversarial learning; all layers are fully convolutional so as to preserve spatial information. The discriminator network consists of 5 convolution layers with stride 2; a leaky ReLU activation follows each of the first 4 convolution layers, and an up-sampling layer is added after the last layer to match the size of the input image. Given the segmentation softmax output P = G(I) ∈ R^{H×W×C}, a cross-entropy loss L_d over the two classes (source and target) is used: P is passed into the fully convolutional discriminator D, and L_d is optimized as follows:

\mathcal{L}_d(P) = -\sum_{h \in H}\sum_{w \in W} \left[ (1-z) \log D(P)^{(h,w,0)} + z \log D(P)^{(h,w,1)} \right]

where z is a constant: z = 1 indicates that the sample image is drawn from the target domain, and z = 0 indicates that it is drawn from the source domain.
In multi-level adversarial learning, the multi-level adversarial network realizes domain adaptation of the output space at different feature levels. A segmentation output is predicted in each feature space, and adversarial learning is then carried out through a separate discriminator. In the multi-level adaptation model, low-level features are far from the output and cannot directly adapt the prediction when adversarial learning is performed only in the output space; therefore an additional adversarial module is attached to the low-level feature space: a feature map is extracted at Conv4, an ASPP module is added as an auxiliary classifier, and a discriminator with the same structure is added for adversarial learning. The domain adaptation objective L(I_s, I_t), based on L_seg and L_adv, is therefore as follows:

\mathcal{L}(I_s, I_t) = \sum_i \left[ \mathcal{L}^i_{seg}(I_s) + \lambda^i_{adv}\, \mathcal{L}^i_{adv}(I_t) \right]

where i denotes the level at which the segmentation output is predicted and λ_adv denotes a weight that balances the segmentation loss against the adversarial loss; λ_adv must be balanced when optimizing the segmentation model.
Fig. 4 shows output images of the domain-adaptive image semantic segmentation method based on adversarial learning of the present invention. The figure mainly shows image semantic segmentation results obtained under different conditions, including the target image, the ground truth, the result before adaptation, the result with feature adaptation, and the result of the output space adaptation of this application.
For those skilled in the art, the present invention is not limited to the details of the above embodiments, and the present invention can be realized in other specific forms without departing from its spirit and scope. Moreover, those skilled in the art can make various modifications and variations to the present invention without departing from its spirit and scope, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications that fall within the scope of the invention.

Claims (10)

1. A domain-adaptive image semantic segmentation method based on adversarial learning, characterized in that it mainly comprises: domain adaptation (1); network structure (2); output space adaptation (3).
2. The domain-adaptive image semantic segmentation method according to claim 1, characterized in that the source- and target-domain images are first passed to the segmentation network to predict the segmentation outputs of the source and target domains; the source prediction obtained from the source output generates the segmentation loss of the source domain; the outputs are then used as input to the discriminator network to generate an adversarial loss, which is passed back to the segmentation network; finally, by minimizing the segmentation loss and maximizing the adversarial loss, a pixel-level semantic segmentation image that meets the requirements is generated.
3. The domain adaptation (1) according to claim 1, characterized in that it mainly involves the source- and target-domain images, denoted I_s and I_t, and an adaptation task with two loss functions, L_adv and L_seg, where L_adv denotes the adversarial loss that adapts the predicted segmentation of the target domain to the predicted segmentation of the source domain, and L_seg denotes the segmentation loss computed on the source domain using the ground-truth annotations; domain adaptation is used to resolve the domain shift between the source and target domains, and annotations are contained only in the source-domain images.
4. The network structure (2) according to claim 1, characterized in that it mainly comprises a segmentation network G and discriminator networks D_i; the source- and target-domain images pass through the segmentation network to obtain features that have high similarity in the output space; driven by the adversarial loss, the segmentation model aims to fool the discriminator, the purpose being to make the source and target images produce similar distributions in the output space.
5. The segmentation network according to claim 4, characterized in that it predicts the output of the source domain and the output of the target domain, i.e. the source prediction P_s and the target prediction P_t; segmentation features at different levels, including high-level and low-level features, are obtained through the segmentation network, and the features are similar in the output space; a good baseline model is the prerequisite for high-quality segmentation results, so the DeepLab-v2 framework with a ResNet-101 pre-trained on ImageNet is used as the baseline segmentation model: the final classification layer is removed, the strides of the last two convolution stages are changed from 2 to 1, dilated convolutions with rates 2 and 4 are used in Conv4 and Conv5 respectively, an atrous spatial pyramid pooling (ASPP) module is added as the final classifier, and finally an up-sampling layer with a softmax output is applied to match the size of the input image.
6. The source prediction P_s according to claim 5, characterized in that the source-domain image I_s with its annotation Y_s is passed to the segmentation network to optimize the segmentation loss L_seg and generate P_s, where P_s = G(I_s) denotes the segmentation prediction from the source domain, and L_seg is as follows:

\mathcal{L}_{seg}(I_s) = -\sum_{h \in H}\sum_{w \in W}\sum_{c \in C} Y_s^{(h,w,c)} \log P_s^{(h,w,c)}

where L_seg denotes the segmentation loss based on the source domain, w ∈ W and h ∈ H index the size of the output image, and c ∈ C indexes the categories.
7. The target prediction P_t according to claim 5, characterized in that an adversarial loss is computed on the target prediction and propagated into the segmentation network; the target-domain image I_t is passed to the segmentation network to generate P_t, where P_t = G(I_t) denotes the segmentation prediction from the target domain; to bring the target prediction closer to the source prediction, the adversarial loss L_adv is optimized as follows:

\mathcal{L}_{adv}(I_t) = -\sum_{h \in H}\sum_{w \in W} \log D(P_t)^{(h,w,0)}

where index 0 denotes the source-domain class of the discriminator output; the adversarial loss trains the segmentation network by maximizing the probability that the target prediction is regarded as a source prediction, thereby fooling the discriminator.
8. The discriminator networks D_i according to claim 4, characterized in that i denotes the level of the discriminator in multi-level adversarial learning; all layers are fully convolutional so as to preserve spatial information; the discriminator network consists of 5 convolution layers with stride 2, a leaky ReLU activation follows each of the first 4 convolution layers, and an up-sampling layer is added after the last layer to match the size of the input image; given the segmentation softmax output P = G(I) ∈ R^{H×W×C}, a cross-entropy loss L_d over the two classes (source and target) is used: P is passed into the fully convolutional discriminator D, and L_d is optimized as follows:

\mathcal{L}_d(P) = -\sum_{h \in H}\sum_{w \in W} \left[ (1-z) \log D(P)^{(h,w,0)} + z \log D(P)^{(h,w,1)} \right]

where z is a constant: z = 1 indicates that the sample image is drawn from the target domain, and z = 0 indicates that it is drawn from the source domain.
9. The output space adaptation (3) according to claim 1, characterized in that the segmentation output contains rich information; through adversarial learning, the similarity of the low-dimensional softmax output is used to adapt the segmentation predictions, minimizing L_seg while maximizing L_adv; the adversarial learning includes single-level adversarial learning and multi-level adversarial learning.
10. The multi-level adversarial learning according to claim 9, characterized in that the multi-level adversarial network realizes domain adaptation of the output space at different feature levels; a segmentation output is predicted in each feature space, and adversarial learning is then carried out through a separate discriminator; in the multi-level adaptation model, low-level features are far from the output and cannot directly adapt the prediction when adversarial learning is performed only in the output space, so an additional adversarial module is attached to the low-level feature space: a feature map is extracted at Conv4, an ASPP module is added as an auxiliary classifier, and a discriminator with the same structure is added for adversarial learning; the domain adaptation objective L(I_s, I_t), based on L_seg and L_adv, is therefore as follows:

\mathcal{L}(I_s, I_t) = \sum_i \left[ \mathcal{L}^i_{seg}(I_s) + \lambda^i_{adv}\, \mathcal{L}^i_{adv}(I_t) \right]

where i denotes the level at which the segmentation output is predicted and λ_adv denotes a weight that balances the segmentation loss against the adversarial loss; λ_adv must be balanced when optimizing the segmentation model.
CN201811059300.1A 2018-09-12 2018-09-12 Domain-adaptive image semantic segmentation method based on adversarial learning Withdrawn CN109190707A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811059300.1A CN109190707A (en) 2018-09-12 2018-09-12 Domain-adaptive image semantic segmentation method based on adversarial learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811059300.1A CN109190707A (en) 2018-09-12 2018-09-12 Domain-adaptive image semantic segmentation method based on adversarial learning

Publications (1)

Publication Number Publication Date
CN109190707A true CN109190707A (en) 2019-01-11

Family

ID=64910146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811059300.1A Withdrawn CN109190707A (en) 2018-09-12 2018-09-12 Domain-adaptive image semantic segmentation method based on adversarial learning

Country Status (1)

Country Link
CN (1) CN109190707A (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902809A (en) * 2019-03-01 2019-06-18 成都康乔电子有限责任公司 It is a kind of to utilize generation confrontation network assistance semantic segmentation model
CN110111335A (en) * 2019-05-08 2019-08-09 南昌航空大学 A kind of the urban transportation Scene Semantics dividing method and system of adaptive confrontation study
CN110148142A (en) * 2019-05-27 2019-08-20 腾讯科技(深圳)有限公司 Training method, device, equipment and the storage medium of Image Segmentation Model
CN110246145A (en) * 2019-06-21 2019-09-17 福州大学 A kind of dividing method of abdominal CT images
CN110322446A (en) * 2019-07-01 2019-10-11 华中科技大学 A kind of domain adaptive semantic dividing method based on similarity space alignment
CN110414631A (en) * 2019-01-29 2019-11-05 腾讯科技(深圳)有限公司 Lesion detection method, the method and device of model training based on medical image
CN110414387A (en) * 2019-07-12 2019-11-05 武汉理工大学 A kind of lane line multi-task learning detection method based on lane segmentation
CN110570433A (en) * 2019-08-30 2019-12-13 北京影谱科技股份有限公司 Image semantic segmentation model construction method and device based on generation countermeasure network
CN110738663A (en) * 2019-09-06 2020-01-31 上海衡道医学病理诊断中心有限公司 Double-domain adaptive module pyramid network and unsupervised domain adaptive image segmentation method
CN110738107A (en) * 2019-09-06 2020-01-31 上海衡道医学病理诊断中心有限公司 microscopic image recognition and segmentation method with model migration function
CN111046760A (en) * 2019-11-29 2020-04-21 山东浪潮人工智能研究院有限公司 Handwriting identification method based on domain confrontation network
CN111242134A (en) * 2020-01-15 2020-06-05 武汉科技大学 Remote sensing image ground object segmentation method based on feature adaptive learning
CN111523680A (en) * 2019-12-23 2020-08-11 中山大学 Domain adaptation method based on Fredholm learning and antagonistic learning
CN111582449A (en) * 2020-05-07 2020-08-25 广州视源电子科技股份有限公司 Training method, device, equipment and storage medium for target domain detection network
CN111832570A (en) * 2020-07-02 2020-10-27 北京工业大学 Image semantic segmentation model training method and system
CN111951220A (en) * 2020-07-10 2020-11-17 北京工业大学 Unsupervised cerebral hemorrhage segmentation method based on multi-layer field self-adaptive technology
CN113221902A (en) * 2021-05-11 2021-08-06 中国科学院自动化研究所 Cross-domain self-adaptive semantic segmentation method and system based on data distribution expansion
CN113627443A (en) * 2021-10-11 2021-11-09 南京码极客科技有限公司 Domain self-adaptive semantic segmentation method for enhancing feature space counterstudy
CN114882220A (en) * 2022-05-20 2022-08-09 山东力聚机器人科技股份有限公司 Domain-adaptive priori knowledge-based GAN (generic object model) image generation method and system
CN115100491A (en) * 2022-08-25 2022-09-23 山东省凯麟环保设备股份有限公司 Abnormal robust segmentation method and system for complex automatic driving scene
CN115222940A (en) * 2022-07-07 2022-10-21 北京邮电大学 Semantic segmentation method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945204A * 2017-10-27 2018-04-20 西安电子科技大学 A pixel-level portrait matting method based on a generative adversarial network
CN108230426A * 2018-02-07 2018-06-29 深圳市唯特视科技有限公司 An image generation method based on eye-gaze data and image datasets

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945204A * 2017-10-27 2018-04-20 西安电子科技大学 A pixel-level portrait matting method based on a generative adversarial network
CN108230426A * 2018-02-07 2018-06-29 深圳市唯特视科技有限公司 An image generation method based on eye-gaze data and image datasets

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414631B * 2019-01-29 2022-02-01 腾讯科技(深圳)有限公司 Medical-image-based lesion detection method, model training method, and apparatus
CN110414631A * 2019-01-29 2019-11-05 腾讯科技(深圳)有限公司 Medical-image-based lesion detection method, model training method, and apparatus
CN109902809A * 2019-03-01 2019-06-18 成都康乔电子有限责任公司 A semantic segmentation model assisted by a generative adversarial network
CN110111335A * 2019-05-08 2019-08-09 南昌航空大学 An urban traffic scene semantic segmentation method and system based on adaptive adversarial learning
CN110148142A * 2019-05-27 2019-08-20 腾讯科技(深圳)有限公司 Training method, apparatus, device, and storage medium for an image segmentation model
US11961233B2 2019-05-27 2024-04-16 Tencent Technology (Shenzhen) Company Limited Method and apparatus for training image segmentation model, computer device, and storage medium
CN110246145A * 2019-06-21 2019-09-17 福州大学 A segmentation method for abdominal CT images
CN110246145B * 2019-06-21 2023-02-21 福州大学 Segmentation method for abdominal CT images
CN110322446A * 2019-07-01 2019-10-11 华中科技大学 A domain-adaptive semantic segmentation method based on similarity space alignment
CN110322446B * 2019-07-01 2021-02-19 华中科技大学 Domain-adaptive semantic segmentation method based on similarity space alignment
CN110414387A * 2019-07-12 2019-11-05 武汉理工大学 A multi-task learning lane line detection method based on lane segmentation
CN110414387B * 2019-07-12 2021-10-15 武汉理工大学 Multi-task learning lane line detection method based on road segmentation
CN110570433A * 2019-08-30 2019-12-13 北京影谱科技股份有限公司 Image semantic segmentation model construction method and apparatus based on a generative adversarial network
CN110570433B * 2019-08-30 2022-04-22 北京影谱科技股份有限公司 Image semantic segmentation model construction method and apparatus based on a generative adversarial network
CN110738663A * 2019-09-06 2020-01-31 上海衡道医学病理诊断中心有限公司 Dual-domain adaptive module pyramid network and unsupervised domain-adaptive image segmentation method
CN110738107A * 2019-09-06 2020-01-31 上海衡道医学病理诊断中心有限公司 Microscopic image recognition and segmentation method with model migration capability
CN111046760B * 2019-11-29 2023-08-08 山东浪潮科学研究院有限公司 Handwriting identification method based on a domain adversarial network
CN111046760A * 2019-11-29 2020-04-21 山东浪潮人工智能研究院有限公司 Handwriting identification method based on a domain adversarial network
CN111523680A * 2019-12-23 2020-08-11 中山大学 Domain adaptation method based on Fredholm learning and adversarial learning
CN111523680B * 2019-12-23 2023-05-12 中山大学 Domain adaptation method based on Fredholm learning and adversarial learning
CN111242134A * 2020-01-15 2020-06-05 武汉科技大学 Remote sensing image ground object segmentation method based on feature-adaptive learning
CN111582449A * 2020-05-07 2020-08-25 广州视源电子科技股份有限公司 Training method, apparatus, device, and storage medium for a target-domain detection network
CN111582449B * 2020-05-07 2023-08-04 广州视源电子科技股份有限公司 Training method, apparatus, device, and storage medium for a target-domain detection network
CN111832570A * 2020-07-02 2020-10-27 北京工业大学 Image semantic segmentation model training method and system
CN111951220A * 2020-07-10 2020-11-17 北京工业大学 Unsupervised cerebral hemorrhage segmentation method based on multi-layer domain adaptation
CN113221902B * 2021-05-11 2021-10-15 中国科学院自动化研究所 Cross-domain adaptive semantic segmentation method and system based on data distribution expansion
CN113221902A * 2021-05-11 2021-08-06 中国科学院自动化研究所 Cross-domain adaptive semantic segmentation method and system based on data distribution expansion
CN113627443B * 2021-10-11 2022-02-15 南京码极客科技有限公司 Domain-adaptive semantic segmentation method with enhanced feature-space adversarial learning
CN113627443A * 2021-10-11 2021-11-09 南京码极客科技有限公司 Domain-adaptive semantic segmentation method with enhanced feature-space adversarial learning
CN114882220A * 2022-05-20 2022-08-09 山东力聚机器人科技股份有限公司 GAN (generative adversarial network) image generation method and system based on domain-adaptive prior knowledge
CN115222940A * 2022-07-07 2022-10-21 北京邮电大学 Semantic segmentation method and system
CN115100491A * 2022-08-25 2022-09-23 山东省凯麟环保设备股份有限公司 Anomaly-robust segmentation method and system for complex autonomous driving scenes
CN115100491B * 2022-08-25 2022-11-18 山东省凯麟环保设备股份有限公司 Anomaly-robust segmentation method and system for complex autonomous driving scenes
US11954917B2 2022-08-25 2024-04-09 Shandong Kailin Environmental Protection Equipment Co., Ltd. Anomaly-robust segmentation method for complex autonomous driving scenes and system thereof

Similar Documents

Publication Publication Date Title
CN109190707A (en) A domain-adaptive image semantic segmentation method based on adversarial learning
Huang et al. Autonomous driving with deep learning: A survey of state-of-art technologies
Cui et al. Semantic segmentation of remote sensing images using transfer learning and deep convolutional neural network with dense connection
Li et al. Deep neural network for structural prediction and lane detection in traffic scene
Ni et al. An improved deep network-based scene classification method for self-driving cars
Torralba et al. Using the forest to see the trees: exploiting context for visual object detection and localization
CN112446398A (en) Image classification method and device
Teng et al. Underwater target recognition methods based on the framework of deep learning: A survey
Suresha et al. A study on deep learning spatiotemporal models and feature extraction techniques for video understanding
Dewangan et al. Towards the design of vision-based intelligent vehicle system: methodologies and challenges
Le et al. Bayesian Gabor network with uncertainty estimation for pedestrian lane detection in assistive navigation
Ghadi et al. A graph-based approach to recognizing complex human object interactions in sequential data
CN114973199A (en) Rail transit train obstacle detection method based on convolutional neural network
WO2021073311A1 (en) Image recognition method and apparatus, computer-readable storage medium and chip
CN117157679A (en) Perception network, training method of perception network, object recognition method and device
Yu Deep learning methods for human action recognition
Wu et al. Self-learning and explainable deep learning network toward the security of artificial intelligence of things
Jin et al. Improving the performance of deep learning model-based classification by the analysis of local probability
CN115546668A (en) Marine organism detection method and device and unmanned aerial vehicle
Veluchamy et al. RBorderNet: Rider Border Collie Optimization-based Deep Convolutional Neural Network for road scene segmentation and road intersection classification
Liu et al. Li Zhang
Choudhury et al. Detection of One-horned Rhino from Green Environment Background using Deep Learning
Mehtab Deep neural networks for road scene perception in autonomous vehicles using LiDARs and vision sensors
Wang et al. Cropland encroachment detection via dual attention and multi-loss based building extraction in remote sensing images
Liu et al. Kernelised correlation filters target tracking fused multi‐feature based on the unmanned aerial vehicle platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20190111