CN110111295A - Image co-saliency detection method and device - Google Patents

Image co-saliency detection method and device

Info

Publication number
CN110111295A
Authority
CN
China
Prior art keywords
feature map set
feature
network layer
collaboration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810103215.4A
Other languages
Chinese (zh)
Other versions
CN110111295B (en)
Inventor
王睿
王蕴红
赵文婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Keaosen Data Technology Co Ltd
Original Assignee
Beijing Keaosen Data Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Keaosen Data Technology Co Ltd filed Critical Beijing Keaosen Data Technology Co Ltd
Priority to CN201810103215.4A
Publication of CN110111295A
Application granted
Publication of CN110111295B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]

Abstract

An embodiment of the present invention discloses an image co-saliency detection method and device. The method comprises: performing feature extraction on two images using a first network layer group to obtain a first feature map set and a second feature map set; learning the collaboration between the first feature map set and the second feature map set using a second network layer group to obtain a collaborative feature map set; and superimposing the first feature map set and the second feature map set with the collaborative feature map set respectively, and deconvolving the two feature map sets obtained after superposition to obtain the co-saliency maps corresponding to the two images. The entire detection process in the embodiment of the present invention is completed within a network model, so the network parameters used at each step of the detection process, whether for extracting image features or for determining the collaboration relationship between images, can be continuously optimized; detecting the co-saliency between images with the continuously optimized network model guarantees the accuracy of the detection results.

Description

Image co-saliency detection method and device
Technical field
Embodiments of the present invention relate to the technical field of image processing, and in particular to an image co-saliency detection method and device.
Background art
Co-saliency detection is an emerging and fast-developing research field of computer vision in recent years. Its purpose is to detect, from multiple images, the salient regions that appear simultaneously in the images, are correlated (collaborative) with one another, and stand out within each single image. Co-saliency detection techniques are of great significance to multi-task computer vision work such as co-segmentation, co-recognition and co-localization. The process of co-saliency detection first performs image feature extraction, then determines the collaboration relationship between images using a clustering method or a fixed formula, and finally generates the co-saliency detection result according to the collaboration relationship between the images.
The image feature extraction process extracts representative features from an image to describe its salient regions. At present, a large number of co-saliency detection methods describe the salient-region information of an image with low-level features (color, shape, texture), for example using color histograms, Gabor filters or SIFT (Scale-Invariant Feature Transform) feature descriptors. However, low-level features describe the feature information of an image with poor accuracy, which results in poor co-saliency detection accuracy.
Moreover, the clustering methods or fixed formulas used to determine the collaboration relationship between images cannot be trained or optimized, so the obtained collaboration relationship has poor accuracy; a wrong collaboration relationship causes the detected salient regions to lack collaboration, which in turn makes co-saliency detection inaccurate.
Summary of the invention
Embodiments of the present invention provide an image co-saliency detection method to solve the problem that existing co-saliency detection methods have poor accuracy.
In a first aspect, an embodiment of the present invention provides an image co-saliency detection method, the method comprising:
performing feature extraction on two images using a first network layer group to obtain a first feature map set and a second feature map set;
learning the collaboration between the first feature map set and the second feature map set using a second network layer group to obtain a collaborative feature map set;
superimposing the first feature map set and the second feature map set with the collaborative feature map set respectively, and deconvolving the two feature map sets obtained after superposition to obtain the co-saliency maps corresponding to the two images.
Further, the step of performing feature extraction on two images using the first network layer group to obtain the first feature map set and the second feature map set comprises:
passing the two images respectively through the convolutional layers included in a VGG16 network for convolution to obtain a third feature map set and a fourth feature map set;
convolving the third feature map set and the fourth feature map set again, respectively, to obtain the first feature map set and the second feature map set.
Further, the step of learning the collaboration between the first feature map set and the second feature map set using the second network layer group to obtain the collaborative feature map set comprises:
superimposing the first feature map set and the second feature map set to obtain a fifth feature map set;
convolving the fifth feature map set to obtain the collaborative feature map set.
Further, after the step of obtaining the co-saliency maps corresponding to the two images, the method further comprises:
comparing the two co-saliency maps respectively with the corresponding ground-truth co-saliency maps to obtain a first error and a second error;
using the first error and the second error to optimize the parameters of the first network layer group, the second network layer group and the network layers used for the deconvolution.
Further, the method further comprises:
reducing the dimensionality of the collaborative feature map set to obtain a 1 × 2 feature vector;
performing binary classification on the feature vector to obtain a classification result indicating whether the two images exhibit collaboration.
Further, after the step of performing binary classification on the feature vector to obtain the classification result indicating whether the two images exhibit collaboration, the method further comprises:
comparing the classification result with the ground-truth classification result to obtain a third error;
using the third error to optimize the parameters of the first network layer group, the second network layer group and the network layers used for the dimensionality reduction.
In a second aspect, an embodiment of the present invention further provides an image co-saliency detection device, comprising:
a feature extraction unit, configured to perform feature extraction on two images using a first network layer group to obtain a first feature map set and a second feature map set;
a collaborative learning unit, configured to learn the collaboration between the first feature map set and the second feature map set using a second network layer group to obtain a collaborative feature map set;
a superposition unit, configured to superimpose the first feature map set and the second feature map set with the collaborative feature map set respectively;
a deconvolution unit, configured to deconvolve the two feature map sets obtained after superposition to obtain the co-saliency maps corresponding to the two images.
Further, the feature extraction unit is specifically configured to:
pass the two images respectively through the convolutional layers included in a VGG16 network for convolution to obtain a third feature map set and a fourth feature map set;
convolve the third feature map set and the fourth feature map set again, respectively, to obtain the first feature map set and the second feature map set.
Further, the collaborative learning unit is specifically configured to:
superimpose the first feature map set and the second feature map set to obtain a fifth feature map set;
convolve the fifth feature map set to obtain the collaborative feature map set.
Further, the device further includes an optimization unit configured to:
compare the two co-saliency maps respectively with the corresponding ground-truth co-saliency maps to obtain a first error and a second error;
use the first error and the second error to optimize the parameters of the first network layer group, the second network layer group and the network layers used for the deconvolution.
The features extracted in the embodiments of the present invention are deep features that can accurately describe each piece of salient-feature information in an image, thereby guaranteeing that the co-salient regions between images are detected accurately. Since the entire detection process is completed within one network model, the network parameters used at each step of the detection process, whether for extracting image features or for determining the collaboration relationship between images, can be continuously optimized, and detecting the co-saliency between images with the continuously optimized network model guarantees the accuracy of the detection results.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments are briefly introduced below. Obviously, persons of ordinary skill in the art may also derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a network model for detecting co-saliency between images according to an embodiment of the present invention;
Fig. 2 is a flowchart of an image co-saliency detection method according to an embodiment of the present invention;
Fig. 3 is a structural block diagram of an image co-saliency detection device according to an embodiment of the present invention.
Detailed description of embodiments
To make the above objectives, features and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Referring to Fig. 1, which is a schematic structural diagram of a network model for detecting co-saliency between images according to an embodiment of the present invention. The images to be detected can be input into the network model shown in Fig. 1 to perform the co-saliency detection of the embodiments of the present invention. The network model is divided into three branches: two of them (the upper and lower branches in Fig. 1) are co-saliency detection branches 11, and the remaining branch (the middle branch in Fig. 1) is a collaboration detection branch 12. The co-saliency detection branches 11 can be designed based on Fully Convolutional Networks (FCN), and the collaboration detection branch 12 can be designed based on a Convolutional Neural Network (CNN). The use of each branch of the network model will be described specifically in connection with the following steps.
It should be noted that the parameters of each network layer in the network model shown in Fig. 1, such as the convolutional layers, pooling layers and fully connected layers, can be determined from the description of the embodiments of the present invention without any creative effort, and the value of each parameter can be adjusted as needed; the embodiments of the present invention therefore do not specifically limit the parameters of the individual network layers.
Referring to Fig. 2, which is a flowchart of an image co-saliency detection method provided by an embodiment of the present invention. The method may specifically include the following steps.
Step 201: perform feature extraction on two images using the first network layer group to obtain a first feature map set and a second feature map set.
The network model provided in this embodiment can take multiple images as input simultaneously in order to detect whether co-saliency exists among them. When multiple images are input, this embodiment groups any two of the images into a pair and performs detection pair by pair. When every pair of images exhibits co-saliency, the multiple images exhibit co-saliency. For example, if 4 images are input simultaneously, namely image A, image B, image C and image D, they can be divided into 6 pairs: A and B, A and C, A and D, B and C, B and D, and C and D; the 6 pairs are detected separately to determine whether co-saliency correlation exists among the 4 images. Since the processing of each pair of images is essentially the same, the detection of one pair of images is described below to illustrate the embodiment of the present invention clearly.
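As a minimal sketch of the pairwise grouping described above (the variable names are illustrative, not from the patent), the six pairs of four input images can be enumerated as unordered 2-combinations:

```python
from itertools import combinations

images = ["A", "B", "C", "D"]
pairs = list(combinations(images, 2))  # C(4, 2) = 6 unordered pairs
print(pairs)
# [('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D')]
```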
Before feature extraction is performed on the images, the input images can be resized to a preset size. The image size can be expressed as M × N, where M denotes the number of pixel rows and N the number of pixel columns; for example, the images can be resized to 224 × 224.
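A one-line sketch of this resizing step, assuming torchvision is used for preprocessing:

```python
from torchvision import transforms

resize = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
# tensor = resize(pil_image)  # (3, 224, 224), ready for the first network layer group
```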
The co-saliency detection branch 11 described in Fig. 1 may include the first network layer group, which includes at least one convolutional layer and is used to extract the salient features of an image. Specifically, the first network layer group may include all the convolutional layers of a VGG16 network (13 convolutional layers) and three custom convolutional layers. The process of feature extraction on an image may be as follows: first, the two images are passed respectively through the convolutional layers included in the VGG16 network for convolution to obtain a third feature map set and a fourth feature map set; then the third and fourth feature map sets are convolved again, respectively, to obtain the first feature map set and the second feature map set. The feature maps in the first and second feature map sets are feature maps describing salient features.
While an image is being convolved by the first 13 convolutional layers of the VGG16 network, the 4 pooling layers of the VGG16 network can also be used to pool the feature maps; the embodiment of the present invention does not specifically limit this. The embodiment of the present invention extracts image features through deep network learning; the extracted features are deep features with semantic expressiveness and can describe each piece of salient-feature information of an image more accurately.
The two images on which feature extraction is performed may be called the first image and the second image respectively. After the first image passes through the 13 convolutional layers of the VGG16 network, multiple feature maps of lower resolution, e.g., 7 × 7 feature maps, are obtained, and these feature maps form the third feature map set. Since the VGG16 network is generic while the images that need feature extraction in a particular application are specific, the features extracted with the VGG16 network may not accurately describe the feature information of the salient regions of a specific image, so the features extracted with the VGG16 network need to be learned further, i.e., the third feature map set is convolved again by convolutional layers suited to the specific application. In this second round of convolution, the three custom convolutional layers can be used to convolve the third feature map set into multiple feature maps, and these feature maps form the first feature map set. After the third feature map set is convolved again by the custom convolutional layers, the obtained feature maps can describe the feature information of the salient regions of the image more accurately. The step of obtaining the second feature map set is similar to the step of obtaining the first feature map set and is not repeated here. It should be noted that the number and size of the feature maps in each feature map set can be set differently as needed and are not specifically limited here.
The first network layer group includes the first 13 convolutional layers of the VGG16 network and the three custom convolutional layers. To distinguish the three custom convolutional layers, they can be named convolutional layer 1, convolutional layer 2 and convolutional layer 3:
The kernel size of convolutional layer 1 can be 7 × 7 with stride 1; the kernel size of convolutional layer 2 can be 1 × 1 with stride 1; the kernel size of convolutional layer 3 can be 1 × 1 with stride 1. When the feature maps input into convolutional layer 1 are 7 × 7, the feature maps output by convolutional layers 1, 2 and 3 can all be 7 × 7; that is, each feature map in the first feature map set and the second feature map set can be 7 × 7. The embodiment of the present invention does not specifically limit the number of feature maps output by each network layer.
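A minimal PyTorch sketch of the first network layer group under the values given above, taking the 13 VGG16 convolutional layers from torchvision; the channel count of 256 matches the figure used later in the description, while the paddings (chosen so that 7 × 7 inputs stay 7 × 7) and the ReLU activations are assumptions, since the embodiment does not limit the number of feature maps per layer:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class FirstLayerGroup(nn.Module):
    """13 VGG16 convolutional layers followed by three custom convolutional layers."""
    def __init__(self, mid_channels=256):
        super().__init__()
        # torchvision >= 0.13 API; the VGG16 pooling layers are included
        self.backbone = vgg16(weights="IMAGENET1K_V1").features
        self.conv1 = nn.Conv2d(512, mid_channels, kernel_size=7, stride=1, padding=3)
        self.conv2 = nn.Conv2d(mid_channels, mid_channels, kernel_size=1, stride=1)
        self.conv3 = nn.Conv2d(mid_channels, mid_channels, kernel_size=1, stride=1)

    def forward(self, x):                 # x: (batch, 3, 224, 224)
        x = self.backbone(x)              # (batch, 512, 7, 7), the third/fourth map set
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return torch.relu(self.conv3(x))  # the first/second feature map set

maps = FirstLayerGroup()(torch.randn(1, 3, 224, 224))  # torch.Size([1, 256, 7, 7])
```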
Step 202: learn the collaboration between the first feature map set and the second feature map set using the second network layer group to obtain a collaborative feature map set.
The feature maps in the first feature map set and the second feature map set characterize the independent salient features of the two images respectively, so the collaboration between the first feature map set and the second feature map set needs to be learned to determine the collaborative salient features contained in both images.
As shown in Fig. 1, the second network layer group is located in the collaboration detection branch 12, and the first feature map set and the second feature map set obtained by the first network layer group are both input into the second network layer group. In the second network layer group, the first feature map set and the second feature map set can be superimposed into a fifth feature map set, and the fifth feature map set is then convolved to obtain the collaborative feature map set. If the first and second feature map sets each contain 256 feature maps of size 7 × 7, a fifth feature map set containing 512 feature maps of size 7 × 7 is obtained after superposition. When convolving the fifth feature map set, it can be passed through two convolutional layers to obtain the collaborative feature map set.
Superimposing two feature map sets means superimposing each feature map in one set with the corresponding feature map in the other set, and superimposing two feature maps means adding each element of one feature map matrix to the corresponding element of the other feature map matrix. For example, if the matrix of feature map A is [a, b] and the matrix of feature map B is [c, d], the feature map obtained by superimposing A and B is [a+c, b+d]. When the feature maps of the two images are superimposed, the co-salient features are amplified; after convolution, the features with co-saliency are displayed more clearly and accurately, while the features without co-saliency are suppressed.
The second network layer group may include a superposition layer and two convolutional layers; to distinguish the two convolutional layers, they are named convolutional layer 4 and convolutional layer 5. The superposition layer is used to superimpose the first feature map set and the second feature map set into the fifth feature map set. The kernel size of convolutional layer 4 is 7 × 7 with stride 1; the kernel size of convolutional layer 5 is 1 × 1 with stride 1. When the feature maps in the fifth feature map set are 7 × 7, the feature maps output by convolutional layers 4 and 5 may also be 7 × 7.
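Under the reading in which the two 256-map sets are stacked along the channel axis to form the 512-map fifth set (this matches the 256 + 256 = 512 count above; the element-wise-addition reading would keep 256 maps instead), the second network layer group can be sketched as follows, with paddings, channel counts and activations as assumptions:

```python
import torch
import torch.nn as nn

class SecondLayerGroup(nn.Module):
    """Superposition layer followed by convolutional layers 4 and 5."""
    def __init__(self, in_channels=256, out_channels=256):
        super().__init__()
        self.conv4 = nn.Conv2d(2 * in_channels, out_channels, kernel_size=7, stride=1, padding=3)
        self.conv5 = nn.Conv2d(out_channels, out_channels, kernel_size=1, stride=1)

    def forward(self, maps_a, maps_b):              # each: (batch, 256, 7, 7)
        fifth = torch.cat([maps_a, maps_b], dim=1)  # (batch, 512, 7, 7), the fifth set
        x = torch.relu(self.conv4(fifth))
        return torch.relu(self.conv5(x))            # the collaborative feature map set
```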
Step 203: superimpose the first feature map set and the second feature map set with the collaborative feature map set respectively, and deconvolve the two feature map sets obtained after superposition to obtain the co-saliency maps corresponding to the two images.
The process of superimposing the first feature map set and the second feature map set respectively with the collaborative feature map set is similar to the superposition of the first feature map set and the second feature map set in step 202 and is not repeated here. In a specific implementation, the feature maps can be enlarged by the deconvolution operation to obtain grayscale images that characterize the co-salient regions.
When the first and second feature map sets are superimposed with the collaborative feature map set respectively, the collaborative feature map set can act as a correction, thereby determining which feature maps in the first and second feature map sets carry co-salient features; after each superimposed feature map set is enlarged by deconvolution, the co-saliency maps are obtained. A co-saliency map can be a grayscale image of the same size as the original image, in which the co-salient region is lighter, close to white, while the region outside the co-salient region is darker, close to black, so that the co-salient regions of the two images can be determined from the shades of the pixels in the grayscale images.
As shown in Fig. 1, each co-saliency detection branch 11 further includes a third network layer group used in step 203. The third network layer group may include a superposition layer, a convolutional layer and three deconvolutional layers; to distinguish the three deconvolutional layers, they can be named deconvolutional layer 1, deconvolutional layer 2 and deconvolutional layer 3.
The superposition layer is used to superimpose the first feature map set and the second feature map set output by the first network layer group, respectively, with the collaborative feature maps output by the second network layer group. Before the superimposed feature maps are enlarged, they can first be passed through one convolutional layer as a transition, and the convolved feature maps are then input into the three deconvolutional layers for enlargement. The kernel size of this convolutional layer is 1 × 1 with stride 1, and its output feature maps can be 7 × 7. The kernel size of deconvolutional layer 1 is 4 × 4 with stride 2, and its output feature maps can be 14 × 14. The kernel size of deconvolutional layer 2 is 4 × 4 with stride 2, and its output feature maps can be 28 × 28. The kernel size of deconvolutional layer 3 is 16 × 16 with stride 8, and the output co-saliency map can be 224 × 224.
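A sketch of the third network layer group with the kernel and stride values given above; the paddings (1, 1 and 4, chosen so the sizes follow the stated 7 → 14 → 28 → 224 progression), the intermediate channel counts, the activations and the single-channel output are assumptions:

```python
import torch
import torch.nn as nn

class ThirdLayerGroup(nn.Module):
    """Superposition, 1 x 1 transition convolution, then three deconvolutional layers."""
    def __init__(self, channels=256):
        super().__init__()
        self.trans = nn.Conv2d(channels, channels, kernel_size=1, stride=1)
        self.deconv1 = nn.ConvTranspose2d(channels, 128, kernel_size=4, stride=2, padding=1)
        self.deconv2 = nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1)
        self.deconv3 = nn.ConvTranspose2d(64, 1, kernel_size=16, stride=8, padding=4)

    def forward(self, feature_maps, collab_maps):  # each: (batch, 256, 7, 7)
        x = feature_maps + collab_maps             # element-wise superposition
        x = torch.relu(self.trans(x))              # (batch, 256, 7, 7)
        x = torch.relu(self.deconv1(x))            # (batch, 128, 14, 14)
        x = torch.relu(self.deconv2(x))            # (batch, 64, 28, 28)
        return torch.sigmoid(self.deconv3(x))      # (batch, 1, 224, 224) co-saliency map
```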
The features extracted in the embodiments of the present invention are deep features that can accurately describe each piece of salient-feature information in an image, thereby guaranteeing that the co-salient regions between images are detected accurately. Since the entire detection process is completed within one network model, the network parameters used at each step of the detection process, whether for extracting image features or for determining the collaboration relationship between images, can be continuously optimized, and detecting the co-saliency between images with the continuously optimized network model guarantees the accuracy of the detection results.
Since the embodiment of the present invention detects the co-saliency between images with a network model, the detection results can also be used to optimize the relevant parameters of the network model. To optimize the network model with the co-saliency maps obtained by detection, after the step of obtaining the co-saliency maps corresponding to the two images, the method further includes: comparing the two co-saliency maps respectively with the corresponding ground-truth co-saliency maps to obtain a first error and a second error; and using the first error and the second error to optimize the parameters of the first network layer group, the second network layer group and the network layers used for the deconvolution. A ground-truth co-saliency map is a grayscale image that accurately characterizes the co-salient regions between the images.
The error L between a predicted co-saliency map and the ground-truth co-saliency map can be expressed, for example in the standard pixel-wise cross-entropy form consistent with the definitions below, as:

L = -(1/n) · Σ_{i=1..n} [ y_i · log(p_i) + (1 - y_i) · log(1 - p_i) ]

where n denotes the number of pixels included in the co-saliency map (or the ground-truth co-saliency map), y_i denotes the ground-truth label value of the i-th pixel in the ground-truth co-saliency map, and p_i denotes the predicted value of the i-th pixel in the co-saliency map.
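A sketch of the per-map error under the cross-entropy form above, treating the predicted map p and the ground-truth map y as flattened arrays of n pixel values in [0, 1]; the helper name is hypothetical:

```python
import numpy as np

def saliency_error(y, p, eps=1e-7):
    """Mean pixel-wise cross-entropy between ground truth y and prediction p."""
    p = np.clip(p, eps, 1.0 - eps)  # avoid log(0)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

y = np.array([1.0, 0.0, 1.0, 0.0])  # toy ground-truth pixel labels
p = np.array([0.9, 0.2, 0.8, 0.1])  # toy predicted pixel values
print(saliency_error(y, p))          # ~0.164
```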
To further guarantee the accuracy of co-saliency detection, the embodiment of the present invention can also use the error of the collaboration detection result to optimize the relevant parameters of the network model. The collaboration detection process may include: reducing the dimensionality of the collaborative feature map set to obtain a 1 × 2 feature vector, and then performing binary classification on the feature vector to obtain a classification result indicating whether the two images exhibit collaboration.
As shown in Fig. 1, the collaboration detection branch 12 further includes three fully connected layers after the second network layer group, and the dimensionality of the collaborative feature map set can be reduced through these three fully connected layers. The dimension-reduced feature vector can then be classified into two classes by a Softmax classifier to obtain the classification result of whether collaboration exists. The classification result can be expressed as 0 or 1; for example, an output of 0 indicates no collaboration and an output of 1 indicates collaboration.
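A sketch of the collaboration detection head: three fully connected layers reduce the collaborative feature map set to the 1 × 2 vector, and softmax yields the binary result. The hidden layer sizes (1024 and 128) are assumptions; the embodiment only fixes the final 1 × 2 dimension:

```python
import torch
import torch.nn as nn

class CollaborationHead(nn.Module):
    """Three fully connected layers plus a softmax binary classifier."""
    def __init__(self, channels=256, spatial=7):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Flatten(),                                   # (batch, 256*7*7)
            nn.Linear(channels * spatial * spatial, 1024), nn.ReLU(),
            nn.Linear(1024, 128), nn.ReLU(),
            nn.Linear(128, 2),                              # the 1 x 2 feature vector
        )

    def forward(self, collab_maps):                         # (batch, 256, 7, 7)
        # e.g. [[0.1, 0.9]] means class 1, i.e. the pair is collaborative
        return torch.softmax(self.fc(collab_maps), dim=1)
```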
The process of optimizing the network parameters with the classification error may include: comparing the classification result with the ground-truth classification result to obtain a third error, and then using the third error to optimize the parameters of the first network layer group, the second network layer group and the network layers used for the dimensionality reduction. The calculation formula of the third error is similar to that of the first error above and is not described again here.
By using the collaboration detection result to further optimize the network model for co-saliency detection, the embodiment of the present invention can improve the accuracy of co-saliency detection and prevent the detected salient regions from lacking collaboration.
When the network model in the embodiment of the present invention is trained with image samples, two images carrying a label of whether they are collaborative, together with the ground-truth co-saliency maps corresponding to the two images, can be input into the network model for training. The content represented by the label is the ground-truth classification result. The training process of the network model is similar to the co-saliency detection process for images described above and is not described again here.
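Wiring the sketches above together, one training step on a labeled image pair could look as follows; sharing the first network layer group between the two inputs, the equal weighting of the three errors and the optimizer settings are all assumptions not stated in the embodiment:

```python
import torch
import torch.nn.functional as F

first_group, second_group = FirstLayerGroup(), SecondLayerGroup()
third_group, head = ThirdLayerGroup(), CollaborationHead()
params = (list(first_group.parameters()) + list(second_group.parameters())
          + list(third_group.parameters()) + list(head.parameters()))
optimizer = torch.optim.SGD(params, lr=1e-3, momentum=0.9)

def train_step(img_a, img_b, gt_map_a, gt_map_b, gt_label):
    """gt_label: LongTensor of class indices, 1 = collaborative, 0 = not."""
    maps_a, maps_b = first_group(img_a), first_group(img_b)  # assumed weight sharing
    collab = second_group(maps_a, maps_b)
    pred_a = third_group(maps_a, collab)
    pred_b = third_group(maps_b, collab)
    loss = (F.binary_cross_entropy(pred_a, gt_map_a)         # first error
            + F.binary_cross_entropy(pred_b, gt_map_b)       # second error
            + F.cross_entropy(head.fc(collab), gt_label))    # third error, on logits
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```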
Referring to Fig. 3, which is a structural block diagram of an image co-saliency detection device provided by an embodiment of the present invention. The device may specifically include a feature extraction unit 301, a collaborative learning unit 302, a superposition unit 303 and a deconvolution unit 304.
The feature extraction unit 301 is configured to perform feature extraction on two images using the first network layer group to obtain a first feature map set and a second feature map set.
The collaborative learning unit 302 is configured to learn the collaboration between the first feature map set and the second feature map set using the second network layer group to obtain a collaborative feature map set.
The superposition unit 303 is configured to superimpose the first feature map set and the second feature map set with the collaborative feature map set respectively.
The deconvolution unit 304 is configured to deconvolve the two feature map sets obtained after superposition to obtain the co-saliency maps corresponding to the two images.
Preferably, the feature extraction unit is specifically configured to: pass the two images respectively through the convolutional layers included in the VGG16 network for convolution to obtain a third feature map set and a fourth feature map set; and convolve the third and fourth feature map sets again, respectively, to obtain the first feature map set and the second feature map set.
Preferably, the collaborative learning unit is specifically configured to: superimpose the first feature map set and the second feature map set to obtain a fifth feature map set; and convolve the fifth feature map set to obtain the collaborative feature map set.
Preferably, the device further includes a collaboration detection unit 305, configured to: reduce the dimensionality of the collaborative feature map set to obtain a 1 × 2 feature vector; and perform binary classification on the feature vector to obtain a classification result indicating whether the two images exhibit collaboration.
Preferably, the device further includes an optimization unit 306, configured to: compare the two co-saliency maps respectively with the corresponding ground-truth co-saliency maps to obtain a first error and a second error; and use the first error and the second error to optimize the parameters of the first network layer group, the second network layer group and the network layers used for the deconvolution.
Preferably, the optimization unit 306 is further configured to: compare the classification result with the ground-truth classification result to obtain a third error; and use the third error to optimize the parameters of the first network layer group, the second network layer group and the network layers used for the dimensionality reduction.
Persons skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the system, device and units described above may refer to the corresponding processes in the foregoing method embodiments and are not described again here.
The device embodiments described above are merely exemplary. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment. Persons of ordinary skill in the art can understand and implement this without creative effort.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus the necessary general hardware platform, or of course by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are merely intended to describe the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An image co-saliency detection method, characterized by comprising:
performing feature extraction on two images using a first network layer group to obtain a first feature map set and a second feature map set;
learning the collaboration between the first feature map set and the second feature map set using a second network layer group to obtain a collaborative feature map set; and
superimposing the first feature map set and the second feature map set with the collaborative feature map set respectively, and deconvolving the two feature map sets obtained after superposition to obtain the co-saliency maps corresponding to the two images.
2. The method according to claim 1, characterized in that the step of performing feature extraction on two images using the first network layer group to obtain the first feature map set and the second feature map set comprises:
passing the two images respectively through the convolutional layers included in a VGG16 network for convolution to obtain a third feature map set and a fourth feature map set; and
convolving the third feature map set and the fourth feature map set again, respectively, to obtain the first feature map set and the second feature map set.
3. The method according to claim 1, characterized in that the step of learning the collaboration between the first feature map set and the second feature map set using the second network layer group to obtain the collaborative feature map set comprises:
superimposing the first feature map set and the second feature map set to obtain a fifth feature map set; and
convolving the fifth feature map set to obtain the collaborative feature map set.
4. The method according to claim 1, characterized in that after the step of obtaining the co-saliency maps corresponding to the two images, the method further comprises:
comparing the two co-saliency maps respectively with the corresponding ground-truth co-saliency maps to obtain a first error and a second error; and
using the first error and the second error to optimize the parameters of the first network layer group, the second network layer group and the network layers used for the deconvolution.
5. The method according to claim 1, characterized by further comprising:
reducing the dimensionality of the collaborative feature map set to obtain a 1 × 2 feature vector; and
performing binary classification on the feature vector to obtain a classification result indicating whether the two images exhibit collaboration.
6. The method according to claim 5, characterized in that after the step of performing binary classification on the feature vector to obtain the classification result indicating whether the two images exhibit collaboration, the method further comprises:
comparing the classification result with the ground-truth classification result to obtain a third error; and
using the third error to optimize the parameters of the first network layer group, the second network layer group and the network layers used for the dimensionality reduction.
7. An image co-saliency detection device, characterized by comprising:
a feature extraction unit, configured to perform feature extraction on two images using a first network layer group to obtain a first feature map set and a second feature map set;
a collaborative learning unit, configured to learn the collaboration between the first feature map set and the second feature map set using a second network layer group to obtain a collaborative feature map set;
a superposition unit, configured to superimpose the first feature map set and the second feature map set with the collaborative feature map set respectively; and
a deconvolution unit, configured to deconvolve the two feature map sets obtained after superposition to obtain the co-saliency maps corresponding to the two images.
8. The device according to claim 7, characterized in that the feature extraction unit is specifically configured to:
pass the two images respectively through the convolutional layers included in a VGG16 network for convolution to obtain a third feature map set and a fourth feature map set; and
convolve the third feature map set and the fourth feature map set again, respectively, to obtain the first feature map set and the second feature map set.
9. The device according to claim 7, characterized in that the collaborative learning unit is specifically configured to:
superimpose the first feature map set and the second feature map set to obtain a fifth feature map set; and
convolve the fifth feature map set to obtain the collaborative feature map set.
10. The device according to claim 7, characterized by further comprising an optimization unit configured to:
compare the two co-saliency maps respectively with the corresponding ground-truth co-saliency maps to obtain a first error and a second error; and
use the first error and the second error to optimize the parameters of the first network layer group, the second network layer group and the network layers used for the deconvolution.
CN201810103215.4A 2018-02-01 2018-02-01 Image collaborative saliency detection method and device Active CN110111295B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810103215.4A CN110111295B (en) 2018-02-01 2018-02-01 Image collaborative saliency detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810103215.4A CN110111295B (en) 2018-02-01 2018-02-01 Image collaborative saliency detection method and device

Publications (2)

Publication Number Publication Date
CN110111295A 2019-08-09
CN110111295B 2021-06-11

Family

ID=67483213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810103215.4A Active CN110111295B (en) 2018-02-01 2018-02-01 Image collaborative saliency detection method and device

Country Status (1)

Country Link
CN (1) CN110111295B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521617A (en) * 2011-12-26 2012-06-27 西北工业大学 Method for detecting collaboration saliency by aid of sparse bases
CN103942774A (en) * 2014-01-20 2014-07-23 天津大学 Multi-target collaborative salient-region detection method based on similarity propagation
CN104966285A (en) * 2015-06-03 2015-10-07 北京工业大学 Method for detecting saliency regions
CN106157319A (en) * 2016-07-28 2016-11-23 哈尔滨工业大学 The significance detection method that region based on convolutional neural networks and Pixel-level merge
CN107169417A (en) * 2017-04-17 2017-09-15 上海大学 Strengthened based on multinuclear and the RGBD images of conspicuousness fusion cooperate with conspicuousness detection method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DINGWEN ZHANG ET AL: "Co-saliency detection via looking deep and wide", 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
HONGYANG LI ET AL: "CNN for saliency detection with low-level feature integration", Neurocomputing *
LINA WEI ET AL: "Group-wise Deep Co-saliency Detection", Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17) *
DENG HAOHAI: "Research on visual co-saliency object detection algorithms", China Master's Theses Full-text Database, Information Science and Technology *

Also Published As

Publication number Publication date
CN110111295B (en) 2021-06-11

Similar Documents

Publication Publication Date Title
Haque et al. Object detection based on VGG with ResNet network
CN107844795B (en) Convolutional neural networks feature extracting method based on principal component analysis
CN106682569A Fast traffic sign recognition method based on convolutional neural network
CN110399821B (en) Customer satisfaction acquisition method based on facial expression recognition
CN105069448A (en) True and false face identification method and device
CN109409384A (en) Image-recognizing method, device, medium and equipment based on fine granularity image
CN109190643A Convolutional neural network-based traditional Chinese medicine recognition method and electronic device
CN112907595B (en) Surface defect detection method and device
CN106408037A (en) Image recognition method and apparatus
CN107633229A Face detection method and device based on convolutional neural networks
CN108416270A Traffic sign recognition method based on multi-attribute joint features
CN109977830A Face fusion detection method based on color-texture dual-channel convolutional neural network and recurrent neural network
CN111401156A (en) Image identification method based on Gabor convolution neural network
CN109472733A (en) Image latent writing analysis method based on convolutional neural networks
CN112364883A (en) American license plate recognition method based on single-stage target detection and deptext recognition network
CN106650798A (en) Indoor scene recognition method combining deep learning and sparse representation
Majumder et al. A tale of a deep learning approach to image forgery detection
CN111881965B (en) Hyperspectral pattern classification and identification method, device and equipment for medicinal material production place grade
Chaturvedi et al. Automatic license plate recognition system using SURF features and RBF neural network
Zuo et al. An intelligent knowledge extraction framework for recognizing identification information from real-world ID card images
Gao et al. Segmentation-free vehicle license plate recognition using CNN
Li et al. Multi-scale convolutional neural networks for natural scene license plate detection
Zeng et al. Miniature interactive offset networks (minions) for wafer map classification
CN110111295A (en) A kind of image collaboration conspicuousness detection method and device
CN112884022B (en) Unsupervised depth characterization learning method and system based on image translation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant