CN108229504A - Image analysis method and device - Google Patents

Image analysis method and device

Info

Publication number
CN108229504A
Authority
CN
China
Prior art keywords
module
semantic segmentation
image
feature
edge detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810085628.4A
Other languages
Chinese (zh)
Other versions
CN108229504B (en)
Inventor
陈益民
张伟
林倞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN201810085628.4A priority Critical patent/CN108229504B/en
Publication of CN108229504A publication Critical patent/CN108229504A/en
Application granted granted Critical
Publication of CN108229504B publication Critical patent/CN108229504B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding

Abstract

The present disclosure relates to an image analysis method and device. The method is implemented by an analysis model that includes a feature sharing module, a semantic segmentation module, and an edge detection module. The method includes: performing feature extraction on an image to be analyzed through the feature sharing module to obtain shared features, the shared features including feature information at multiple network depths; and performing semantic segmentation processing and edge detection processing on the shared features through the semantic segmentation module and the edge detection module, respectively, to obtain a preliminary semantic segmentation result and a preliminary edge detection result of the image to be analyzed. According to embodiments of the present disclosure, the shared features of the image to be analyzed can be extracted by the analysis model, and semantic segmentation processing and edge detection processing can be performed on the shared features to obtain the preliminary semantic segmentation result and the preliminary edge detection result, thereby improving the consistency between the semantic segmentation result and the edge detection result.

Description

Image analysis method and device
Technical field
The present disclosure relates to the field of computer technology, and in particular to an image analysis method and device.
Background technology
With quick universal and e-commerce the rise and development of internet, the image analysis based on computer vision Technology has obtained unprecedented development.Object (such as human body, animal, vehicle etc.) in image is parsed and identifies object Various pieces (such as the head of people, arm, clothes etc.), have in fields such as video monitoring, personage's behavioural analyses important Meaning.
In the related art, the detection of object is often relied on to the parsing of objects one or more in image, is being detected After going out the object in image, then the object is parsed.This mode is larger to object detection result dependence, it is possible that The inconsistent situation of testing result and segmentation (parsing) result, accuracy and precision are poor, can not meet demand.
Summary
In view of this, the present disclosure provides an image analysis method and device that can improve the consistency between the semantic segmentation result and the edge detection result of an image to be analyzed.
According to an aspect of the present disclosure, an image analysis method is provided. The method is implemented by an analysis model, and the analysis model includes a feature sharing module, a semantic segmentation module, and an edge detection module.
The method includes:
performing feature extraction on an image to be analyzed through the feature sharing module to obtain shared features, the shared features including feature information at multiple network depths produced by multiple network layers of the feature sharing module;
performing semantic segmentation processing and edge detection processing on the shared features through the semantic segmentation module and the edge detection module, respectively, to obtain a preliminary semantic segmentation result and a preliminary edge detection result of the image to be analyzed.
In a possible implementation, the analysis model further includes an aggregation optimization module.
After the step of obtaining the preliminary semantic segmentation result and the preliminary edge detection result of the image to be analyzed, the method further includes:
combining the preliminary semantic segmentation result and the preliminary edge detection result into a feature layer in the aggregation optimization module;
aggregating the feature layer in the aggregation optimization module, and performing optimization processing using multiple convolutional network layers of the aggregation optimization module, to determine a semantic segmentation result and an edge detection result for the image to be analyzed.
In a possible implementation, the feature sharing module includes a cascaded first convolution-pooling network, a second residual network, a third residual network, a fourth residual network, and a fifth residual network;
the shared features include the feature information output by the third residual network, the fourth residual network, and the fifth residual network, respectively.
In a possible implementation, after the step of determining the semantic segmentation result and the edge detection result for the image to be analyzed, the method further includes:
performing splitting processing on the edge detection result to determine multiple split regions in the image to be analyzed;
performing aggregation processing on the multiple split regions according to the semantic segmentation result to determine an aggregation region of at least one analysis object in the image to be analyzed;
associating the aggregation region of the at least one analysis object with the semantic segmentation result to determine an analysis result for the at least one analysis object.
In a possible implementation, the edge detection result includes an edge map.
Performing splitting processing on the edge detection result to determine multiple split regions in the image to be analyzed includes:
scanning the edge map in the horizontal direction and the vertical direction, respectively, to obtain multiple horizontal line segments and multiple vertical line segments in the non-background area, where the endpoints of each horizontal line segment and each vertical line segment are edge points in the edge map, the pixels covered by each horizontal line segment belong to the same split region, and the pixels covered by each vertical line segment belong to the same split region;
performing aggregation processing on the regions covered by the multiple horizontal line segments and the multiple vertical line segments in the non-background area, to obtain the multiple split regions in the image to be analyzed.
In a possible implementation, performing aggregation processing on the multiple split regions according to the semantic segmentation result to determine the aggregation region of at least one analysis object in the image to be analyzed includes at least one of the following steps:
when the size of a split region is greater than or equal to a first threshold and the split region contains multiple semantic segmentation results, determining that the split region is an aggregation region of a single analysis object;
when the size of a split region is less than a second threshold and the split region contains one semantic segmentation result, merging the split region into the aggregation region closest to the split region.
In a possible implementation, the method further includes:
inputting a sample image into an initial analysis model for processing to obtain a training analysis result for the sample image, where the initial analysis model includes an initial feature sharing module, an initial semantic segmentation module, an initial edge detection module, an initial aggregation optimization module, and a supervision module;
determining a model loss of the sample image according to an expected analysis result of the sample image and the training analysis result, the model loss of the sample image including a weighted sum of the model losses of the initial feature sharing module, the initial semantic segmentation module, the initial edge detection module, and the initial aggregation optimization module;
adjusting parameter weights in the initial analysis model according to the model loss of the sample image to determine an adjusted analysis model;
when the model loss of the sample image satisfies a training condition, determining the adjusted analysis model as the final analysis model.
In a possible implementation,
the preliminary semantic segmentation result includes a semantic feature and a semantic segmentation map; and/or
the preliminary edge detection result includes an edge feature and an edge detection map.
In a possible implementation, the feature sharing module, the semantic segmentation module, the edge detection module, and the aggregation optimization module each include a fully convolutional neural network.
According to another aspect of the present disclosure, an image analysis device is provided. The device is implemented by an analysis model, and the analysis model includes a feature sharing module, a semantic segmentation module, and an edge detection module. The device includes:
a shared feature obtaining unit, configured to perform feature extraction on an image to be analyzed through the feature sharing module to obtain shared features, the shared features including feature information at multiple network depths produced by multiple network layers of the feature sharing module;
a preliminary result determining unit, configured to perform semantic segmentation processing and edge detection processing on the shared features through the semantic segmentation module and the edge detection module, respectively, to obtain a preliminary semantic segmentation result and a preliminary edge detection result of the image to be analyzed.
In a possible implementation, the analysis model further includes an aggregation optimization module, and the device further includes:
a feature layer combining unit, configured to combine the preliminary semantic segmentation result and the preliminary edge detection result into a feature layer in the aggregation optimization module;
an aggregation optimization unit, configured to aggregate the feature layer in the aggregation optimization module and perform optimization processing using multiple convolutional network layers of the aggregation optimization module, to determine a semantic segmentation result and an edge detection result for the image to be analyzed.
In a possible implementation, the feature sharing module includes a cascaded first convolution-pooling network, a second residual network, a third residual network, a fourth residual network, and a fifth residual network;
the shared features include the feature information output by the third residual network, the fourth residual network, and the fifth residual network, respectively.
In a possible implementation, the device further includes:
a splitting processing unit, configured to perform splitting processing on the edge detection result to determine multiple split regions in the image to be analyzed;
an aggregation region determining unit, configured to perform aggregation processing on the multiple split regions according to the semantic segmentation result to determine an aggregation region of at least one analysis object in the image to be analyzed;
an analysis result determining unit, configured to associate the aggregation region of the at least one analysis object with the semantic segmentation result to determine an analysis result for the at least one analysis object.
In a possible implementation, the edge detection result includes an edge map, and the splitting processing unit includes:
a scanning subunit, configured to scan the edge map in the horizontal direction and the vertical direction, respectively, to obtain multiple horizontal line segments and multiple vertical line segments in the non-background area, where the endpoints of each horizontal line segment and each vertical line segment are edge points in the edge map, the pixels covered by each horizontal line segment belong to the same split region, and the pixels covered by each vertical line segment belong to the same split region;
a split region determining subunit, configured to perform aggregation processing on the regions covered by the multiple horizontal line segments and the multiple vertical line segments in the non-background area, to obtain the multiple split regions in the image to be analyzed.
In a possible implementation, the aggregation region determining unit includes at least one of the following subunits:
a region determining subunit, configured to determine, when the size of a split region is greater than or equal to a first threshold and the split region contains multiple semantic segmentation results, that the split region is an aggregation region of a single analysis object;
a region merging subunit, configured to merge, when the size of a split region is less than a second threshold and the split region contains one semantic segmentation result, the split region into the aggregation region closest to the split region.
In a possible implementation, the device further includes:
a training analysis unit, configured to input a sample image into an initial analysis model for processing to obtain a training analysis result for the sample image, where the initial analysis model includes an initial feature sharing module, an initial semantic segmentation module, an initial edge detection module, an initial aggregation optimization module, and a supervision module;
a loss determining unit, configured to determine a model loss of the sample image according to an expected analysis result of the sample image and the training analysis result, the model loss of the sample image including a weighted sum of the model losses of the initial feature sharing module, the initial semantic segmentation module, the initial edge detection module, and the initial aggregation optimization module;
a model adjusting unit, configured to adjust parameter weights in the initial analysis model according to the model loss of the sample image to determine an adjusted analysis model;
a model determining unit, configured to determine, when the model loss of the sample image satisfies a training condition, the adjusted analysis model as the final analysis model.
In a possible implementation,
the preliminary semantic segmentation result includes a semantic feature and a semantic segmentation map; and/or
the preliminary edge detection result includes an edge feature and an edge detection map.
In a possible implementation, the feature sharing module, the semantic segmentation module, the edge detection module, and the aggregation optimization module each include a fully convolutional neural network.
According to another aspect of the present disclosure, an image analysis device is provided, including: a processor; and a memory configured to store processor-executable instructions, where the processor is configured to perform the above method.
According to another aspect of the present disclosure, a non-volatile computer-readable storage medium is provided, on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the above method.
According to the image analysis method and device of the aspects of the present disclosure, the shared features of the image to be analyzed can be extracted through the feature sharing module of the analysis model, and semantic segmentation processing and edge detection processing can be performed on the shared features through the semantic segmentation module and the edge detection module, respectively, to obtain the preliminary semantic segmentation result and the preliminary edge detection result, thereby improving the consistency between the semantic segmentation result and the edge detection result of the image to be analyzed.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the present disclosure together with the specification, and serve to explain the principles of the present disclosure.
Fig. 1 is a flowchart of an image analysis method according to an exemplary embodiment.
Fig. 2 is a schematic diagram of an analysis model according to an exemplary embodiment.
Fig. 3 is a flowchart of an image analysis method according to an exemplary embodiment.
Fig. 4 is a flowchart of an image analysis method according to an exemplary embodiment.
Fig. 5a, Fig. 5b, and Fig. 5c are schematic diagrams of a sample image according to an exemplary embodiment.
Fig. 6 is a schematic diagram of an initial analysis model according to an exemplary embodiment.
Fig. 7 is a flowchart of an image analysis method according to an exemplary embodiment.
Fig. 8 is a schematic diagram of an image analysis method according to an exemplary embodiment.
Fig. 9 is a block diagram of an image analysis device according to an exemplary embodiment.
Fig. 10 is a block diagram of an image analysis device according to an exemplary embodiment.
Detailed description
Various exemplary embodiments, features, and aspects of the present disclosure are described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings denote elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically noted.
The word "exemplary" as used herein means "serving as an example, embodiment, or illustration". Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are given in the following detailed description in order to better explain the present disclosure. Those skilled in the art will understand that the present disclosure can be practiced without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure.
Fig. 1 is a flowchart of an image analysis method according to an exemplary embodiment. The method can be applied to a server. The method is implemented by an analysis model, and the analysis model includes a feature sharing module, a semantic segmentation module, and an edge detection module. As shown in Fig. 1, the image analysis method according to an embodiment of the present disclosure includes:
In step S101, feature extraction is performed on an image to be analyzed through the feature sharing module to obtain shared features, the shared features including feature information at multiple network depths produced by multiple network layers of the feature sharing module;
In step S102, semantic segmentation processing and edge detection processing are performed on the shared features through the semantic segmentation module and the edge detection module, respectively, to obtain a preliminary semantic segmentation result and a preliminary edge detection result of the image to be analyzed.
According to embodiments of the present disclosure, the shared features of the image to be analyzed can be extracted by the feature sharing module of the analysis model, and semantic segmentation processing and edge detection processing can be performed on the shared features by the semantic segmentation module and the edge detection module, respectively, to obtain the preliminary semantic segmentation result and the preliminary edge detection result, thereby improving the consistency between the semantic segmentation result and the edge detection result of the image to be analyzed.
For example, semantic segmentation and edge detection share some crucial common ground: both need to densely identify objects and their positions, and both determine this from low-level information, such as differences between adjacent pixels, and from high-level information used for localization. A partially shared convolutional neural network can therefore be used to extract the shared features of the image to be analyzed, which improves the efficiency of feature extraction.
For an image to be analyzed that contains one or more objects (such as human bodies, animals, or vehicles), the image can be input into a pre-trained analysis model for processing, and the analysis model can include multiple fully convolutional neural networks. The analysis model can extract the shared features of the image to be analyzed through shared convolution operations, input the shared features into the semantic segmentation module and the object edge detection module for semantic segmentation processing and edge detection processing, respectively, and determine the preliminary semantic segmentation result and the preliminary edge detection result of the image to be analyzed.
Fig. 2 is a schematic diagram of an analysis model according to an exemplary embodiment. As shown in Fig. 2, the analysis model includes a feature sharing module 21, a semantic segmentation module 22, and an edge detection module 23.
In a possible implementation, the feature sharing module 21 may include a cascaded first convolution-pooling network (conv1+pool1), a second residual network (res2), a third residual network (res3), a fourth residual network (res4), and a fifth residual network (res5); the shared features include the feature information output by the third residual network (res3), the fourth residual network (res4), and the fifth residual network (res5), respectively.
The feature sharing module 21 (the backbone neural network) may use a fully convolutional neural network model, for example a 101-layer ResNet residual network model. As shown in Fig. 2, the feature sharing module 21 may include cascaded convolutional neural networks such as conv1+pool1 (convolution 1 + pooling 1), res2 (residual block 2), res3 (residual block 3), res4 (residual block 4), and res5 (residual block 5). Different network depths of these convolutional neural networks correspond to different feature information, and combining features from multiple network depths helps improve the accuracy of image analysis. The present disclosure does not limit the specific neural network model of the feature sharing module 21 or the specific number of network layers of the model.
In a possible implementation, the image to be analyzed 25 is input into the feature sharing module 21 for feature extraction, and multiple shared features 201 of the image to be analyzed 25 can be obtained (for example, the three pieces of feature information output by the networks res3, res4, and res5 in Fig. 2). The shared features include feature information at multiple network depths produced by multiple network layers of the feature sharing module, and can thus represent information of the image to be analyzed 25 at different depths. The multiple shared features 201 can be input into the semantic segmentation module 22 and the edge detection module 23, respectively, for processing. It should be understood that shared features output by any number of neural network layers can be obtained; the present disclosure does not limit this.
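As an illustration of the feature sharing module described above, the following is a minimal sketch in PyTorch, assuming a torchvision ResNet-101 backbone; the stage names follow the res3/res4/res5 labels of Fig. 2, while the class name and all other details are illustrative assumptions rather than the patent's implementation.

```python
import torch
import torchvision

class FeatureSharingModule(torch.nn.Module):
    """Shared backbone: returns feature maps at three network depths (res3, res4, res5)."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet101()
        # conv1+pool1 and res2 are shared but not exported; only the deeper stages are returned
        self.stem = torch.nn.Sequential(backbone.conv1, backbone.bn1,
                                        backbone.relu, backbone.maxpool)
        self.res2, self.res3 = backbone.layer1, backbone.layer2
        self.res4, self.res5 = backbone.layer3, backbone.layer4

    def forward(self, image):
        x = self.res2(self.stem(image))
        f3 = self.res3(x)      # shallower-depth feature information
        f4 = self.res4(f3)     # intermediate depth
        f5 = self.res5(f4)     # deepest features
        return f3, f4, f5      # the "shared features 201" fed to both branches

# usage sketch
shared = FeatureSharingModule()
f3, f4, f5 = shared(torch.randn(1, 3, 512, 512))
```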
In a possible implementation, the semantic segmentation module 22 (the semantic segmentation network) can segment the various parts of an object (a human body) in the image, for example the head, the upper body, the arms, the jacket, and the skirt of a person in the image. The preliminary semantic segmentation result may include a semantic feature and a semantic segmentation map. As shown in Fig. 2, the semantic segmentation module 22 may include multiple cascaded convolutional layers and a pyramid pooling layer. The multiple shared features 201 can be combined into a new feature layer and input into the cascaded convolutional layers for semantic segmentation processing, so that a semantic feature 202 across different network depths is obtained from the feature information at multiple network depths; the semantic feature 202 can then be input into the pyramid pooling layer for fusion processing to obtain a semantic segmentation map 203; and the semantic feature 202 and the semantic segmentation map 203 can together serve as the preliminary semantic segmentation result. As shown in Fig. 2, the semantic segmentation map 203 can be represented as image 26.
In a possible implementation, the edge detection module 23 (the edge detection network) can detect the edge positions of an object (a human body) in the image. The preliminary edge detection result may include an edge feature and an edge detection map. As shown in Fig. 2, the edge detection module 23 may include multiple cascaded convolutional layers and a pyramid pooling layer. The cascaded convolutional layers can apply dilated (atrous) convolution operations with different rates to the multiple shared features 201 to detect edge feature information at different network depths (multiple edge features 204); the different feature layers of the convolutional part can be aggregated, and the edge feature information at multiple network depths can be used to obtain all the edge information in the image, yielding an edge detection map 205; and the edge feature 204 and the edge detection map 205 can together serve as the preliminary edge detection result. As shown in Fig. 2, the edge detection map 205 can be represented as image 27.
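The two branches described above can be sketched as follows, again in PyTorch and under the same assumptions; the channel sizes, dilation rates, and the simple pooling pyramid are illustrative choices, not the patent's exact configuration.

```python
import torch
import torch.nn.functional as F
from torch import nn

class SegmentationBranch(nn.Module):
    """Semantic segmentation module: cascaded convolutions plus pyramid pooling over the shared features."""
    def __init__(self, in_channels=512 + 1024 + 2048, num_parts=20):
        super().__init__()
        self.convs = nn.Sequential(nn.Conv2d(in_channels, 256, 3, padding=1), nn.ReLU(),
                                   nn.Conv2d(256, 256, 3, padding=1), nn.ReLU())
        self.pyramid = nn.ModuleList([nn.AdaptiveAvgPool2d(s) for s in (1, 2, 3, 6)])
        self.classify = nn.Conv2d(256 * 5, num_parts, 1)

    def forward(self, feats):
        h, w = feats[0].shape[-2:]
        x = torch.cat([F.interpolate(f, (h, w), mode="bilinear", align_corners=False)
                       for f in feats], dim=1)        # combine the shared features into one layer
        sem_feat = self.convs(x)                      # semantic feature (202)
        pooled = [F.interpolate(p(sem_feat), (h, w), mode="bilinear", align_corners=False)
                  for p in self.pyramid]
        seg_map = self.classify(torch.cat([sem_feat] + pooled, dim=1))   # semantic segmentation map (203)
        return sem_feat, seg_map

class EdgeBranch(nn.Module):
    """Edge detection module: dilated convolutions at several rates, then aggregation."""
    def __init__(self, in_channels=512 + 1024 + 2048):
        super().__init__()
        self.dilated = nn.ModuleList([nn.Conv2d(in_channels, 64, 3, padding=r, dilation=r)
                                      for r in (1, 2, 4)])
        self.to_edge_map = nn.Conv2d(64 * 3, 1, 1)    # single-channel edge prediction

    def forward(self, feats):
        h, w = feats[0].shape[-2:]
        x = torch.cat([F.interpolate(f, (h, w), mode="bilinear", align_corners=False)
                       for f in feats], dim=1)
        edge_feats = torch.cat([F.relu(d(x)) for d in self.dilated], dim=1)   # edge features (204)
        return edge_feats, self.to_edge_map(edge_feats)                        # edge detection map (205)
```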
Fig. 3 is a flowchart of an image analysis method according to an exemplary embodiment. In a possible implementation, the analysis model further includes an aggregation optimization module. As shown in Fig. 3, after step S102, the method further includes:
In step S103, the preliminary semantic segmentation result and the preliminary edge detection result are combined into a feature layer in the aggregation optimization module;
In step S104, the feature layer is aggregated in the aggregation optimization module, and optimization processing is performed using multiple convolutional network layers of the aggregation optimization module, to determine a semantic segmentation result and an edge detection result for the image to be analyzed.
For example, the aggregation optimization module 24 (the aggregation optimization network) can be used to refine the results of semantic segmentation and edge detection. After the preliminary semantic segmentation result and the preliminary edge detection result are obtained, further optimization processing can be performed by the aggregation optimization module of the analysis model. As shown in Fig. 2, the aggregation optimization module 24 may include multiple cascaded convolutional layers and two pyramid pooling layers, one for semantic segmentation and one for edge detection. The semantic feature 202, the semantic segmentation map 203, the edge feature 204, and the edge detection map 205 can be combined into a new feature layer, input into the cascaded convolutional layers of the aggregation optimization module 24 for aggregation, and optimized using the feature information at multiple network depths; after processing by the pyramid pooling layer for semantic segmentation and the pyramid pooling layer for edge detection, the final semantic segmentation result 206 (which can be represented as image 28 in Fig. 2) and edge detection result 207 (which can be represented as image 29 in Fig. 2) for the image to be analyzed can be determined. A sketch of this module is given after the next paragraph.
In this way, the semantic segmentation result and the edge detection result of the image to be analyzed can be obtained, improving the precision of image analysis.
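Below is a minimal sketch of the aggregation optimization module under the same assumptions as the previous sketches: the four branch outputs are concatenated into one feature layer, refined by a few convolutions, and read out by two heads; the simple 1x1 heads here stand in for the two pyramid pooling layers, and all channel sizes are illustrative.

```python
import torch
from torch import nn

class AggregationOptimizationModule(nn.Module):
    """Refines segmentation and edges jointly from the concatenated preliminary results."""
    def __init__(self, sem_ch=256, seg_ch=20, edge_feat_ch=192, num_parts=20):
        super().__init__()
        in_ch = sem_ch + seg_ch + edge_feat_ch + 1     # feature layer = 202 + 203 + 204 + 205
        self.refine = nn.Sequential(nn.Conv2d(in_ch, 256, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(256, num_parts, 1)   # stands in for the segmentation pyramid head
        self.edge_head = nn.Conv2d(256, 1, 1)          # stands in for the edge pyramid head

    def forward(self, sem_feat, seg_map, edge_feats, edge_map):
        x = torch.cat([sem_feat, seg_map, edge_feats, edge_map], dim=1)
        x = self.refine(x)
        return self.seg_head(x), self.edge_head(x)     # final results 206 and 207
```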
In a possible implementation, the initial analysis model can be trained before the image to be analyzed is processed, so as to determine the parameter weights of the parameters in the initial analysis model and make the final analysis model obtained by training meet the accuracy requirements. The training process of the analysis model is described below.
Fig. 4 is a flowchart of an image analysis method according to an exemplary embodiment. Fig. 5a, Fig. 5b, and Fig. 5c are schematic diagrams of a sample image according to an exemplary embodiment.
As shown in Fig. 4, in a possible implementation, the method may further include:
In step S105, a sample image is input into an initial analysis model for processing to obtain a training analysis result for the sample image, where the initial analysis model includes an initial feature sharing module, an initial semantic segmentation module, an initial edge detection module, an initial aggregation optimization module, and a supervision module;
In step S106, a model loss of the sample image is determined according to an expected analysis result of the sample image and the training analysis result, the model loss of the sample image including a weighted sum of the model losses of the initial feature sharing module, the initial semantic segmentation module, the initial edge detection module, and the initial aggregation optimization module;
In step S107, parameter weights in the initial analysis model are adjusted according to the model loss of the sample image to determine an adjusted analysis model;
In step S108, when the model loss of the sample image satisfies a training condition, the adjusted analysis model is determined as the final analysis model.
For example, sample images may be used to train the initial analysis model. As shown in Fig. 5a, the sample images can be e-commerce model pictures, public academic pictures (data sets), and the like. The object parts can be annotated at the pixel level in the sample image (as shown in Fig. 5b), and the human body to which each part belongs can be distinguished (as shown in Fig. 5c, divided into human bodies 1 to 11). The sample image annotated with the object positions can be used as the expected analysis result.
Fig. 6 is a schematic diagram of an initial analysis model according to an exemplary embodiment. As shown in Fig. 6, the initial analysis model may include an initial feature sharing module 61, an initial semantic segmentation module 62, an initial edge detection module 63, an initial aggregation optimization module 64, and a supervision module 65.
The supervision module 65 may include multiple atrous spatial pyramid pooling (ASPP) layers, and the number of ASPP layers may be the same as the number of training shared features output by the initial feature sharing module 61 (for example, 3). Multiple pieces of supervision information can be input into the multiple ASPP layers, respectively, to generate multiple supervised edge features, which, together with the multiple training shared features output by the initial feature sharing module 61, serve as the input of the initial edge detection module 63. In this way, the training speed and the training effect of the entire initial analysis model can be improved.
In a possible implementation, the sample image can be input into the initial feature sharing module 61 for feature extraction to determine multiple training shared features of the sample image; the multiple training shared features are input into the initial semantic segmentation module 62 for semantic segmentation processing to determine a training semantic feature and a training semantic segmentation map (a preliminary semantic segmentation result) of the sample image; supervision features are input into the supervision module 65 for processing to determine supervised edge features; the training shared features and the supervised edge features are input into the initial edge detection module 63 for edge detection processing to determine a training edge feature and a training edge detection map (a preliminary edge detection result); and the training semantic feature, the training semantic segmentation map, the training edge feature, and the training edge detection map are input into the initial aggregation optimization module 64 for optimization processing to determine a training analysis result (that is, a semantic segmentation result and an edge detection result) for the sample image. The specific process of obtaining the training analysis result of the sample image is similar to the process of obtaining the semantic segmentation result and the edge detection result of the image to be analyzed, and is not repeated here.
In a possible implementation, the model loss of the sample image can be determined according to the expected analysis result of the sample image (that is, the manually annotated object positions and semantic segmentation) and the training analysis result. The model loss of the sample image includes a weighted sum of the model losses of the initial feature sharing module 61, the initial semantic segmentation module 62, the initial edge detection module 63, the initial aggregation optimization module 64, and the supervision module 65. The loss function of the entire analysis model can be as shown in formula (1):
In formula (1), L represents the model loss of the entire analysis model, L_seg represents the semantic segmentation loss function of the aggregation optimization module, L'_seg represents the semantic segmentation loss function of the semantic segmentation module, L_edge represents the edge detection loss function of the aggregation optimization module, L'_edge represents the edge detection loss function of the edge detection module, and L^(n) represents the supervision loss function of the n-th ASPP layer of the supervision module, where n ranges from 1 to N and N represents the number of ASPP layers of the supervision module.
Here, α and β represent the coefficients of the semantic segmentation part and the edge detection part, respectively, and can be set by the technician according to actual conditions to adjust the weights of the semantic segmentation part and the edge detection part in the entire analysis model.
The loss functions L_seg, L'_seg, L_edge, and L'_edge can each adopt loss functions well known in the art, and the present disclosure does not limit the specific choice of each loss function.
In a possible implementation, the parameter weights in the initial analysis model can be adjusted according to the model loss to determine the adjusted analysis model. For example, gradients of the model's parameter weights can be computed from the model loss by a back-propagation algorithm, and the parameter weights in the initial analysis model can be adjusted based on the gradients. If the model loss satisfies the training condition, for example a set number of training iterations is reached and/or a set convergence condition is met, the adjusted analysis model can be determined as the final analysis model.
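A schematic training step under these assumptions is sketched below; the model's output interface (the aux dictionary), the choice of cross-entropy and binary cross-entropy terms, the optimizer settings, and the iteration-count stopping rule are all placeholders rather than the patent's specification.

```python
import torch
import torch.nn.functional as F

def train(model, loader, alpha=1.0, beta=1.0, max_iters=80000):
    """Weighted multi-task loss as in formula (1), back-propagation, simple stopping rule."""
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    for it, (image, gt_seg, gt_edge) in enumerate(loader):
        seg, edge, aux = model(image)      # aux holds branch-level and ASPP outputs (assumed interface)
        loss = (alpha * (F.cross_entropy(seg, gt_seg)
                         + F.cross_entropy(aux["branch_seg"], gt_seg))
                + beta * (F.binary_cross_entropy_with_logits(edge, gt_edge)
                          + F.binary_cross_entropy_with_logits(aux["branch_edge"], gt_edge)
                          + sum(F.binary_cross_entropy_with_logits(e, gt_edge)
                                for e in aux["aspp_edges"])))
        opt.zero_grad()
        loss.backward()                    # compute gradients by back-propagation
        opt.step()                         # adjust the parameter weights
        if it >= max_iters:                # training condition: iteration budget reached
            break
    return model
```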
In this way, the training process of the analysis model is completed.
Fig. 7 is a flowchart of an image analysis method according to an exemplary embodiment. Fig. 8 is a schematic diagram of an image analysis method according to an exemplary embodiment. As shown in Fig. 7, in a possible implementation, after step S104, the method may further include:
In step S109, splitting processing is performed on the edge detection result to determine multiple split regions in the image to be analyzed;
In step S110, aggregation processing is performed on the multiple split regions according to the semantic segmentation result to determine an aggregation region of at least one analysis object in the image to be analyzed;
In step S111, the aggregation region of the at least one analysis object is associated with the semantic segmentation result to determine an analysis result for the at least one analysis object.
For example, after the analysis model performs semantic segmentation, edge detection, and optimization processing on the image to be analyzed at the same time and the semantic segmentation result and the edge detection result are determined, the analysis result of at least one analysis object in the image can be determined according to the semantic segmentation result and the edge detection result, thereby achieving the parsing of one or more analysis objects in the image.
As shown in Fig. 8, after the image to be analyzed 81 is processed by the analysis model 82 (Detection-Free Network, DFN) according to the present disclosure, the semantic segmentation result (segmentation map 821) and the edge detection result (edge map 822) of the image to be analyzed 81 can be obtained. In this case, one or more analysis objects in the image to be analyzed 81 can be split.
In a possible implementation, the edge detection result may include an edge map, and step S109 may include:
scanning the edge map in the horizontal direction and the vertical direction, respectively, to obtain multiple horizontal line segments and multiple vertical line segments in the non-background area, where the endpoints of each horizontal line segment and each vertical line segment are edge points in the edge map, the pixels covered by each horizontal line segment belong to the same split region, and the pixels covered by each vertical line segment belong to the same split region;
performing aggregation processing on the regions covered by the multiple horizontal line segments and the multiple vertical line segments in the non-background area, to obtain the multiple split regions in the image to be analyzed.
In a possible implementation, the edge map 822 can be scanned in the horizontal direction and the vertical direction, respectively (or the edge map 822 and the segmentation map 821 can be scanned), so that the edge map 822 is split by line segments, as shown by the horizontal-line split map 831 and the vertical-line split map 832 in Fig. 8. Line splitting yields horizontal and vertical line segments. Taking the horizontal scan as an example, each row of the image is scanned and the background area is automatically skipped; if an edge point is encountered during the scan, that edge point is taken as the starting point of a horizontal line segment, the next edge point encountered becomes the end point of the segment, and each segment is given a number. The line segments in the vertical direction can be obtained in the same way. All the line segments are regarded as one connected graph, and the region covered by each line segment is regarded as belonging to the same person (object); each person (object) can thus be aggregated to obtain a segmentation result for each person, that is, the multiple split regions shown in the split map 833 in Fig. 8.
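A minimal sketch of this line-scanning split is given below, assuming a binary edge map and a foreground mask as NumPy arrays; the union-find grouping stands in for the connected-graph aggregation described above, and the function name and interface are illustrative.

```python
import numpy as np

def split_regions(edge_map, foreground):
    """Scan rows and columns, cut line segments at edge points, and group overlapping
    segments into split regions (per-pixel labels, 0 = background)."""
    h, w = edge_map.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    next_id = 1
    # first pass scans rows of `labels`, second pass scans columns via the transposed view
    for em, fg, plane in ((edge_map, foreground, labels),
                          (edge_map.T, foreground.T, labels.T)):
        for r in range(plane.shape[0]):
            seg_id = None
            for c in range(plane.shape[1]):
                if not fg[r, c]:                 # skip the background area
                    seg_id = None
                    continue
                if em[r, c] or seg_id is None:   # an edge point starts a new line segment
                    seg_id = next_id
                    parent[seg_id] = seg_id
                    next_id += 1
                if plane[r, c]:                  # pixel already labelled by the other scan: merge
                    union(plane[r, c], seg_id)
                else:
                    plane[r, c] = seg_id
    # flatten the union-find so each pixel carries its region's representative id
    for r in range(h):
        for c in range(w):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels
```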
In a possible implementation, the multiple split regions shown in the split map 833 are strongly affected by edge false detections, and falsely detected edge points can produce many small regions. Therefore, in step S110, aggregation processing can be performed on the multiple split regions according to the semantic segmentation result to determine the aggregation region of at least one analysis object in the image to be analyzed.
In a possible implementation, step S110 may include at least one of the following steps:
when the size of a split region is greater than or equal to a first threshold and the split region contains multiple semantic segmentation results, determining that the split region is an aggregation region of a single analysis object;
when the size of a split region is less than a second threshold and the split region contains one semantic segmentation result, merging the split region into the aggregation region closest to the split region.
For example, for the multiple split regions, if the size of a split region is relatively large (greater than or equal to the first threshold) and the split region contains multiple semantic segmentation results, the split region can be regarded as an aggregation region of a single analysis object; if the size of a target split region is relatively small (less than the second threshold) and the target split region contains only one semantic segmentation result, the target split region is regarded as a false aggregation result and can be merged into the aggregation region closest to it. For example, the distance between the two closest points of any two split regions can be calculated, and the target split region can be merged into the closest split region. After all split regions have been processed, the optimized final aggregation result is obtained, as shown by 841 in Fig. 8. In this way, the precision of the aggregation result can be improved.
It should be understood that the specific values of the first threshold and the second threshold can be set according to actual conditions, and the merging of the target split region can be implemented using various region merging methods well known in the art; the present disclosure does not limit this.
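The two aggregation rules above can be sketched as follows, assuming the split labels from the previous sketch and a per-pixel semantic label map; the threshold values and the pixel-distance merge criterion are illustrative choices, not values given by the patent.

```python
import numpy as np

def aggregate_regions(labels, semantic, first_threshold=2000, second_threshold=200):
    """Keep large multi-part regions as object aggregation regions; merge tiny
    single-part regions into the nearest kept region."""
    kept, tiny = [], []
    for rid in np.unique(labels):
        if rid == 0:
            continue
        mask = labels == rid
        parts = np.unique(semantic[mask])
        parts = parts[parts != 0]                  # ignore the background class
        if mask.sum() >= first_threshold and len(parts) > 1:
            kept.append(rid)                       # aggregation region of a single analysis object
        elif mask.sum() < second_threshold and len(parts) == 1:
            tiny.append(rid)                       # likely a false aggregation result
    for rid in tiny:                               # merge each tiny region into the nearest kept one
        ys, xs = np.nonzero(labels == rid)
        best, best_d = None, np.inf
        for kid in kept:
            ky, kx = np.nonzero(labels == kid)
            d = np.min((ys[:, None] - ky[None, :]) ** 2 + (xs[:, None] - kx[None, :]) ** 2)
            if d < best_d:
                best, best_d = kid, d
        if best is not None:
            labels[labels == rid] = best
    return labels
```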
In a possible implementation, after the aggregation region of at least one analysis object in the image to be analyzed 81 is obtained, the aggregation region of the at least one analysis object can be associated with the semantic segmentation result, so that the semantic segmentation result is mapped onto each person (object), thereby determining the analysis result for the at least one analysis object. For example, 841 in Fig. 8 can be associated with the semantic segmentation map 821 to determine the analysis result shown in 851. Image 86 in Fig. 8 is the manually annotated reference analysis result of the image to be analyzed; it can be seen that 851 is close to 86, so accurate analysis of the image to be analyzed can be achieved.
According to the image analysis method of the embodiments of the present disclosure, semantic segmentation, edge detection, and aggregation optimization processing can be performed on the image to be analyzed at the same time by the analysis model to determine a high-precision semantic segmentation result and edge detection result, and the analysis result of at least one analysis object in the image can be determined according to the semantic segmentation result and the edge detection result, thereby achieving accurate analysis of one or more analysis objects in the image.
According to the image analysis method of the embodiments of the present disclosure, the human detection network is eliminated, and a joint network can be used to train the deep convolutional neural network to realize person parsing. The analysis result not only provides pixel-level localization of human body parts, but the clothing segmentation result also helps subsequent clothing attribute recognition. Moreover, the joint network trains semantic segmentation and edge detection at the same time; training the two tasks together helps improve the performance of each, and also simplifies the network training process.
Fig. 9 is a block diagram of an image analysis device according to an exemplary embodiment. As shown in Fig. 9, the device is implemented by an analysis model, and the analysis model includes a feature sharing module, a semantic segmentation module, and an edge detection module. The device includes:
a shared feature obtaining unit 901, configured to perform feature extraction on an image to be analyzed through the feature sharing module to obtain shared features, the shared features including feature information at multiple network depths produced by multiple network layers of the feature sharing module;
a preliminary result determining unit 902, configured to perform semantic segmentation processing and edge detection processing on the shared features through the semantic segmentation module and the edge detection module, respectively, to obtain a preliminary semantic segmentation result and a preliminary edge detection result of the image to be analyzed.
In a possible implementation, the analysis model further includes an aggregation optimization module, and the device further includes:
a feature layer combining unit, configured to combine the preliminary semantic segmentation result and the preliminary edge detection result into a feature layer in the aggregation optimization module;
an aggregation optimization unit, configured to aggregate the feature layer in the aggregation optimization module and perform optimization processing using multiple convolutional network layers of the aggregation optimization module, to determine a semantic segmentation result and an edge detection result for the image to be analyzed.
In a possible implementation, the feature sharing module includes a cascaded first convolution-pooling network, a second residual network, a third residual network, a fourth residual network, and a fifth residual network;
the shared features include the feature information output by the third residual network, the fourth residual network, and the fifth residual network, respectively.
In a possible implementation, the device further includes:
a splitting processing unit, configured to perform splitting processing on the edge detection result to determine multiple split regions in the image to be analyzed;
an aggregation region determining unit, configured to perform aggregation processing on the multiple split regions according to the semantic segmentation result to determine an aggregation region of at least one analysis object in the image to be analyzed;
an analysis result determining unit, configured to associate the aggregation region of the at least one analysis object with the semantic segmentation result to determine an analysis result for the at least one analysis object.
In a possible implementation, the edge detection result includes an edge map, and the splitting processing unit includes:
a scanning subunit, configured to scan the edge map in the horizontal direction and the vertical direction, respectively, to obtain multiple horizontal line segments and multiple vertical line segments in the non-background area, where the endpoints of each horizontal line segment and each vertical line segment are edge points in the edge map, the pixels covered by each horizontal line segment belong to the same split region, and the pixels covered by each vertical line segment belong to the same split region;
a split region determining subunit, configured to perform aggregation processing on the regions covered by the multiple horizontal line segments and the multiple vertical line segments in the non-background area, to obtain the multiple split regions in the image to be analyzed.
In a possible implementation, the aggregation region determining unit includes at least one of the following subunits:
a region determining subunit, configured to determine, when the size of a split region is greater than or equal to a first threshold and the split region contains multiple semantic segmentation results, that the split region is an aggregation region of a single analysis object;
a region merging subunit, configured to merge, when the size of a split region is less than a second threshold and the split region contains one semantic segmentation result, the split region into the aggregation region closest to the split region.
In a possible implementation, the device further includes:
a training analysis unit, configured to input a sample image into an initial analysis model for processing to obtain a training analysis result for the sample image, where the initial analysis model includes an initial feature sharing module, an initial semantic segmentation module, an initial edge detection module, an initial aggregation optimization module, and a supervision module;
a loss determining unit, configured to determine a model loss of the sample image according to an expected analysis result of the sample image and the training analysis result, the model loss of the sample image including a weighted sum of the model losses of the initial feature sharing module, the initial semantic segmentation module, the initial edge detection module, and the initial aggregation optimization module;
a model adjusting unit, configured to adjust parameter weights in the initial analysis model according to the model loss of the sample image to determine an adjusted analysis model;
a model determining unit, configured to determine, when the model loss of the sample image satisfies a training condition, the adjusted analysis model as the final analysis model.
In a possible implementation, the preliminary semantic segmentation result includes a semantic feature and a semantic segmentation map; and/or the preliminary edge detection result includes an edge feature and an edge detection map.
In a possible implementation, the feature sharing module, the semantic segmentation module, the edge detection module, and the aggregation optimization module each include a fully convolutional neural network.
With regard to the device in the above embodiment, the specific manner in which each unit performs operations has been described in detail in the embodiment of the related method, and will not be elaborated here.
Fig. 10 is a block diagram of an image analysis device 1900 according to an exemplary embodiment. For example, the device 1900 may be provided as a server. Referring to Fig. 10, the device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules, each of which corresponds to a set of instructions. In addition, the processing component 1922 is configured to execute the instructions so as to perform the above method.
The device 1900 may also include a power supply component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example the memory 1932 including computer program instructions, and the above computer program instructions can be executed by the processing component 1922 of the device 1900 to complete the above method.
The disclosure can be system, method and/or computer program product.Computer program product can include computer Readable storage medium storing program for executing, containing for make processor realize various aspects of the disclosure computer-readable program instructions.
Computer readable storage medium can keep and store to perform the tangible of the instruction that uses of equipment by instruction Equipment.Computer readable storage medium for example can be-- but be not limited to-- storage device electric, magnetic storage apparatus, optical storage Equipment, electromagnetism storage device, semiconductor memory apparatus or above-mentioned any appropriate combination.Computer readable storage medium More specific example (non exhaustive list) includes:Portable computer diskette, random access memory (RAM), read-only is deposited hard disk It is reservoir (ROM), erasable programmable read only memory (EPROM or flash memory), static RAM (SRAM), portable Compact disk read-only memory (CD-ROM), digital versatile disc (DVD), memory stick, floppy disk, mechanical coding equipment, for example thereon It is stored with the punch card of instruction or groove internal projection structure and above-mentioned any appropriate combination.Calculating used herein above Machine readable storage medium storing program for executing is not interpreted instantaneous signal in itself, and the electromagnetic wave of such as radio wave or other Free propagations leads to It crosses the electromagnetic wave (for example, the light pulse for passing through fiber optic cables) of waveguide or the propagation of other transmission mediums or is transmitted by electric wire Electric signal.
Computer-readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium, or to an external computer or external storage device via a network, for example the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
Computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as programmable logic circuitry, a field-programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions in order to implement aspects of the present disclosure.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium having the instructions stored therein comprises an article of manufacture including instructions which implement aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other device to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to multiple embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two consecutive blocks may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein has been chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. An image analysis method, characterized in that the method is implemented by an analysis model, the analysis model comprising: a feature sharing module, a semantic segmentation module, and an edge detection module,
the method comprising:
performing feature extraction on an image to be parsed through the feature sharing module to obtain shared features, the shared features comprising feature information at multiple network depths processed by multiple network layers of the feature sharing module;
performing semantic segmentation and edge detection on the shared features through the semantic segmentation module and the edge detection module, respectively, to obtain a preliminary semantic segmentation result and a preliminary edge detection result of the image to be parsed.
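The two-branch structure of claim 1 can be illustrated with a minimal sketch. The following code assumes PyTorch, a small stand-in shared trunk, and arbitrary channel widths and class counts; it is one illustrative reading of the claim, not the claimed implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParsingModel(nn.Module):
    def __init__(self, num_classes=20):
        super().__init__()
        # Stand-in for the feature sharing module: a small shared trunk.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(128, num_classes, 1)   # semantic segmentation branch
        self.edge_head = nn.Conv2d(128, 1, 1)            # edge detection branch

    def forward(self, image):
        feats = self.shared(image)                       # shared features
        size = image.shape[-2:]
        seg = F.interpolate(self.seg_head(feats), size=size,
                            mode="bilinear", align_corners=False)
        edge = torch.sigmoid(F.interpolate(self.edge_head(feats), size=size,
                                           mode="bilinear", align_corners=False))
        return seg, edge                                 # preliminary results

prelim_seg, prelim_edge = ParsingModel()(torch.randn(1, 3, 64, 64))
```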
2. The method according to claim 1, characterized in that the analysis model further comprises an optimization aggregation module,
wherein, after the step of obtaining the preliminary semantic segmentation result and the preliminary edge detection result of the image to be parsed, the method further comprises:
inputting the preliminary semantic segmentation result and the preliminary edge detection result into the optimization aggregation module, where they are combined as feature layers;
aggregating the feature layers in the optimization aggregation module and performing optimization processing using multiple convolutional network layers of the optimization aggregation module, to determine a semantic segmentation result and an edge detection result for the image to be parsed.
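The optimization aggregation step of claim 2 can likewise be sketched as a module that concatenates the preliminary results as feature layers and refines them with a few convolutional layers. The layer count, the channel widths, and the use of PyTorch are assumptions made for exposition only.

```python
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    def __init__(self, num_classes=20, width=64):
        super().__init__()
        # Input channels: segmentation scores plus one edge channel.
        self.refine = nn.Sequential(
            nn.Conv2d(num_classes + 1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_out = nn.Conv2d(width, num_classes, 1)  # refined segmentation
        self.edge_out = nn.Conv2d(width, 1, 1)           # refined edge map

    def forward(self, prelim_seg, prelim_edge):
        fused = torch.cat([prelim_seg, prelim_edge], dim=1)  # combine as feature layers
        fused = self.refine(fused)                            # convolutional refinement
        return self.seg_out(fused), torch.sigmoid(self.edge_out(fused))

final_seg, final_edge = FusionModule()(torch.randn(1, 20, 64, 64),
                                        torch.randn(1, 1, 64, 64))
```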
3. The method according to claim 1, characterized in that the feature sharing module comprises a cascaded first convolution-pooling network, a second residual network, a third residual network, a fourth residual network, and a fifth residual network;
the shared features comprising the feature information output by the third residual network, the fourth residual network, and the fifth residual network, respectively.
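As an illustration of the staged backbone in claim 3, the sketch below wires a standard torchvision ResNet-50 so that the outputs of its third, fourth, and fifth stages serve as the shared features. The choice of ResNet-50 is an assumption; only the choice of which stages are tapped follows the claim.

```python
import torch
from torchvision.models import resnet50

backbone = resnet50()  # randomly initialized; weights are irrelevant for the sketch

def shared_features(image):
    x = backbone.conv1(image)   # first convolution-pooling stage
    x = backbone.bn1(x)
    x = backbone.relu(x)
    x = backbone.maxpool(x)
    x = backbone.layer1(x)      # second (residual) stage
    f3 = backbone.layer2(x)     # third residual stage output
    f4 = backbone.layer3(f3)    # fourth residual stage output
    f5 = backbone.layer4(f4)    # fifth residual stage output
    return f3, f4, f5           # shared features at multiple network depths

f3, f4, f5 = shared_features(torch.randn(1, 3, 224, 224))
```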
4. The method according to claim 2, characterized in that, after the step of determining the semantic segmentation result and the edge detection result for the image to be parsed, the method further comprises:
performing division processing on the edge detection result to determine multiple segmented regions in the image to be parsed;
performing aggregation processing on the multiple segmented regions according to the semantic segmentation result, to determine an aggregation region of at least one parsing object in the image to be parsed;
associating the aggregation region of the at least one parsing object with the semantic segmentation result, to determine a parsing result for the at least one parsing object.
5. The method according to claim 4, characterized in that the edge detection result comprises an edge map,
wherein performing division processing on the edge detection result to determine the multiple segmented regions in the image to be parsed comprises:
scanning the edge map in the horizontal direction and the vertical direction, respectively, to obtain multiple horizontal line segments and multiple vertical line segments in non-background regions, wherein the endpoints of each horizontal line segment and each vertical line segment are edge points in the edge map, the region covered by each horizontal line segment belongs to the same segmented region, and the region covered by each vertical line segment belongs to the same segmented region;
performing aggregation processing on the regions covered by the multiple horizontal line segments and the multiple vertical line segments in the non-background regions, to obtain the multiple segmented regions in the image to be parsed.
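A minimal sketch of the scan-and-merge idea in claim 5 follows. It assumes the edge map and the non-background (foreground) mask are boolean NumPy arrays of the same shape, and uses union-find bookkeeping as an illustrative choice: adjacent non-edge foreground pixels on the same row or column are merged, so each run bounded by edge points ends up in one segmented region.

```python
import numpy as np

def regions_from_edge_map(edge_map, foreground):
    """edge_map, foreground: boolean arrays of the same (H, W) shape."""
    h, w = edge_map.shape
    parent = np.arange(h * w)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(a, b):
        parent[find(a)] = find(b)

    # Horizontal scan: merge adjacent non-edge foreground pixels on each row,
    # so the run between two edge points falls into one region.
    for y in range(h):
        for x in range(1, w):
            if (foreground[y, x] and foreground[y, x - 1]
                    and not edge_map[y, x] and not edge_map[y, x - 1]):
                union(y * w + x, y * w + x - 1)
    # Vertical scan: the same rule applied column by column.
    for x in range(w):
        for y in range(1, h):
            if (foreground[y, x] and foreground[y - 1, x]
                    and not edge_map[y, x] and not edge_map[y - 1, x]):
                union(y * w + x, (y - 1) * w + x)

    labels = np.fromiter((find(i) for i in range(h * w)), dtype=np.int64)
    return labels.reshape(h, w)   # pixels sharing a label form one segmented region

labels = regions_from_edge_map(np.zeros((8, 8), dtype=bool), np.ones((8, 8), dtype=bool))
```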
6. The method according to claim 4, characterized in that performing aggregation processing on the multiple segmented regions according to the semantic segmentation result to determine the aggregation region of at least one parsing object in the image to be parsed comprises at least one of the following steps:
when the size of a segmented region is greater than or equal to a first threshold and the segmented region contains multiple semantic segmentation results, determining that the segmented region is an aggregation region of a single parsing object;
when the size of a segmented region is less than a second threshold and the segmented region contains one semantic segmentation result, merging the segmented region into the aggregation region closest to the segmented region.
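The two aggregation rules of claim 6 can be sketched as below, assuming each segmented region is summarized by its pixel count, the set of semantic labels it contains, and a centroid; the thresholds and the nearest-region distance measure are illustrative assumptions, not values from the patent.

```python
def aggregate_regions(regions, large_threshold, small_threshold):
    """Each region is a dict with 'size' (pixel count), 'labels' (set of
    semantic labels found inside it), and 'centroid' ((row, col))."""
    object_regions, small_regions = [], []
    for r in regions:
        if r["size"] >= large_threshold and len(r["labels"]) > 1:
            object_regions.append(dict(r))   # rule 1: region of a single parsing object
        elif r["size"] < small_threshold and len(r["labels"]) == 1:
            small_regions.append(r)          # rule 2: merge into the nearest region later
    for r in small_regions:
        if not object_regions:
            break
        nearest = min(object_regions,
                      key=lambda o: (o["centroid"][0] - r["centroid"][0]) ** 2
                                    + (o["centroid"][1] - r["centroid"][1]) ** 2)
        nearest["size"] += r["size"]
        nearest["labels"] = nearest["labels"] | r["labels"]
    return object_regions
```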
7. The method according to claim 2, characterized in that the method further comprises:
inputting a sample image into an initial analysis model for processing, to obtain a training parsing result for the sample image, wherein the initial analysis model comprises an initial feature sharing module, an initial semantic segmentation module, an initial edge detection module, an initial optimization aggregation module, and a supervision module;
determining a model loss of the sample image according to an expected parsing result of the sample image and the training parsing result, the model loss of the sample image comprising a weighted sum of the model losses of the initial feature sharing module, the initial semantic segmentation module, the initial edge detection module, and the initial optimization aggregation module;
adjusting parameter weights in the initial analysis model according to the model loss of the sample image, to determine an adjusted analysis model;
when the model loss of the sample image satisfies a training condition, determining the adjusted analysis model as the final analysis model.
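The weighted multi-task loss of claim 7 can be sketched as follows, assuming cross-entropy for the segmentation outputs and binary cross-entropy for the edge outputs, with the branch losses standing in for the per-module losses; the weights and the loss choices are assumptions, not values from the patent.

```python
import torch.nn.functional as F

def parsing_model_loss(prelim_seg, prelim_edge, final_seg, final_edge,
                       seg_target, edge_target,
                       weights=(1.0, 1.0, 1.0, 1.0)):
    # prelim_edge and final_edge are expected to be probabilities in [0, 1];
    # seg_target is a long tensor of class indices, edge_target a float tensor.
    w1, w2, w3, w4 = weights
    return (w1 * F.cross_entropy(prelim_seg, seg_target)            # segmentation branch
            + w2 * F.binary_cross_entropy(prelim_edge, edge_target)  # edge branch
            + w3 * F.cross_entropy(final_seg, seg_target)            # fused segmentation
            + w4 * F.binary_cross_entropy(final_edge, edge_target))  # fused edge map
```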
8. An image analysis apparatus, characterized in that the apparatus is implemented by an analysis model, the analysis model comprising: a feature sharing module, a semantic segmentation module, and an edge detection module,
the apparatus comprising:
a shared feature obtaining unit, configured to perform feature extraction on an image to be parsed through the feature sharing module to obtain shared features, the shared features comprising feature information at multiple network depths processed by multiple network layers of the feature sharing module;
a preliminary result determination unit, configured to perform semantic segmentation and edge detection on the shared features through the semantic segmentation module and the edge detection module, respectively, to obtain a preliminary semantic segmentation result and a preliminary edge detection result of the image to be parsed.
9. An image analysis apparatus, characterized by comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method according to any one of claims 1 to 7.
10. A non-volatile computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 7.
CN201810085628.4A 2018-01-29 2018-01-29 Image analysis method and device Active CN108229504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810085628.4A CN108229504B (en) 2018-01-29 2018-01-29 Image analysis method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810085628.4A CN108229504B (en) 2018-01-29 2018-01-29 Image analysis method and device

Publications (2)

Publication Number Publication Date
CN108229504A true CN108229504A (en) 2018-06-29
CN108229504B CN108229504B (en) 2020-09-08

Family

ID=62669263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810085628.4A Active CN108229504B (en) 2018-01-29 2018-01-29 Image analysis method and device

Country Status (1)

Country Link
CN (1) CN108229504B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101650728A (en) * 2009-08-26 2010-02-17 北京邮电大学 Video high-level characteristic retrieval system and realization thereof
CN101706780A (en) * 2009-09-03 2010-05-12 北京交通大学 Image semantic retrieving method based on visual attention model
CN102360432A (en) * 2011-09-30 2012-02-22 北京航空航天大学 Semantic marking method for image scene based on geodesic transmission

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222522B (en) * 2018-11-23 2024-04-12 北京市商汤科技开发有限公司 Neural network training, road surface detection and intelligent driving control method and device
CN111222522A (en) * 2018-11-23 2020-06-02 北京市商汤科技开发有限公司 Neural network training, road surface detection and intelligent driving control method and device
CN111259686A (en) * 2018-11-30 2020-06-09 华为终端有限公司 Image analysis method and device
CN111259686B (en) * 2018-11-30 2024-04-09 华为终端有限公司 Image analysis method and device
WO2020119420A1 (en) * 2018-12-15 2020-06-18 深圳壹账通智能科技有限公司 Front-end page generation method and apparatus, computer device, and storage medium
CN110033055A (en) * 2019-04-19 2019-07-19 中共中央办公厅电子科技学院(北京电子科技学院) A kind of complex object image weight illumination method based on the parsing of semantic and material with synthesis
CN111915703B (en) * 2019-05-10 2023-05-09 浙江大学 Image generation method and device
CN111915703A (en) * 2019-05-10 2020-11-10 浙江大学 Image generation method and device
CN110674685A (en) * 2019-08-19 2020-01-10 电子科技大学 Human body analytic segmentation model and method based on edge information enhancement
CN110599514A (en) * 2019-09-23 2019-12-20 北京达佳互联信息技术有限公司 Image segmentation method and device, electronic equipment and storage medium
CN110599514B (en) * 2019-09-23 2022-10-04 北京达佳互联信息技术有限公司 Image segmentation method and device, electronic equipment and storage medium
CN110782468A (en) * 2019-10-25 2020-02-11 北京达佳互联信息技术有限公司 Training method and device of image segmentation model and image segmentation method and device
CN111445493B (en) * 2020-03-27 2024-04-12 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN111445493A (en) * 2020-03-27 2020-07-24 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN112000099A (en) * 2020-08-26 2020-11-27 大连理工大学 Collaborative robot flexible path planning method under dynamic environment
CN112053439A (en) * 2020-09-28 2020-12-08 腾讯科技(深圳)有限公司 Method, device and equipment for determining instance attribute information in image and storage medium
CN112053439B (en) * 2020-09-28 2022-11-25 腾讯科技(深圳)有限公司 Method, device and equipment for determining instance attribute information in image and storage medium
WO2023047167A1 (en) * 2021-09-21 2023-03-30 Sensetime International Pte. Ltd. Stacked object recognition method, apparatus and device, and computer storage medium
AU2021240229B1 (en) * 2021-09-21 2023-02-02 Sensetime International Pte. Ltd. Stacked object recognition method, apparatus and device, and computer storage medium
CN114328990A (en) * 2021-10-13 2022-04-12 腾讯科技(深圳)有限公司 Image integrity identification method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN108229504B (en) 2020-09-08

Similar Documents

Publication Publication Date Title
CN108229504A (en) Method for analyzing image and device
Estrada et al. Tree topology estimation
CN111488921B (en) Intelligent analysis system and method for panoramic digital pathological image
KR20220034210A (en) Object detection and instance segmentation of 3D point cloud based on deep learning
CN107679507A (en) Facial pores detecting system and method
CN111161311A (en) Visual multi-target tracking method and device based on deep learning
Rahaman et al. An efficient multilevel thresholding based satellite image segmentation approach using a new adaptive cuckoo search algorithm
CN109241871A (en) A kind of public domain stream of people's tracking based on video data
CN110232404A (en) A kind of recognition methods of industrial products surface blemish and device based on machine learning
CN109214298A (en) A kind of Asia women face value Rating Model method based on depth convolutional network
CN113421192B (en) Training method of object statistical model, and statistical method and device of target object
CN109344845A (en) A kind of feature matching method based on Triplet deep neural network structure
CN109685097A (en) A kind of image detecting method and device based on GAN
CN113011509B (en) Lung bronchus classification method and device, electronic equipment and storage medium
Cuadros et al. Segmentation of large images with complex networks
Fernandes et al. Grapevine winter pruning automation: On potential pruning points detection through 2D plant modeling using grapevine segmentation
Xin et al. Three‐dimensional reconstruction of Vitis vinifera (L.) cvs Pinot Noir and Merlot grape bunch frameworks using a restricted reconstruction grammar based on the stochastic L‐system
CN106510708A (en) Framework for Abnormality Detection in Multi-Contrast Brain Magnetic Resonance Data
Kok et al. Obscured tree branches segmentation and 3D reconstruction using deep learning and geometrical constraints
CN114187530A (en) Remote sensing image change detection method based on neural network structure search
Li et al. Auto-segmentation and time-dependent systematic analysis of mesoscale cellular structure in β-cells during insulin secretion
Valiente et al. Non-destructive image processing analysis for defect identification and maturity detection on avocado fruit
CN109934352B (en) Automatic evolution method of intelligent model
CN110210523A (en) A kind of model based on shape constraint diagram wears clothing image generating method and device
CN115471724A (en) Fine-grained fish epidemic disease identification fusion algorithm based on self-adaptive normalization

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant