CN110084828A - Image segmentation method, apparatus and terminal device - Google Patents

Image segmentation method, apparatus and terminal device

Info

Publication number
CN110084828A
Authority
CN
China
Prior art keywords
image
depth
training
color image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910355864.8A
Other languages
Chinese (zh)
Inventor
向晶
李骊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing HJIMI Technology Co Ltd
Original Assignee
Beijing HJIMI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing HJIMI Technology Co Ltd
Priority to CN201910355864.8A
Publication of CN110084828A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses an image segmentation method, apparatus and terminal device. A color image and a depth image of a target object in a scene are obtained; the minimum bounding rectangle corresponding to the target object is determined from the depth image; using the mapping between individual pixels of the depth image and the color image, the target color image corresponding to the minimum bounding rectangle is obtained from the color image; the target color image is then segmented with an image segmentation model to obtain the segmented image. The model comprises a depthwise separable convolution structure, an inverted residual and linear bottleneck convolution structure, and the residual structure of the Xception network. Because the model's computation is small and the image to be processed is small, the model runs fast and can perform real-time image segmentation on terminal devices with relatively weak CPU processing power, such as mobile phones. Segmenting the target color image lowers the difficulty of the segmentation; meanwhile, the model has strong expressive power and loses little detail, which improves the fineness of the segmentation and achieves fine matting.

Description

Image segmentation method, apparatus and terminal device
Technical field
The present application relates to the technical field of image processing, and in particular to an image segmentation method, apparatus and terminal device.
Background art
Image processing techniques are widely used in fields such as media, scientific research and industrial design. Image segmentation is one such technique; its purpose is to divide an image into several specific regions with distinctive properties and to extract the object of interest.
There are traditional matting methods based on image segmentation, such as the Closed Form method, the Walk Matting method, the Nonlocal Matting method and the Shared Sampling method, but these methods perform poorly when applied to video streams. Facebook invented Mask RCNN, a deep-learning-based network that can be used for matting, but the network is very large and requires a central processing unit (Central Processing Unit, CPU) with strong processing power in order to be applied in real time.
With the continuous development of mobile phone capabilities, more and more users prefer to perform matting on a small, portable device such as a mobile phone. For example, a user may wish to cut a person out of a video on the phone and combine the person with another picture to generate a new picture. It should be clear that the CPU processing power of a mobile phone is markedly lower than that of a personal computer (Personal Computer, PC), so the Mask RCNN network cannot be applied for real-time matting. Some lightweight models suitable for mobile phones do exist, such as MobileNet v2 provided by Google, but the contour fineness of the segmented object is poor. Therefore, achieving real-time and fine matting on terminal devices with weak CPU processing power, such as mobile phones, has become a severe technical problem faced in this field.
Summary of the invention
In view of the above problems, the present application provides an image segmentation method, apparatus and terminal device, so as to achieve real-time fine matting on devices with weak CPU processing power, such as mobile phones.
The embodiments of the present application disclose the following technical solutions:
In a first aspect, the present application provides an image segmentation method, comprising:
obtaining a color image and a depth image of a target object in a scene;
determining, from the depth image, the minimum bounding rectangle corresponding to the target object;
obtaining, from the color image, the target color image corresponding to the minimum bounding rectangle by using the mapping between individual pixels of the depth image and the color image;
segmenting the target color image with an image segmentation model to obtain a segmented image; the image segmentation model comprises a depthwise separable convolution structure, an inverted residual and linear bottleneck convolution structure, and the residual structure of the Xception network.
Optionally, before the target color image is segmented with the image segmentation model, the method further comprises:
determining the size and the aspect ratio of the image that the image segmentation model accepts for segmentation;
cropping the target color image according to the aspect ratio to obtain a cropped image;
shrinking the cropped image according to the size to obtain a reduced image;
segmenting the target color image with the image segmentation model then specifically comprises:
segmenting the reduced image with the image segmentation model.
Optionally, after the cropped image is shrunk according to the size to obtain the reduced image, the method further comprises:
normalizing the reduced image to obtain a normalized image;
segmenting the reduced image with the image segmentation model then specifically comprises:
segmenting the normalized image with the image segmentation model.
Optionally, after the segmented image is obtained, the method further comprises:
extending the segmented image until its size matches that of the color image, and obtaining the foreground and the background from the extended image.
Optionally, determining, from the depth image, the minimum bounding rectangle corresponding to the target object specifically comprises:
determining the face position of the target object in the depth image by a face detection method;
growing the depth region block of the target object from the face position by a region growing method;
obtaining the minimum bounding rectangle of the depth region block.
Optionally, before the target color image is segmented with the image segmentation model, the method further comprises:
obtaining a color training image and a depth training image of a first object in a first scene;
determining, from the depth training image, a first minimum bounding rectangle corresponding to the first object;
obtaining, from the color training image, the target color training image corresponding to the first minimum bounding rectangle;
obtaining a training set from the target color training image;
training a model to be trained with the training set and an optimization function to obtain the image segmentation model.
Optionally, the optimization function includes a root mean square propagation optimization function and a stochastic gradient descent optimization function; training the model to be trained with the training set and the optimization function specifically comprises:
training the model to be trained with the training set and the root mean square propagation optimization function;
judging whether the current intersection-over-union ratio is less than a preset first threshold, and if so, training the model to be trained with the training set and the stochastic gradient descent optimization function;
judging whether the current intersection-over-union ratio is less than a preset second threshold, and if so, ending the training.
In a second aspect, the present application provides an image segmentation apparatus, comprising:
a first image obtaining module, configured to obtain a color image and a depth image of a target object in a scene;
a first rectangle determining module, configured to determine, from the depth image, the minimum bounding rectangle corresponding to the target object;
a second image obtaining module, configured to obtain, from the color image, the target color image corresponding to the minimum bounding rectangle by using the mapping between individual pixels of the depth image and the color image;
an image segmentation module, configured to segment the target color image with an image segmentation model to obtain a segmented image; the image segmentation model comprises a depthwise separable convolution structure, an inverted residual and linear bottleneck convolution structure, and the residual structure of the Xception network.
Optionally, the apparatus further comprises:
a third image obtaining module, configured to obtain a color training image and a depth training image of a first object in a first scene;
a second rectangle determining module, configured to determine, from the depth training image, a first minimum bounding rectangle corresponding to the first object;
a fourth image obtaining module, configured to obtain, from the color training image, the target color training image corresponding to the first minimum bounding rectangle;
a training set obtaining module, configured to obtain a training set from the target color training image;
a model training module, configured to train a model to be trained with the training set and an optimization function to obtain the image segmentation model.
Optionally, the apparatus may further comprise:
a size and aspect ratio determining module, configured to determine the size and the aspect ratio of the image that the image segmentation model accepts for segmentation;
a cropping module, configured to crop the target color image according to the aspect ratio to obtain a cropped image;
a shrinking module, configured to shrink the cropped image according to the size to obtain a reduced image;
the image segmentation module then specifically comprises:
a first segmentation unit, configured to segment the reduced image with the image segmentation model.
Optionally, the apparatus may further comprise:
a normalization module, configured to normalize the reduced image to obtain a normalized image;
the image segmentation module then specifically comprises:
a second segmentation unit, configured to segment the normalized image with the image segmentation model.
Optionally, the first rectangle determining module may specifically comprise:
a face detection unit, configured to determine the face position of the target object in the depth image by a face detection method;
a region growing unit, configured to grow the depth region block of the target object from the face position by a region growing method;
a rectangle obtaining unit, configured to obtain the minimum bounding rectangle of the depth region block.
In a third aspect, the present application provides a terminal device, comprising a camera device and a processor;
the camera device is configured to capture a depth image and a color image and to send the depth image and the color image to the processor;
the processor is configured to run a computer program which, when run, performs the image segmentation method of the first aspect.
Compared with the prior art, the present application has the following advantages:
The image segmentation method provided by the present application performs image segmentation with the color image and the depth image of the target object in the scene and a pre-trained image segmentation model. Using the depth information in the depth image, the target object can be distinguished from other objects or the background, so the minimum bounding rectangle corresponding to the target object is first determined in the depth image. The depth image and the color image depict the same scene containing the target object, and their pixels map onto one another, so the target color image corresponding to the minimum bounding rectangle can be obtained from the color image. To segment the target object finely out of the target color image, the target color image is segmented with the image segmentation model. The minimum bounding rectangle is smaller than the depth image and the color image, so the target color image is smaller than the originally obtained color image; the amount of data the model has to process is reduced, and the segmentation efficiency of the model is effectively improved. In addition, the depthwise separable convolution structure in the image segmentation model effectively reduces the model's computation. Because the computation is small and the image to be processed is small, the model runs fast, and with this method real-time image segmentation can be achieved on terminal devices with weak CPU processing power, such as mobile phones.
Furthermore, since the main subject within the minimum bounding rectangle is the target object, the main subject of the target color image obtained from the minimum bounding rectangle is also the target object; compared with processing the whole color image, segmenting the target color image with the model correspondingly reduces the segmentation difficulty. Meanwhile, the inverted residual and linear bottleneck convolution structure in the image segmentation model guarantees the expressive power of the model, and the residual structure of the Xception network reduces the loss of detail. Because the target color image lowers the processing difficulty of the model, the expressive power of the model is strong, and little detail is lost, the method improves the fineness of the segmentation and achieves fine matting.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an image segmentation method provided by an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an image segmentation model provided by an embodiment of the present application;
Fig. 3 is a flowchart of another image segmentation method provided by an embodiment of the present application;
Fig. 4 is a model training flowchart provided by an embodiment of the present application;
Fig. 5 is a flowchart of yet another image segmentation method provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an image segmentation apparatus provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of another image segmentation apparatus provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a terminal device provided by an embodiment of the present application;
Fig. 9 is a schematic structural diagram of another terminal device provided by an embodiment of the present application.
Detailed description of embodiments
The inventor found through research that some current image segmentation methods are difficult to apply on terminal devices with weak CPU processing power, such as mobile phones, while achieving real-time and fine matting. Some methods can only guarantee fine matting but cannot perform matting in real time on terminal devices with weak CPU processing power such as mobile phones; other methods can only guarantee real-time matting on the terminal device but cannot guarantee the fineness of the matting. Therefore, current image segmentation methods can hardly meet users' matting demands.
Through research, the inventor provides an image segmentation method, apparatus and terminal device. A depth image, a color image and an image segmentation model are used to guarantee the fineness of the matting operation; at the same time, the image segmentation method provided by the embodiments of the present application can also be applied to terminal devices with weak CPU processing power, such as mobile phones, to achieve fine image segmentation.
In order to make the solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
First embodiment
Referring to Fig. 1, which is a flowchart of an image segmentation method provided by an embodiment of the present application.
As shown in Fig. 1, the image segmentation method provided by this embodiment comprises:
Step 101: obtaining a color image and a depth image of a target object in a scene.
In this embodiment, the specific type of the target object is not limited. As an example, the target object may be a person or an animal.
In practical applications, as one possible implementation, the color image and the depth image may be collected synchronously by the same camera device. The color image contains color information; the depth image contains depth information. The color image and the depth image have the same lateral size and the same longitudinal size.
In the color image, the color information of the target object may be close to that of other objects or the background, so obtaining the target object by segmenting the color image alone is rather difficult. The depth image, on the other hand, reflects the depth information of each object in the scene; if the depth of the target object differs from that of the other objects, the target object can conveniently be distinguished from other objects or the background. However, the contour of the target object determined from the depth image is usually not fine enough, so the fine contour of the target object still needs to be extracted to achieve fine matting. To this end, after the color image and the depth image are obtained, this embodiment continues with steps 102-103.
Step 102: determining, from the depth image, the minimum bounding rectangle corresponding to the target object.
It can be understood that the target object may occupy only a small part of the depth image. The minimum bounding rectangle corresponding to the target object can be determined from the depth image according to the depth information; the minimum bounding rectangle contains the target object determined in the depth image from the depth information. The lateral size of the minimum bounding rectangle is less than or equal to the lateral size of the depth image, and its longitudinal size is less than or equal to the longitudinal size of the depth image.
For example, with the lower-left corner of the depth image as the origin, suppose the lateral extent occupied by the target object is a~b and the longitudinal extent is c~d, where a < b and c < d. The four vertices of the minimum bounding rectangle determined in this step are then (a, c), (b, c), (a, d) and (b, d). The difference between b and a is less than or equal to the lateral size of the depth image, and the difference between d and c is less than or equal to the longitudinal size of the depth image.
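As an illustrative, non-limiting sketch (not part of the original disclosure), the coordinates above can be derived from a depth image held in a NumPy array, assuming the target object's depth interval is already known; note that NumPy indexes rows from the top, whereas the example above takes the lower-left corner as the origin, so the vertical coordinates differ only by a flip. The function and variable names are hypothetical.

    import numpy as np

    def min_bounding_rect(depth, d_near, d_far):
        # Pixels whose depth falls inside the target's (assumed known) depth interval.
        mask = (depth > d_near) & (depth < d_far)
        ys, xs = np.nonzero(mask)               # row (vertical) and column (lateral) indices
        if xs.size == 0:
            return None                         # no target pixels found
        a, b = int(xs.min()), int(xs.max())     # lateral extent a ~ b
        c, d = int(ys.min()), int(ys.max())     # longitudinal extent c ~ d
        return a, b, c, d                       # vertices: (a, c), (b, c), (a, d), (b, d)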
Step 103: obtaining, from the color image, the target color image corresponding to the minimum bounding rectangle by using the mapping between individual pixels of the depth image and the color image.
It can be understood that the depth image and the color image have the same size and their pixels map onto one another. For example, with the lower-left corner also taken as the origin of the color image, the pixel at coordinates (a, c) in the depth image maps to the pixel at coordinates (a, c) in the color image, and the pixel at (a, d) in the depth image maps to the pixel at (a, d) in the color image. Therefore, once the minimum bounding rectangle corresponding to the target object in the depth image is known, the target color image corresponding to the minimum bounding rectangle can be obtained from the color image according to the pixel coordinates of the rectangle's vertices and the pixel mapping just described. It can be understood that the target color image contains the target object, and that its size is the same as that of the minimum bounding rectangle.
As an example, if the four vertices of the minimum bounding rectangle determined in step 102 are (a, c), (b, c), (a, d) and (b, d), the lateral size of the target color image is b-a and its longitudinal size is d-c.
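Under the same assumptions (registered color and depth images of equal size stored as NumPy arrays, plus the hypothetical helper min_bounding_rect from the sketch above), the crop of step 103 reduces to plain array slicing; this is a sketch, not the claimed implementation.

    # The two images are registered pixel for pixel, so the rectangle found in the
    # depth image can be applied directly to the color image.
    a, b, c, d = min_bounding_rect(depth, d_near, d_far)
    target_color = color[c:d, a:b]   # lateral size b - a, longitudinal size d - c, as in the example above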
By performing steps 102-103, the target color image containing the target object is obtained from the color image. Obviously, compared with the color image, the target color image contains less information that interferes with segmentation, and its size is reduced. Therefore, segmenting the target color image is easier and requires less data to be processed than segmenting the color image. Below, the target color image is segmented with the image segmentation model to obtain a target object with a fine segmentation contour.
Step 104: the target color image being split using Image Segmentation Model, image after being divided.
This step mainly utilizes Image Segmentation Model, it should be noted that the model is that training obtains in advance.Image segmentation The primary structure of model has: depth separates convolution (Depthwise Separable Convolution, DSC) structure, reversely Residual error and linear bottleneck convolution (Inverted Residual and Linear Bottlenecks, IRALB) structure, and The residual error structure of Xception network.
Referring to fig. 2, which is a kind of structural schematic diagram of Image Segmentation Model provided by the embodiments of the present application.Below with reference to Image Segmentation Model provided by the embodiments of the present application is described in Fig. 2.In Fig. 2, Conv indicates Standard convolution block, IRALB table Show reversed residual error and linear bottleneck convolutional coding structure, DSC indicates that depth separates convolutional coding structure, and 1 × 1Conv indicates Xception net In network convolution kernel having a size of 1 × 1 and step-length be 2 residual error structure.In Fig. 2, figureIndicate concat operation.It is encoding On direction, Image Segmentation Model mainly uses the residual error structure of IRALB structure and Xception network.On decoding direction, figure As parted pattern mainly uses DSC structure.
In practical applications, Standard convolution is resolved into depth convolution and 1 × 1 convolution by DSC structure.It is specific next It says, depth convolution is filtered for each single input channel application single filter, and then point-by-point convolution applies 1 × 1 volume Product operates the output to combine all depth convolution.This decomposition can effectively largely reduce calculation amount compared to Standard convolution And the size of model.For ease of understanding, citing is illustrated below:
Assuming that the size of input feature vector figure F is Df*Df*M, the size of output characteristic pattern G is Dg*Dg*N, convolution kernel Long and wide respectively Dk and Dk, step-length 1, then input feature vector figure F calculates output characteristic pattern G, the calculation amount of Standard convolution is Dk*Dk*M*N*Df*Df, and the calculation amount of DSC is Dk*Dk*Df*Df*M+1*1*Df*Df*M*N.Usual N be set as 100 or 100 or more numerical value.If Dk=3, it is clear that the order of magnitude of N is much larger than Dk, then DSC, compared to Standard convolution, calculation amount subtracts Nearly 8 times of major general.And studies have shown that DSC accuracy decline is seldom.The characteristics of based on DSC and advantage, the application is using DSC as base Plinth convolution mode, DSC is for extracting characteristics of image.
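The comparison above can be checked with a few lines of arithmetic. The concrete numbers below (Dk = 3, M = 32, N = 128, Df = 56) are illustrative values only and are not taken from the disclosure.

    Dk, M, N, Df = 3, 32, 128, 56                   # illustrative sizes only

    standard = Dk * Dk * M * N * Df * Df            # standard convolution cost
    dsc = Dk * Dk * Df * Df * M + Df * Df * M * N   # depthwise + 1x1 pointwise cost

    print(standard / dsc)   # = 1 / (1/N + 1/Dk**2), about 8.4 with these values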
In the image segmentation model, before the DSC, the IRALB first changes the number of channels with a 1×1 convolution to raise the dimension, the depthwise convolution is then applied, and a further 1×1 convolution compresses the channels again. In other words, the features are first expanded, and the effective features are selected and output after the convolution, which improves the segmentation precision of the model. In addition, the IRALB guarantees the expressive power of the model.
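As a minimal PyTorch-style sketch of the expand, depthwise-convolve and project pattern just described (an inverted residual block with a linear bottleneck); the expansion factor, channel counts and activation choice are placeholders, and this is not the specific network claimed here.

    import torch.nn as nn

    class InvertedResidual(nn.Module):
        def __init__(self, in_ch, out_ch, expand=6, stride=1):
            super().__init__()
            mid = in_ch * expand
            self.use_skip = (stride == 1 and in_ch == out_ch)
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, mid, 1, bias=False),          # 1x1 expansion (raise dimension)
                nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
                nn.Conv2d(mid, mid, 3, stride, 1,
                          groups=mid, bias=False),             # 3x3 depthwise convolution
                nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
                nn.Conv2d(mid, out_ch, 1, bias=False),         # 1x1 projection (compress channels)
                nn.BatchNorm2d(out_ch),                        # no activation: linear bottleneck
            )

        def forward(self, x):
            y = self.block(x)
            return x + y if self.use_skip else y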
The 1×1 Conv structure in the image segmentation model ensures that, as the network deepens and the semantic information becomes more and more prominent, little detail information is lost.
The above is the image segmentation method provided by the embodiments of the present application. The method performs image segmentation with the color image and the depth image of the target object in the scene and a pre-trained image segmentation model. In the depth image, the depth information of the target object makes it possible to distinguish the target object from other objects or the background, so the minimum bounding rectangle corresponding to the target object is first determined in the depth image. The depth image and the color image depict the same scene containing the target object, and their pixels map onto one another, so the target color image corresponding to the minimum bounding rectangle can be obtained from the color image. To segment the target object finely out of the target color image, the target color image is segmented with the image segmentation model. The minimum bounding rectangle is smaller than the depth image and the color image, so the target color image is smaller than the originally obtained color image; the amount of data the model has to process is reduced, and the segmentation efficiency of the model is effectively improved. In addition, the depthwise separable convolution structure in the image segmentation model effectively reduces the model's computation. Because the computation is small and the image to be processed is small, the model runs fast, and with this method real-time image segmentation can be achieved on terminal devices with weak CPU processing power, such as mobile phones.
Furthermore, since the main subject within the minimum bounding rectangle is the target object, the main subject of the target color image obtained from the minimum bounding rectangle is also the target object; compared with processing the whole color image, segmenting the target color image with the model correspondingly reduces the segmentation difficulty. Meanwhile, the inverted residual and linear bottleneck convolution structure in the image segmentation model guarantees the expressive power of the model, and the residual structure of the Xception network reduces the loss of detail. Because the target color image lowers the processing difficulty of the model, the expressive power of the model is strong, and little detail is lost, the method improves the fineness of the segmentation and achieves fine matting.
To facilitate understanding of the image segmentation model, its training process is described below with reference to Fig. 3. Fig. 3 is a flowchart of another image segmentation method provided by an embodiment of the present application.
In the image segmentation method provided by the embodiments of the present application, before step 104 is performed, that is, before the target color image is segmented with the image segmentation model, the method may further comprise:
Step M1: obtaining a color training image and a depth training image of a first object in a first scene.
Here, the first object may be an object of the same type as the aforementioned target object, or an object of a different type. The first scene may be the same as or different from the scene in step 101. The color training image and the depth training image of the first object in the first scene are mainly used for model training.
Step M2: determining, from the depth training image, the first minimum bounding rectangle corresponding to the first object.
The way the first minimum bounding rectangle is determined from the depth training image is the same as the way the minimum bounding rectangle is determined from the depth image in step 102, and is not repeated here.
Step M3: obtaining, from the color training image, the target color training image corresponding to the first minimum bounding rectangle.
The way the target color training image corresponding to the first minimum bounding rectangle is obtained from the color training image is the same as the way the target color image is obtained from the color image in step 103, and is not repeated here.
Step M4: obtaining a training set from the target color training image.
It should be noted that the training set used to train the model to be trained in this embodiment can take several possible forms.
As one possible form, if after training the model is used to segment the target color image directly, step M4 uses the target color training images directly as the training set. It can be understood that the training set contains multiple target color training images obtained in the manner of steps M1-M3.
It can be understood that, in practical applications and depending on requirements, the trained image segmentation model may need to segment images that have been preprocessed in a certain way, for example by normalization or cropping; the specific preprocessing is not limited in this embodiment. As another possible form, if after training the model is used to segment images obtained by preprocessing the target color image in a certain way, step M4 uses, as the training set, the images obtained by preprocessing the target color training images in the same way. It can be understood that the training set then contains several images obtained by applying that preprocessing to the target color training images obtained in the manner of steps M1-M3.
Step M5: training the model to be trained with the training set and an optimization function to obtain the image segmentation model.
In practical applications, the specific optimization function used in step M5 is not limited; as examples, it may be a root mean square propagation (Root Mean Square Propagation, RMSProp) optimization function, a stochastic gradient descent (Stochastic Gradient Descent, SGD) optimization function, and so on.
For the specific training, step M5 may be performed in the manner shown in Fig. 4, which is a model training flowchart provided by an embodiment of the present application. In the training process shown in Fig. 4, the optimization functions used are the RMSProp optimization function and the SGD optimization function.
M501: training the model to be trained with the training set and the RMSProp optimization function.
It should be noted that the training objective of the model to be trained is to be able to perform image segmentation on target color images. The model to be trained also comprises the depthwise separable convolution structure, the inverted residual and linear bottleneck convolution structure, and the residual structure of the Xception network. The specific parameters of these structures are adjusted and optimized continuously through the training of M501-M504; when training ends, the parameters of each structure in the model are fixed.
M502: judging whether the current intersection-over-union ratio (Intersection over Union, IoU) is less than a preset first threshold; if so, performing M503, otherwise repeating M501.
M503: training the model to be trained with the training set and the stochastic gradient descent optimization function.
M504: judging whether the current IoU is less than a preset second threshold; if so, ending the training, otherwise repeating M503.
In this embodiment, the specific values of the preset first threshold and the preset second threshold are not limited; they may be set according to experience.
As one possible implementation, the preset second threshold is less than the preset first threshold. M501-M502 are the initial training operation, and M503-M504 are the fine-tuning operation. When the monitored IoU quantity falls below the preset first threshold, the initial training is relatively stable and the IoU changes little; at this point the fine-tuning stage can begin, to further guarantee the fineness and stability of the network's segmentation result. When the monitored IoU quantity falls below the preset second threshold, the fine-tuning has stabilized, the model training is finished, and the image segmentation model to be applied to real image segmentation is obtained.
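The two-stage schedule of Fig. 4 can be sketched as follows (PyTorch-style, purely illustrative): model, loader, loss_fn and iou_metric are placeholders, the learning rates and threshold values are arbitrary examples, and the stopping criterion simply mirrors M502/M504 as written, i.e. the optimizer is switched, and training then ends, once the monitored IoU quantity falls below the respective preset threshold.

    import torch

    def run_one_epoch(model, loader, loss_fn, opt):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()

    def train_two_stage(model, loader, loss_fn, iou_metric,
                        first_threshold=0.10, second_threshold=0.05):
        # Stage 1 (M501-M502): RMSProp until the monitored quantity drops below the first threshold.
        opt = torch.optim.RMSprop(model.parameters(), lr=1e-3)
        while iou_metric(model) >= first_threshold:
            run_one_epoch(model, loader, loss_fn, opt)
        # Stage 2 (M503-M504): fine-tune with SGD until it drops below the (smaller) second threshold.
        opt = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
        while iou_metric(model) >= second_threshold:
            run_one_epoch(model, loader, loss_fn, opt)
        return model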
The above steps M1-M5 are the specific process of obtaining the training set and training the image segmentation model in the embodiments of the present application. By performing the operations described in steps M1-M5, the trained image segmentation model is better adapted and matched to the practical application requirements of the image segmentation model in the foregoing method embodiments.
It can be understood that, in practical applications, to improve the real-time performance of the segmentation, the image segmentation model may impose a size requirement on the images it processes: the size of the image the model accepts for segmentation may be smaller than the size of the target color image, and the aspect ratio of the image the model accepts may not match the aspect ratio of the target color image. For these situations, the present application further provides another image segmentation method, in which the target color image is adjusted accordingly and then segmented with the image segmentation model. The specific implementation of this method is described below with reference to the embodiment and the drawings.
Referring to Fig. 5, which is a flowchart of yet another image segmentation method provided by an embodiment of the present application.
As shown in Fig. 5, the image segmentation method provided by this embodiment comprises:
Step 501: obtaining a color image and a depth image of a target object in a scene.
The implementation of step 501 in this embodiment is the same as that of step 101 in the foregoing embodiment; refer to the foregoing embodiment for the related description, which is not repeated here.
Step 502: determining, from the depth image, the minimum bounding rectangle corresponding to the target object.
Taking a target object that has a face as an example, one specific implementation of step 502 is described below with reference to steps 5021-5023. As an example, the target object is a person.
Step 5021: determining the face position of the target object in the depth image by a face detection method.
In practical applications there are many face detection methods that can detect and recognize faces in an image, so the specific face detection method used in step 5021 is not limited here.
Step 5022: growing the depth region block of the target object from the face position by a region growing method.
Region growing is a fairly mature technique for those skilled in the art, so it is not described in detail here.
From steps 5021-5022 it can be seen that, in this embodiment, to determine the minimum bounding rectangle corresponding to the target object, the approximate region of the depth image in which the target object lies, namely the depth region block, is determined by face detection followed by region growing; the minimum bounding rectangle is then determined from the depth region block.
Step 5023: obtaining the minimum bounding rectangle of the depth region block.
Since the depth region block contains the target object in the depth image, this embodiment takes the minimum bounding rectangle of the depth region block as the minimum bounding rectangle corresponding to the target object.
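An illustrative sketch of steps 5022-5023 (face detection itself is delegated to any available detector): a simple 4-connected region growing on the depth array from the detected face position, with a hypothetical depth-difference tolerance, followed by the minimum bounding rectangle of the grown block. The names and the tolerance value are placeholders, not requirements of the disclosure.

    from collections import deque
    import numpy as np

    def grow_depth_region(depth, seed, tol=50):
        # Grow from the face position `seed` = (row, col), adding 4-connected
        # neighbours whose depth differs from the current pixel by less than `tol`.
        h, w = depth.shape
        mask = np.zeros((h, w), dtype=bool)
        mask[seed] = True
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                        and abs(int(depth[nr, nc]) - int(depth[r, c])) < tol:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
        ys, xs = np.nonzero(mask)            # minimum bounding rectangle of the depth region block
        return int(xs.min()), int(xs.max()), int(ys.min()), int(ys.max())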
Step 503: obtaining, from the color image, the target color image corresponding to the minimum bounding rectangle by using the mapping between individual pixels of the depth image and the color image.
The implementation of step 503 in this embodiment is the same as that of step 103 in the foregoing embodiment; refer to the foregoing embodiment for the related description, which is not repeated here.
Considering that the size of the image the image segmentation model accepts may not match the size of the target color image, and that the aspect ratio it accepts may not match the aspect ratio of the target color image, in this embodiment the following steps 504-506 are performed on the target color image.
Step 504: determining the size and the aspect ratio of the image that the image segmentation model accepts for segmentation.
Step 505: cropping the target color image according to the aspect ratio to obtain a cropped image.
When the aspect ratio of the target color image does not match the aspect ratio of the image the image segmentation model accepts, the target color image can be cropped so that its aspect ratio matches the accepted aspect ratio. For example, if the aspect ratio of the target color image is 4:3 and the image the image segmentation model accepts has an aspect ratio of 16:9, the aspect ratio of the cropped image obtained by this step is 16:9.
Step 506: shrinking the cropped image according to the size to obtain a reduced image.
Although the aspect ratio of the cropped image meets the model's requirement, its size may still be larger than the size of the image the model accepts. In that case, the cropped image is further shrunk: the lateral size of the reduced image matches the lateral size of the accepted image, and the longitudinal size of the reduced image matches the longitudinal size of the accepted image.
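A sketch of steps 505-506, assuming OpenCV is available and taking a 256 x 144 (16:9) model input purely as an example; the concrete size and the use of a centre crop are assumptions, not requirements of the disclosure.

    import cv2

    def crop_and_shrink(img, out_w=256, out_h=144):
        # Centre-crop `img` to the model's aspect ratio, then shrink to its input size.
        h, w = img.shape[:2]
        target_ratio = out_w / out_h
        if w / h > target_ratio:                       # too wide: trim left and right
            new_w = int(h * target_ratio)
            x0 = (w - new_w) // 2
            img = img[:, x0:x0 + new_w]
        else:                                          # too tall: trim top and bottom
            new_h = int(w / target_ratio)
            y0 = (h - new_h) // 2
            img = img[y0:y0 + new_h, :]
        return cv2.resize(img, (out_w, out_h), interpolation=cv2.INTER_AREA)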
It can be understood that the reduced image obtained after cropping and shrinking the target color image still contains color information. In practical applications, the pixel values of the reduced image may range from 0 to 255, and the value ranges of different reduced images may differ. To guarantee the consistency of the pixel value ranges of the images processed by the image segmentation model, this embodiment may further perform the following step 507.
Step 507: normalizing the reduced image to obtain a normalized image.
As one example, the pixel values of the normalized image may lie in [-1, 1]; as another example, they may lie in [0, 1].
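The normalization of step 507 is a one-line rescaling; both target ranges mentioned above are shown, assuming an 8-bit reduced image held in a NumPy array (variable names are illustrative).

    img_01  = reduced.astype("float32") / 255.0          # pixel values mapped to [0, 1]
    img_pm1 = reduced.astype("float32") / 127.5 - 1.0    # pixel values mapped to [-1, 1]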
It should be noted that if step 507 is not performed in this embodiment, the reduced image is subsequently segmented with the image segmentation model to obtain the segmented image.
If step 507 is performed, the subsequent process is as in step 508 below.
Step 508: segmenting the normalized image with the image segmentation model to obtain the segmented image.
It should be noted that two possible preprocessing schemes for the target color image are described in this embodiment: before the image segmentation model performs segmentation, either preprocessing scheme 1), the cropping and shrinking described in steps 504-506, or preprocessing scheme 2), the cropping, shrinking and normalization described in steps 504-507, is used. For these two possible preprocessing schemes, the training sets used to train the model in advance also differ. Steps M1-M5 above describe the training process of the image segmentation model, where M4 specifically obtains the training set from the target color training images.
If, when the image segmentation model is applied in this embodiment, the image to be segmented is the reduced image obtained by applying preprocessing scheme 1) to the target color image, then M4 specifically uses, as the training set, the images obtained by applying the operations of preprocessing scheme 1) to the target color training images. The trained image segmentation model is then adapted to the image to be processed, that is, the reduced image.
If, when the image segmentation model is applied in this embodiment, the image to be segmented is the normalized image obtained by applying preprocessing scheme 2) to the target color image, then M4 specifically uses, as the training set, the images obtained by applying the operations of preprocessing scheme 2) to the target color training images. The trained image segmentation model is then adapted to the image to be processed, that is, the normalized image.
Step 509: extending the segmented image until its size matches that of the color image, and obtaining the foreground and the background from the extended image.
It can be understood that the size of the segmented image is smaller than the size of the color image obtained in step 501, so the finely contoured target object shown in the segmented image may not satisfy the user's needs. For example, the user may wish to add the foreground X (i.e., the target object) of the segmented image onto another image Y whose size matches the color image; the foreground X would then be too small relative to image Y and would not fit. Of course, the user may have other application needs, which are not limited here.
For these situations, step 509 can be performed to extend the segmented image. During the extension, the segmented image can first be enlarged, and the aspect ratio of the enlarged image can then be restored to the aspect ratio of the color image by "padding with 0", that is, filling the region outside the enlarged image with pixels whose value is 0 to form the extended image. The size of the extended image matches the size of the color image. In the extended image, the size of the foreground X' (i.e., the target object) can meet the user's application needs.
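A sketch of the "padding with 0" extension of step 509, assuming OpenCV and NumPy: the segmented image is first enlarged without distortion and then placed on a zero-valued canvas of the original color image's size (placed at one corner here for simplicity; the exact placement is not specified above). Variable names are illustrative.

    import cv2
    import numpy as np

    def extend_to_original(seg, orig_h, orig_w):
        h, w = seg.shape[:2]
        scale = min(orig_w / w, orig_h / h)             # enlarge while keeping the aspect ratio
        new_w, new_h = int(w * scale), int(h * scale)
        enlarged = cv2.resize(seg, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
        canvas = np.zeros((orig_h, orig_w) + seg.shape[2:], dtype=seg.dtype)
        canvas[:new_h, :new_w] = enlarged               # remaining pixels stay 0 ("padding with 0")
        return canvas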
Based on the foregoing embodiments, the present application correspondingly further provides an image segmentation apparatus. Its specific implementation is described below with reference to the embodiment and the drawings.
Referring to Fig. 6, which is a schematic structural diagram of an image segmentation apparatus provided by an embodiment of the present application.
As shown in Fig. 6, the image segmentation apparatus provided by this embodiment comprises:
a first image obtaining module 601, configured to obtain a color image and a depth image of a target object in a scene;
a first rectangle determining module 602, configured to determine, from the depth image, the minimum bounding rectangle corresponding to the target object;
a second image obtaining module 603, configured to obtain, from the color image, the target color image corresponding to the minimum bounding rectangle by using the mapping between individual pixels of the depth image and the color image;
an image segmentation module 604, configured to segment the target color image with an image segmentation model to obtain a segmented image.
The image segmentation model comprises a depthwise separable convolution structure, an inverted residual and linear bottleneck convolution structure, and the residual structure of the Xception network.
The image segmentation apparatus provided by the present application performs image segmentation with the color image and the depth image of the target object in the scene and a pre-trained image segmentation model. Using the depth information in the depth image, the target object can be distinguished from other objects or the background, so the minimum bounding rectangle corresponding to the target object is first determined in the depth image. The depth image and the color image depict the same scene containing the target object, and their pixels map onto one another, so the target color image corresponding to the minimum bounding rectangle can be obtained from the color image. To segment the target object finely out of the target color image, the apparatus segments the target color image with the image segmentation model. The minimum bounding rectangle is smaller than the depth image and the color image, so the target color image is smaller than the originally obtained color image; the amount of data the model has to process is reduced, and the segmentation efficiency of the model is effectively improved. In addition, the depthwise separable convolution structure in the image segmentation model effectively reduces the model's computation. Because the computation is small and the image to be processed is small, the model runs fast, and with this apparatus real-time image segmentation can be achieved on terminal devices with weak CPU processing power, such as mobile phones.
Furthermore, since the main subject within the minimum bounding rectangle is the target object, the main subject of the target color image obtained from the minimum bounding rectangle is also the target object; compared with processing the whole color image, segmenting the target color image with the model correspondingly reduces the segmentation difficulty. Meanwhile, the inverted residual and linear bottleneck convolution structure in the image segmentation model guarantees the expressive power of the model, and the residual structure of the Xception network reduces the loss of detail. Because the target color image lowers the processing difficulty of the model, the expressive power of the model is strong, and little detail is lost, the apparatus improves the fineness of the segmentation and achieves fine matting.
This embodiment further provides another image segmentation apparatus, shown in Fig. 7. As can be seen from Fig. 7, on the basis of the foregoing modules the apparatus adds the following modules:
an image third obtaining module 701, configured to obtain a color training image and a depth training image of a first object in a first scene;
a second rectangle determining module 702, configured to determine, from the depth training image, the first minimum bounding rectangle corresponding to the first object;
an image fourth obtaining module 703, configured to obtain, from the color training image, the target color training image corresponding to the first minimum bounding rectangle;
a training set obtaining module 704, configured to obtain a training set from the target color training image;
a model training module 705, configured to train a model to be trained with the training set and an optimization function to obtain the image segmentation model.
The image third obtaining module 701, the second rectangle determining module 702, the image fourth obtaining module 703, the training set obtaining module 704 and the model training module 705 jointly complete the training of the model to be trained, finally obtaining the image segmentation model used for image segmentation in the apparatus shown in Fig. 6.
It can be understood that, in practical applications, to improve the real-time performance of the segmentation, the image segmentation model may impose a size requirement on the images it processes: the size of the image it accepts for segmentation may be smaller than the size of the target color image, and the aspect ratio it accepts may not match the aspect ratio of the target color image. For these situations, the image segmentation apparatus provided by the embodiments of the present application may further comprise:
a size and aspect ratio determining module, configured to determine the size and the aspect ratio of the image that the image segmentation model accepts for segmentation;
a cropping module, configured to crop the target color image according to the aspect ratio to obtain a cropped image;
a shrinking module, configured to shrink the cropped image according to the size to obtain a reduced image;
the image segmentation module 604 then specifically comprises:
a first segmentation unit, configured to segment the reduced image with the image segmentation model.
It can be understood that the reduced image obtained after cropping and shrinking the target color image still contains color information. In practical applications, the pixel values of the reduced image may range from 0 to 255, and the value ranges of different reduced images may differ. To guarantee the consistency of the pixel value ranges of the images processed by the image segmentation model, the apparatus in this embodiment may further comprise:
a normalization module, configured to normalize the reduced image to obtain a normalized image;
the image segmentation module 604 then specifically comprises:
a second segmentation unit, configured to segment the normalized image with the image segmentation model.
Optionally, if the target object is an object with a face, such as a person, the first rectangle determining module 602 of the apparatus in this embodiment may specifically comprise:
a face detection unit, configured to determine the face position of the target object in the depth image by a face detection method;
a region growing unit, configured to grow the depth region block of the target object from the face position by a region growing method;
a rectangle obtaining unit, configured to obtain the minimum bounding rectangle of the depth region block.
On the basis of the image segmentation method and the image segmentation apparatus provided by the foregoing embodiments, the present application correspondingly further provides a terminal device. Its specific implementation is described below with reference to the embodiment and the drawings.
Referring to Fig. 8, which is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
As shown in Fig. 8, the terminal device provided by this embodiment comprises:
a camera device 801 and a processor 802;
the camera device 801 is configured to capture a depth image and a color image and to send the depth image and the color image to the processor 802;
the processor 802 is configured to run a computer program which, when run, performs the image segmentation method of the foregoing method embodiments.
In practical applications, which can be the weaker equipment of the CPU processing capacity such as mobile phone or tablet computer.This For the concrete type of terminal device without limiting in embodiment.
As mentioned above, the image segmentation method provided by the present application performs image segmentation using the color image and the depth image of the target object in the scene together with a pre-trained image segmentation model. In the depth image, the depth information of the target object can be used to distinguish the target object from other objects and from the background, so that the minimum circumscribed rectangle corresponding to the target object is preliminarily determined in the depth image. Since the depth image and the color image correspond to the same scene containing the target object and there is a mapping relation between the pixels of the two images, the target color image corresponding to the minimum circumscribed rectangle can be obtained from the color image. To finely segment the target object out of the target color image, the target color image is segmented using the image segmentation model. Because the size of the minimum circumscribed rectangle is smaller than the size of the depth image and of the color image, the target color image is smaller than the originally obtained color image, which reduces the amount of data the model needs to process and effectively improves the image segmentation efficiency of the image processing model. In addition, the depthwise separable convolution structure in the image segmentation model can effectively reduce the amount of model computation. Because the amount of model computation is small and the image to be processed is small, the model runs fast, so this method enables real-time image segmentation on terminal devices with relatively weak CPU processing capability, such as mobile phones.
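Putting these steps together, a minimal, self-contained sketch of the inference path might look as follows. It assumes the depth image and the color image are already aligned pixel-for-pixel (so a rectangle found in the depth image can be applied directly to the color image), uses a simple depth-threshold stand-in for the face-seeded region growing, and uses a trivial stand-in function for the trained image segmentation model; the tolerance value and the stand-in model are illustrative assumptions only.

```python
# Hedged end-to-end sketch: locate the target via the depth image, crop the
# aligned color image, and segment only the smaller target color image.
import numpy as np


def segment_frame(color: np.ndarray, depth: np.ndarray, seed, model) -> np.ndarray:
    # 1. Minimum circumscribed rectangle of the target from the depth image
    #    (a simple depth threshold around the seed stands in for the
    #    face-seeded region growing described above).
    near = np.abs(depth - depth[seed]) < 300.0          # assumed tolerance
    rows, cols = np.where(near)
    y0, y1, x0, x1 = rows.min(), rows.max(), cols.min(), cols.max()
    # 2. Pixel-wise depth/color alignment assumed: reuse the rectangle directly.
    target_color = color[y0:y1 + 1, x0:x1 + 1]
    # 3. Segment only the (much smaller) target color image.
    mask_small = model(target_color)
    # 4. Extend the result back to the full color image size (cf. claim 4).
    full_mask = np.zeros(color.shape[:2], dtype=mask_small.dtype)
    full_mask[y0:y1 + 1, x0:x1 + 1] = mask_small
    return full_mask


def dummy_model(img: np.ndarray) -> np.ndarray:
    # Stand-in for the trained image segmentation model.
    return (img.mean(axis=2) > 128).astype(np.uint8)


color_img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
depth_img = np.random.randint(500, 3000, (480, 640)).astype(np.float32)
mask = segment_frame(color_img, depth_img, seed=(240, 320), model=dummy_model)
```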
Further, since the main body within the minimum circumscribed rectangle is the target object, the main body of the target color image obtained according to the minimum circumscribed rectangle is also the target object. Compared with processing the full color image, performing image segmentation on the target color image with the model correspondingly reduces the segmentation difficulty. Meanwhile, the inverted residual and linear bottleneck convolution structure in the image segmentation model ensures the expressive power of the model, and the residual structure of the Xception network can reduce the loss of detail information. Because the target color image reduces the processing difficulty for the model, the expressive power of the model is strong and little detail information is lost, this method can improve the fineness of image segmentation and achieve fine matting.
Real-time and fine matting can thus be achieved on devices with relatively weak CPU processing capability by the image segmentation method of this embodiment; correspondingly, the terminal device provided in this embodiment can also achieve the corresponding effect.
As shown in Fig. 9, optionally, the terminal device provided in this embodiment may further comprise a display device 803.
As an example, the display device 803 may be a display screen. After the processor 802 runs the computer program and obtains the segmented image, the segmented image may be sent to the display device 803 for display.
Optionally, the terminal device provided in this embodiment may further comprise a memory 804. The memory 804 is configured to store the aforementioned computer program.
It should be noted that the embodiments in this specification are described in a progressive manner; for the same or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the device and system embodiments are described relatively simply because they are substantially similar to the method embodiments, and for relevant parts reference may be made to the description of the method embodiments. The device and system embodiments described above are merely illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.
The above is only a specific embodiment of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that can readily be conceived by a person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image segmentation method, characterized by comprising:
obtaining a color image and a depth image of a target object in a scene;
determining, from the depth image, a minimum circumscribed rectangle corresponding to the target object;
obtaining, from the color image, a target color image corresponding to the minimum circumscribed rectangle by using a mapping relation between pixels of the depth image and pixels of the color image;
segmenting the target color image using an image segmentation model to obtain a segmented image, wherein the image segmentation model comprises: a depthwise separable convolution structure, an inverted residual and linear bottleneck convolution structure, and a residual structure of an Xception network.
2. The method according to claim 1, characterized in that, before the segmenting of the target color image using the image segmentation model, the method further comprises:
determining the size and aspect ratio of the image that the image segmentation model allows to segment;
cropping the target color image according to the aspect ratio to obtain a cropped image;
reducing the cropped image according to the size to obtain a reduced image;
wherein the segmenting of the target color image using the image segmentation model specifically comprises:
segmenting the reduced image using the image segmentation model.
3. The method according to claim 2, characterized in that, after the reducing of the cropped image according to the size to obtain the reduced image, the method further comprises:
normalizing the reduced image to obtain a normalized image;
wherein the segmenting of the reduced image using the image segmentation model specifically comprises:
segmenting the normalized image using the image segmentation model.
4. The method according to claim 2, characterized in that, after the obtaining of the segmented image, the method further comprises:
extending the segmented image to a size consistent with the size of the color image, and obtaining a foreground and a background according to the extended image.
5. The method according to claim 1, characterized in that the determining, from the depth image, of the minimum circumscribed rectangle corresponding to the target object specifically comprises:
determining a facial position of the target object in the depth image by a face detection method;
growing a depth region block of the target object using a region growing method according to the facial position;
obtaining the minimum circumscribed rectangle of the depth region block.
6. The method according to any one of claims 1-5, characterized in that, before the segmenting of the target color image using the image segmentation model, the method further comprises:
obtaining a color training image and a depth training image of a first object in a first scene;
determining, from the depth training image, a first minimum circumscribed rectangle corresponding to the first object;
obtaining, from the color training image, a target color training image corresponding to the first minimum circumscribed rectangle;
obtaining a training set using the target color training image;
training a to-be-trained model using the training set and an optimization function to obtain the image segmentation model.
7. The method according to claim 6, characterized in that the optimization function comprises a root mean square propagation optimization function and a stochastic gradient descent optimization function, and the training of the to-be-trained model using the training set and the optimization function specifically comprises:
training the to-be-trained model using the training set and the root mean square propagation optimization function;
judging whether a current intersection-over-union ratio is smaller than a preset first threshold, and if so, training the to-be-trained model using the training set and the stochastic gradient descent optimization function;
judging whether the current intersection-over-union ratio is smaller than a preset second threshold, and if so, ending the training.
8. An image segmentation device, characterized by comprising:
a first image obtaining module, configured to obtain a color image and a depth image of a target object in a scene;
a first rectangle determining module, configured to determine, from the depth image, a minimum circumscribed rectangle corresponding to the target object;
a second image obtaining module, configured to obtain, from the color image, a target color image corresponding to the minimum circumscribed rectangle by using a mapping relation between pixels of the depth image and pixels of the color image;
an image segmentation module, configured to segment the target color image using an image segmentation model to obtain a segmented image, wherein the image segmentation model comprises: a depthwise separable convolution structure, an inverted residual and linear bottleneck convolution structure, and a residual structure of an Xception network.
9. The device according to claim 8, characterized by further comprising:
a third image obtaining module, configured to obtain a color training image and a depth training image of a first object in a first scene;
a second rectangle determining module, configured to determine, from the depth training image, a first minimum circumscribed rectangle corresponding to the first object;
a fourth image obtaining module, configured to obtain, from the color training image, a target color training image corresponding to the first minimum circumscribed rectangle;
a training set obtaining module, configured to obtain a training set using the target color training image;
a model training module, configured to train a to-be-trained model using the training set and an optimization function to obtain the image segmentation model.
10. A terminal device, characterized by comprising: a camera device and a processor;
wherein the camera device is configured to capture a depth image and a color image, and to send the depth image and the color image to the processor;
the processor is configured to run a computer program which, when run, executes the image segmentation method according to any one of claims 1-7.
CN201910355864.8A 2019-04-29 2019-04-29 A kind of image partition method, device and terminal device Pending CN110084828A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910355864.8A CN110084828A (en) 2019-04-29 2019-04-29 A kind of image partition method, device and terminal device


Publications (1)

Publication Number Publication Date
CN110084828A true CN110084828A (en) 2019-08-02

Family

ID=67417580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910355864.8A Pending CN110084828A (en) 2019-04-29 2019-04-29 A kind of image partition method, device and terminal device

Country Status (1)

Country Link
CN (1) CN110084828A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106384353A (en) * 2016-09-12 2017-02-08 佛山市南海区广工大数控装备协同创新研究院 Target positioning method based on RGBD
CN107563388A (en) * 2017-09-18 2018-01-09 东北大学 A kind of convolutional neural networks object identification method based on depth information pre-segmentation
CN108764072A (en) * 2018-05-14 2018-11-06 浙江工业大学 A kind of blood cell subsets image classification method based on Multiscale Fusion

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FRANCOIS CHOLLET et al.: "Xception: Deep Learning with Depthwise Separable Convolutions", HTTPS://ARXIV.ORG/PDF/1610.02357.PDF *
MARK SANDLER et al.: "MobileNetV2: Inverted Residuals and Linear Bottlenecks", HTTPS://ARXIV.ORG/PDF/1801.04381.PDF *
YUZHIJIEDINGZHE: "Xception: Deep Learning with Depthwise Separable Convolutions — personal notes", HTTPS://BLOG.CSDN.NET/YUZHIJIEDINGZHE/ARTICLE/DETAILS/78231942 *
大师兄: "Xception explained in detail", HTTPS://ZHUANLAN.ZHIHU.COM/P/50897945 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111210436A (en) * 2019-12-24 2020-05-29 中国科学院宁波工业技术研究院慈溪生物医学工程研究所 Lens segmentation method, device and storage medium
CN111210436B (en) * 2019-12-24 2022-11-11 中国科学院宁波工业技术研究院慈溪生物医学工程研究所 Lens segmentation method, device and storage medium
CN111724338A (en) * 2020-03-05 2020-09-29 中冶赛迪重庆信息技术有限公司 Turntable abnormity identification method, system, electronic equipment and medium
CN111539937A (en) * 2020-04-24 2020-08-14 北京海益同展信息科技有限公司 Object index detection method and livestock weight detection method and device
CN112115913A (en) * 2020-09-28 2020-12-22 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment and storage medium
CN112115913B (en) * 2020-09-28 2023-08-25 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110084828A (en) A kind of image partition method, device and terminal device
US11361585B2 (en) Method and system for face recognition via deep learning
CN107291945B (en) High-precision clothing image retrieval method and system based on visual attention model
CN109376596B (en) Face matching method, device, equipment and storage medium
US9798774B1 (en) Graph data search method and apparatus
Ružić et al. Context-aware patch-based image inpainting using Markov random field modeling
US11403874B2 (en) Virtual avatar generation method and apparatus for generating virtual avatar including user selected face property, and storage medium
US20150326845A1 (en) Depth value restoration method and system
CN110378338A (en) A kind of text recognition method, device, electronic equipment and storage medium
CN102186067B (en) Image frame transmission method, device, display method and system
CN114186632B (en) Method, device, equipment and storage medium for training key point detection model
CN105243371A (en) Human face beauty degree detection method and system and shooting terminal
CN103810490A (en) Method and device for confirming attribute of face image
CN105096353B (en) Image processing method and device
CN110136144A (en) A kind of image partition method, device and terminal device
CN108734126B (en) Beautifying method, beautifying device and terminal equipment
US10949991B2 (en) Method and apparatus for determining position of pupil
CN106570909A (en) Skin color detection method, device and terminal
CN105046661A (en) Method, apparatus and intelligent terminal for improving video beautification efficiency
CN111178514A (en) Neural network quantification method and system
CN111814744A (en) Face detection method and device, electronic equipment and computer storage medium
CN101339661A (en) Real time human-machine interaction method and system based on moving detection of hand held equipment
CN109325903A (en) The method and device that image stylization is rebuild
CN113421204A (en) Image processing method and device, electronic equipment and readable storage medium
CN105701775B (en) A kind of image de-noising method based on improvement self-adapting dictionary study

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190802)