CN110349165A - Image processing method, device, electronic equipment and computer-readable medium - Google Patents


Info

Publication number
CN110349165A
CN110349165A (application CN201810290778.9A)
Authority
CN
China
Prior art keywords
image
processing
specified region
gradient
background area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810290778.9A
Other languages
Chinese (zh)
Inventor
毛伟 (Mao Wei)
刘享军 (Liu Xiangjun)
杨超 (Yang Chao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201810290778.9A
Publication of CN110349165A
Legal status: Pending


Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T11/00 - 2D [Two Dimensional] image generation
            • G06T11/40 - Filling a planar surface by adding surface attributes, e.g. colour or texture
          • G06T7/00 - Image analysis
            • G06T7/10 - Segmentation; Edge detection
              • G06T7/11 - Region-based segmentation
              • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
          • G06T2207/00 - Indexing scheme for image analysis or image enhancement
            • G06T2207/10 - Image acquisition modality
              • G06T2207/10004 - Still image; Photographic image
            • G06T2207/20 - Special algorithmic details
              • G06T2207/20048 - Transform domain processing
              • G06T2207/20081 - Training; Learning
              • G06T2207/20084 - Artificial neural networks [ANN]
            • G06T2207/30 - Subject of image; Context of image processing
              • G06T2207/30196 - Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to an image processing method, an image processing apparatus, an electronic device, and a computer-readable medium, in the field of image processing. The method comprises: segmenting an image to obtain a specified region and a background region of the image; applying colouring processing to the specified region; computing the gradient distribution information of the coloured specified region and of the background region; and performing Poisson reconstruction on the image from the gradient distribution information of the specified region and the background region to obtain a display image. The image processing method, apparatus, electronic device, and computer-readable medium of the present disclosure can efficiently and accurately render a virtual hair-dyeing effect and improve the displayed image.

Description

Image processing method, device, electronic equipment and computer-readable medium
Technical field
The present disclosure relates to the field of image processing, and in particular to an image processing method, an image processing apparatus, an electronic device, and a computer-readable medium.
Background technique
Hair dyeing has become a common way for people to change their look. Because the result of dyeing is not known in advance, most people approach it cautiously. To give users a better reference for their choice, and to make images more interesting, image processing techniques that virtually dye the hair in a user's picture have begun to emerge. The existing two-dimensional hair-dyeing schemes in the prior art present different dyeing effects to the user, through image processing, after the user uploads a picture. Although the prior art bridges the user's virtual and real beautification, it has drawbacks such as not being real-time, inaccurate hair segmentation, and colour distortion.
Therefore, a new image processing method, apparatus, electronic device, and computer-readable medium are needed.
The above information disclosed in this Background section is only for enhancing the understanding of the background of the present disclosure, and may therefore contain information that does not constitute prior art already known to a person of ordinary skill in the art.
Summary of the invention
In view of this, the present disclosure provides an image processing method, apparatus, electronic device, and computer-readable medium that can efficiently and accurately render a virtual hair-dyeing effect and improve the displayed image.
Other features and advantages of the disclosure will become apparent from the following detailed description, or will in part be learned through practice of the disclosure.
According to one aspect of the disclosure, an image processing method is proposed, comprising: segmenting an image to obtain a specified region and a background region of the image; applying colouring processing to the specified region; computing the gradient distribution information of the coloured specified region and of the background region; and performing Poisson reconstruction on the image from the gradient distribution information of the specified region and the background region to obtain a display image.
In an exemplary embodiment of the disclosure, the method further comprises: building, by a deep learning algorithm, an image segmentation model for segmenting the image. Segmenting the image to obtain the specified region and the background region then comprises: feeding the image into the image segmentation model to obtain the specified region and the background region of the image.
In an exemplary embodiment of the disclosure, building the image segmentation model by a deep learning algorithm comprises: preprocessing sample images to generate sample data; annotating the sample data; and training a deep learning model on the sample data to obtain the image segmentation model.
In an exemplary embodiment of the disclosure, annotating the sample data comprises: drawing polygonal annotations on the sample data with labelme.
In an exemplary embodiment of the disclosure, preprocessing the sample images comprises: applying image augmentation to the sample images.
In an exemplary embodiment of the disclosure, the deep learning model is a fully convolutional neural network model, and training it on the sample data to obtain the image segmentation model comprises: feeding the sample data into the fully convolutional network model; processing the sample data through eight convolutional layers to obtain convolved data; and determining the image segmentation model from the sample labels and the convolved data.
In an exemplary embodiment of the disclosure, computing the gradient distribution information of the coloured specified region and of the background region comprises: obtaining first gradient distribution information of the coloured specified region by a Laplace transform; obtaining second gradient distribution information of the background region by a Laplace transform; and obtaining the gradient distribution information from the first gradient distribution information and the second gradient distribution information.
In an exemplary embodiment of the disclosure, obtaining the gradient distribution information from the first and second gradient distribution information comprises: overlaying the first gradient distribution information, as a mask, onto the second gradient distribution information to obtain the gradient distribution information.
In an exemplary embodiment of the disclosure, performing Poisson reconstruction on the image from the gradient distribution information of the specified region and the background region comprises: obtaining image divergence information from the gradient distribution information; and solving the coefficient matrix of the Poisson reconstruction from the image divergence information to perform Poisson reconstruction on the image.
According to one aspect of the disclosure, an image processing apparatus is proposed, comprising: an image segmentation module for segmenting an image to obtain a specified region and a background region of the image; a region colouring module for applying colouring processing to the specified region; a gradient information module for computing the gradient distribution information of the coloured specified region and of the background region; and an image reconstruction module for performing Poisson reconstruction on the image from the gradient distribution information of the specified region and the background region to obtain a display image.
In an exemplary embodiment of the disclosure, the apparatus further comprises: a model module for building, by a deep learning algorithm, an image segmentation model for segmenting the image.
According to one aspect of the disclosure, an electronic device is proposed, comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described above.
According to one aspect of the disclosure, a computer-readable medium is proposed, on which a computer program is stored; when executed by a processor, the program implements the method described above.
The image processing method, apparatus, electronic device, and computer-readable medium of the present disclosure can efficiently and accurately render a virtual hair-dyeing effect and improve the displayed image.
It should be understood that the above general description and the following detailed description are merely exemplary and do not limit the disclosure.
Detailed description of the invention
The above and other objects, features, and advantages of the disclosure will become more apparent from the detailed description of its example embodiments with reference to the accompanying drawings. The drawings described below are only some embodiments of the present disclosure; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a system block diagram of an image processing method and apparatus according to an exemplary embodiment.
Fig. 2 is a flowchart of an image processing method according to an exemplary embodiment.
Fig. 3 is a flowchart of an image processing method according to another exemplary embodiment.
Fig. 4 is a schematic diagram for an image processing method according to another exemplary embodiment.
Fig. 5 is a schematic diagram of a fully convolutional neural network in an image processing method according to an exemplary embodiment.
Fig. 6 is a flowchart of an image processing method according to another exemplary embodiment.
Figs. 7A-7C are schematic diagrams for an image processing method according to another exemplary embodiment.
Fig. 8 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram of an electronic device according to an exemplary embodiment.
Fig. 10 is a schematic diagram of a computer-readable storage medium according to an exemplary embodiment.
Specific embodiment
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments can, however, be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concepts of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the figures, and their repeated description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a full understanding of the embodiments of the disclosure. Those skilled in the art will recognize, however, that the technical solutions of the disclosure may be practised without one or more of the specific details, or with other methods, components, devices, steps, and so on. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are merely functional entities and do not necessarily correspond to physically separate entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flowcharts shown in the figures are merely illustrative; they need not include all contents and operations/steps, nor be executed in the order described. For example, some operations/steps may be decomposed while others are merged or partly merged, so the actual order of execution may change with the actual situation.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms; the terms only serve to distinguish one component from another. Thus, a first component discussed below could be called a second component without departing from the teachings of the disclosed concept. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Those skilled in the art will appreciate that the figures are schematic diagrams of example embodiments; the modules or flows in the figures are not necessarily required to practise the disclosure and therefore cannot be used to limit its scope of protection.
Fig. 1 is a system block diagram of an image processing method and apparatus according to an exemplary embodiment.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 is the medium that provides communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fibre-optic cables.
Users may use the terminal devices 101, 102, 103 to interact with the server 105 over the network 104 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as shopping applications, web browsers, search applications, instant messaging tools, mail clients, and social platform software.
The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, laptop computers, and desktop computers.
The server 105 may be a server that provides various services, for example a back-end management server that supports the websites users browse with the terminal devices 101, 102, 103. The back-end management server may analyse and process received data such as image processing requests, and feed the processing result (for example, the processed image) back to the terminal devices.
The server 105 may, for example: segment an image to obtain a specified region and a background region of the image; apply colouring processing to the specified region; compute the gradient distribution information of the coloured specified region and of the background region; and perform Poisson reconstruction on the image from the gradient distribution information of the specified region and the background region to obtain a display image. The server 105 may also push the display image to the user's terminal device 101, 102, or 103 so that the terminal device can display it.
The server 105 may be a single physical server or, for example, a cluster of multiple servers. Note that the image processing method provided by the embodiments of the present disclosure is generally executed by the server 105, and accordingly the image processing apparatus is generally deployed on the server 105, while the page through which the user browses pictures is normally on the terminal device 101, 102, or 103.
Fig. 2 is a flowchart of an image processing method according to an exemplary embodiment. The image processing method of this application includes steps S202 to S208.
As shown in Fig. 2, in S202 the image is segmented to obtain a specified region and a background region of the image. For example, a user uploads a picture through a web page or a mobile terminal; after the back-end server obtains the picture, it segments the image so as to separate the user's hair region, which is to be dyed, from the other regions, which need no dyeing.
In one embodiment, an image segmentation model for segmenting the image is built by a deep learning algorithm, and the image is fed into the segmentation model to obtain the specified region and the background region of the image. The deep learning algorithm includes a fully convolutional network (Region-based Fully Convolutional Networks, R-FCN). The inventors of the present application found that separating the user's hair from the other parts of the image is a pixel-level segmentation task, i.e. a semantic segmentation and edge detection problem. The classical approach to semantic segmentation and edge detection is to take an image patch centred on each pixel and train a classifier on the patch's features; at test time, an image patch is likewise taken around every pixel of the test image and classified, and the classification result serves as the prediction for that pixel. This pixel-by-pixel patch classification is very time-consuming, and a further shortcoming of conventional segmentation techniques is that, limited by the patch size, they cannot model larger context, which degrades the algorithm's performance. In this embodiment, therefore, the image segmentation model that separates the image is built with an R-FCN.
Unlike a classical convolutional neural network (CNN), which follows the convolutional layers with fully connected layers to obtain a fixed-length feature vector for classification, an R-FCN can accept an input image of arbitrary size. It upsamples the feature map of the last convolutional layer with deconvolution layers to restore it to the same size as the input image, so that a prediction is produced for every pixel while the spatial information of the original input image is preserved, and finally classifies pixel by pixel on the upsampled feature map.
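The deconvolution upsampling just described can be sketched concretely. Below is a minimal NumPy illustration, not taken from the patent: a stride-2 transposed convolution with a fixed 2x2 kernel restores a coarse 2x2 score map to 4x4, whereas a real FCN learns the kernel weights by training.

```python
import numpy as np

def upsample2x(feat):
    """2x upsampling as a stride-2 transposed convolution with a fixed
    2x2 kernel of ones (an FCN learns these weights instead)."""
    kernel = np.ones((2, 2))
    h, w = feat.shape
    out = np.zeros((2 * h, 2 * w))
    for i in range(h):
        for j in range(w):
            # each coarse score contributes a weighted 2x2 patch
            out[2 * i:2 * i + 2, 2 * j:2 * j + 2] += feat[i, j] * kernel
    return out

coarse = np.array([[0.2, 0.8],
                   [0.6, 0.4]])   # a 2x2 coarse score map
fine = upsample2x(coarse)         # restored to 4x4
print(fine.shape)  # (4, 4)
```

With non-overlapping 2x2 blocks each coarse score simply fills its block; a learned (e.g. bilinear-initialized) kernel would instead interpolate between neighbouring scores.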
In S204, colouring processing is applied to the specified region. For example, depending on the user's operation, colour data selected by the user is received, and the specified region is coloured with the selected colour.
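The colouring step can be sketched as a per-pixel blend toward the user-selected colour inside the segmented hair mask. The patent does not give a blending formula, so the function below, its `strength` parameter, and the sample values are purely illustrative:

```python
import numpy as np

def tint_region(image, mask, target_rgb, strength=0.6):
    """Blend a target colour into the masked (hair) pixels only.
    A naive per-pixel tint: the patent notes such direct pixel
    processing loses hair texture, which the later gradient and
    Poisson steps are designed to restore."""
    out = image.astype(np.float64).copy()
    target = np.asarray(target_rgb, dtype=np.float64)
    sel = mask.astype(bool)
    out[sel] = (1.0 - strength) * out[sel] + strength * target
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((2, 2, 3), 100, dtype=np.uint8)   # flat grey image
mask = np.array([[1, 0], [0, 0]], dtype=bool)   # one "hair" pixel
tinted = tint_region(img, mask, (200, 50, 50))
print(tinted[0, 0], tinted[0, 1])  # masked pixel shifted, unmasked unchanged
```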
In S206, the gradient distribution information of the coloured specified region and of the background region is computed. For example: first gradient distribution information of the coloured specified region is obtained by a Laplace transform (i.e. by applying the Laplacian operator); second gradient distribution information of the background region is obtained in the same way; and the combined gradient distribution information is obtained from the first and second gradient distribution information.
In one embodiment, obtaining the gradient distribution information from the first and second gradient distribution information comprises: overlaying the first gradient distribution information, as a mask, onto the second gradient distribution information to obtain the gradient distribution information.
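The mask overlay of the two gradient fields can be sketched as follows; a hedged NumPy illustration in which `laplacian` is a plain 5-point discrete Laplacian standing in for the patent's Laplace transform, and the boolean mask plays the role of the segmented hair region (all names are illustrative):

```python
import numpy as np

def laplacian(img):
    """5-point discrete Laplacian with zero padding at the borders."""
    p = np.pad(img.astype(np.float64), 1)
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

def combined_guidance(colored, background, mask):
    """Overlay the coloured region's gradient field V onto the
    background's field S wherever the segmentation mask is set."""
    V = laplacian(colored)
    S = laplacian(background)
    return np.where(mask, V, S)

bg = np.arange(16, dtype=np.float64).reshape(4, 4)  # toy background
colored = 2.0 * bg                                  # toy coloured region
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True        # pretend this is the hair region
G = combined_guidance(colored, bg, mask)
print(G.shape)  # (4, 4)
```

Outside the mask `G` carries the background's Laplacian; inside it carries the coloured region's, which is exactly the guidance field the Poisson step consumes.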
The concept of a mask in digital image processing is borrowed from PCB plate-making: in semiconductor manufacturing, many chip process steps use photolithography, and the pattern "negative" used in those steps is called a mask. An image mask is similar: a selected image, figure, or object occludes all or part of the image being processed, so as to control the region or course of the image processing.
In S208, Poisson reconstruction is performed on the image from the gradient distribution information of the specified region and the background region to obtain a display image. For example, image divergence information is obtained from the gradient distribution information, and the coefficient matrix of the Poisson reconstruction is solved from the image divergence information to perform Poisson reconstruction on the image.
Poisson reconstruction is a method of image fusion processing. As a surface reconstruction technique it is an implicit-function method: an object can be represented, for example, by an indicator function that is 1 inside the object and 0 outside it; solving for this function and then extracting an isosurface yields the surface. Solving for the function is precisely the process of constructing a Poisson equation and solving it. Poisson reconstruction is an image fusion method that can preserve surface detail.
According to the image processing method of the present disclosure, after the hair region is segmented out, colouring is performed by applying colour processing to each pixel while the texture gradient variation of the hair region is computed, and after colouring the display image is obtained by Poisson reconstruction. In this way the virtual hair-dyeing effect can be rendered efficiently and accurately, and the displayed image is improved.
It will be clearly understood that the present disclosure describes how particular examples are formed and used, but the principles of the disclosure are not limited to any detail of these examples; rather, on the basis of the teaching of the disclosure, these principles can be applied to many other embodiments.
Fig. 3 is a flowchart of an image processing method according to another exemplary embodiment. Fig. 3 describes in detail how the image segmentation model used to segment the image is built by a deep learning algorithm.
In S302, sample images are preprocessed to generate sample data, for example by applying image augmentation to the sample images.
In one embodiment, the sample image data is converted into a predefined type, for example the SegmentationClass format of PASCAL VOC 2012; the dataset may include short selfie videos and data with the complete face-and-hair region. After a certain number of samples are collected, image augmentation is applied to increase the sample data: new pictures can be generated, for example, by cropping, applying small rotations, mirroring, changing channel colours, and slightly blurring the pictures.
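The augmentation operations listed above can be sketched as follows. This is a toy NumPy version under stated simplifications: a 90-degree `np.rot90` stands in for the small rotations the patent describes, and the crop keeps three quarters of each side:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """Produce extra training samples by mirroring, a rotation,
    a random crop, and a per-channel colour shift."""
    h, w, _ = img.shape
    mirrored = img[:, ::-1]            # horizontal mirror
    rotated = np.rot90(img)            # stand-in for a small rotation
    top = int(rng.integers(0, h // 4 + 1))
    left = int(rng.integers(0, w // 4 + 1))
    crop = img[top:top + 3 * h // 4, left:left + 3 * w // 4]
    shifted = np.clip(img.astype(int) + np.array([10, -10, 0]),
                      0, 255).astype(np.uint8)  # channel colour change
    return mirrored, rotated, crop, shifted

sample = np.zeros((8, 8, 3), dtype=np.uint8)
mirrored, rotated, crop, shifted = augment(sample)
print(mirrored.shape, crop.shape)  # (8, 8, 3) (6, 6, 3)
```

In practice the segmentation labels must be transformed together with the images (mirroring and cropping the mask identically), which this sketch omits.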
In S304, the sample data is annotated, for example by drawing polygonal annotations on the sample data with labelme. The annotation effect can be, for example, as shown in Fig. 4.
In S306, a deep learning model is trained on the sample data to obtain the image segmentation model. The deep learning model is a fully convolutional neural network model, and training it comprises: feeding the sample data into the fully convolutional network model; processing the sample data through eight convolutional layers to obtain convolved data; and determining the image segmentation model from the sample labels and the convolved data.
Fig. 5 is a schematic diagram of the fully convolutional network. As shown in Fig. 5, the FCN converts the fully connected layers of a traditional CNN into convolutional layers. In the traditional CNN structure, the first five layers are convolutional layers, the sixth and seventh layers are one-dimensional vectors of length 4096 each, and the eighth layer is a one-dimensional vector of length 1000 corresponding to the probabilities of 1000 classes. The FCN expresses these three layers as convolutional layers whose kernel sizes (channels, width, height) are (4096, 1, 1), (4096, 1, 1), and (1000, 1, 1) respectively. All layers are then convolutional; after repeated convolution (and pooling) the resulting image becomes smaller and its resolution lower. To recover the resolution of the original image from this coarse, low-resolution image, the FCN uses upsampling. For example, after five rounds of convolution (and pooling), the resolution of the image is reduced by factors of 2, 4, 8, 16, and 32 in turn, so the output image of the last layer needs a 32x upsampling to regain the size of the original image.
This upsampling is realized by deconvolution. Deconvolving the output of the fifth layer (a 32x enlargement) back to the original image size still gives a result that is inaccurate and loses some details. For example, the outputs of the fourth and third layers can also be deconvolved in turn, requiring 16x and 8x upsampling respectively, and the result is then finer.
As shown in Fig. 5, the first row corresponds to FCN-32s, the second row to FCN-16s, and the third row to FCN-8s. Starting from FCN-32s to explain the upsampling process: the network contains five pooling layers, so the feature map of conv7 is 1/32 of the original image. The leftmost image is 32x32, and the convolutions in an FCN do not change the image size, so pool1 gives 16x16, pool2 gives 8x8, pool3 gives 4x4, pool4 gives 2x2, and pool5 gives 1x1; the feature map corresponding to conv7 is therefore 1x1. After 32x upsampling the picture becomes 32x32 again. Here the FCN adds a convolutional layer whose output is 32 times the size of its input; the kernel size can, for example, also be 32, and the 32x32 weight variables are trained by back-propagation so that the two ends stay consistent, completing one 32s upsampling.
Next, the upsampling process of FCN-16s: a 2x conv7 operation is first performed on conv7. In fact only one convolutional layer is added here; the feature map after this convolution is twice the size of conv7 and can be obtained from pool5 and 2x conv7. At this point the size of 2x conv7 matches that of pool4; the two are fused, and a 16x upsampling prediction is carried out. As with FCN-32s, a convolutional layer is added whose output is 16 times the size of its input; pool4 is 2x2, and enlarged 16 times it becomes 32x32, so the final image again has the original size, completing one 16s upsampling. Upsampling in an FCN is in fact achieved by adding convolutional layers, trained by back-propagation so that the two ends stay consistent; the effect of these layers can be regarded as the inverse of pooling.
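The size bookkeeping in the walkthrough above (five 2x2 poolings shrink the map to 1/32 of the input, so a 32x32 input reaches 1x1 at conv7) can be verified with a small helper:

```python
def feature_map_side(input_side: int, num_pools: int = 5) -> int:
    """Spatial side length after `num_pools` 2x2 poolings; the
    convolutions themselves are assumed not to change the size."""
    side = input_side
    for _ in range(num_pools):
        side //= 2
    return side

# pool1..pool5 for a 32x32 input, as in the FCN-32s walkthrough
sides = [feature_map_side(32, k) for k in range(1, 6)]
print(sides)  # [16, 8, 4, 2, 1]
```

The same arithmetic explains why FCN-16s fuses pool4 with the 2x-upsampled conv7: both sit at 1/16 of the input size.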
According to the image processing method of the present disclosure, the image segmentation module obtained from the fully convolutional network model can accurately extract the hair region in the user image. The image segmentation module can accept input images of arbitrary size, without requiring all training and test images to have the same size, and it processes images more efficiently, avoiding the repeated storage and repeated convolution computation brought about by using pixel blocks.
Fig. 6 is a kind of flow chart of the image processing method shown according to another exemplary embodiment.Fig. 6 is to step S206 It is described in detail with S208.
In S602, the first gradient distribution information of the specified region after the coloring processing is obtained by the Laplace transform. If the pixels of the hair region are recolored directly, the dyed region looks distorted, because real hair shows gradient variation under different illumination. Therefore, the gradient distribution of the hair region is solved by the Laplace transform before the color processing is carried out; this brings out the texture of the hair and keeps the strands visible. For example, the first gradient distribution information V of the specified region after the coloring processing can be obtained by the Laplace transform.
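A minimal sketch of S602 (not the patent's code; `region` and `laplacian` are assumed names), using the discrete 4-neighbour Laplacian to produce the gradient distribution V:

```python
import numpy as np

# Discrete 4-neighbour Laplacian with zero borders: each interior pixel gets
# the sum of its four neighbours minus four times itself.
def laplacian(img):
    out = np.zeros_like(img, dtype=float)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4.0 * img[1:-1, 1:-1])
    return out

region = np.random.rand(64, 64)   # stand-in for the coloured specified region
V = laplacian(region)             # first gradient distribution information
```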
In S604, the second gradient distribution information of the background area is obtained by the Laplace transform; this second gradient distribution information is denoted S.
In S606, the first gradient distribution information is laid, as a mask, over the second gradient distribution information to obtain the combined gradient distribution information: the parts of S covered by the first gradient distribution information V are replaced by V directly.
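A sketch of S606 (helper and argument names are assumptions, not from the patent): the hair mask selects which parts of the background gradient S are covered by V:

```python
import numpy as np

# Cover S with V wherever the boolean hair mask is set; elsewhere S is kept.
def overlay_gradients(V, S, hair_mask):
    combined = S.copy()
    combined[hair_mask] = V[hair_mask]
    return combined
```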
In S608, the image divergence information is obtained from the gradient distribution information. The preceding steps yield the gradient value of each pixel, i.e. the gradient information of the image to be reconstructed; taking partial derivatives of this gradient information gives the divergence. For example, partial derivatives of the image gradient information can be taken in the x and y directions.
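A sketch of S608 under an assumed backward-difference discretization (names are illustrative): the x-derivative of the x-gradient plus the y-derivative of the y-gradient gives the divergence:

```python
import numpy as np

# Divergence of the gradient field (gx, gy): partial derivatives in the
# x and y directions by backward differences, then summed per pixel.
def divergence(gx, gy):
    dgx = np.zeros_like(gx, dtype=float)
    dgy = np.zeros_like(gy, dtype=float)
    dgx[:, 1:] = gx[:, 1:] - gx[:, :-1]   # d(gx)/dx
    dgy[1:, :] = gy[1:, :] - gy[:-1, :]   # d(gy)/dy
    return dgx + dgy
```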
In S610, the coefficient matrix in the Poisson reconstruction processing is solved using the image divergence information, so that Poisson reconstruction is performed on the image. The Poisson reconstruction equation is Ax = b, where A is the sparse coefficient matrix of the image and b is the image divergence information; x is solved from this information to carry out the Poisson reconstruction.
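A toy version of the Ax = b solve (a sketch on a 1-D grid with Dirichlet boundaries; the patent operates on a 2-D image with a sparse A, and every name here is assumed):

```python
import numpy as np

# Dense 1-D Poisson coefficient matrix: -2 on the diagonal, 1 on the
# off-diagonals. Known boundary values are folded into the right-hand side b.
def poisson_solve_1d(b, left, right):
    n = len(b)
    A = np.zeros((n, n))
    np.fill_diagonal(A, -2.0)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    rhs = np.asarray(b, dtype=float).copy()
    rhs[0] -= left
    rhs[-1] -= right
    return np.linalg.solve(A, rhs)

# Zero divergence with boundaries 0 and 1 gives the unique linear ramp,
# illustrating the uniqueness of the Poisson solution discussed below.
x = poisson_solve_1d(np.zeros(3), 0.0, 1.0)
```

For image-sized problems the same equation is solved with a sparse matrix rather than a dense one.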
In this application, the problem of unnatural edge transitions is solved with Poisson image fusion. The core mathematical idea of this method is the Poisson partial differential equation with a Dirichlet boundary condition: it specifies the Laplacian of the unknown function over the region of interest, while the values of the unknown function on the boundary of the region are given.
The application uses Poisson reconstruction for two reasons. First, slow gradient variation, which the Laplace operator suppresses, can be superimposed on an image without being noticeable in inconspicuous places; by contrast, the second-order variation extracted by the Laplace operator is the most significant perceptually. Second, a scalar function on a bounded domain is uniquely determined by its values on the boundary and its Laplacian in the interior, so the Poisson equation has a unique solution. Therefore, given a way of constructing the Laplacian of the unknown function over a certain domain together with its boundary conditions, the Poisson equation can solve the problem of seamlessly filling that domain. The algorithmic idea of Poisson fusion is precisely to reconstruct an image via the Poisson equation from the gradient relationship between the background image and the target image; the gradient and divergence of the reconstructed image then characterize the fused image.
The hair images obtained with the image processing method in this application are shown schematically in Figs. 7A-7C. Fig. 7A is the user image, Fig. 7B is the result after image segmentation, and Fig. 7C is the result after hair dyeing. As shown in Figs. 7A-7C, the image processing method in this application lets the user experience, online, the currently popular hair-dyeing effects, presented truthfully: the effect of adjusting the hair color is clearly visible down to individual strands and even the hair roots, simulating a real effect in a lifelike manner to the greatest possible extent. The image processing method in this application thus solves the problems of hair segmentation and hair dyeing very well.
Those skilled in the art will appreciate that all or part of the steps of the above embodiments may be implemented as a computer program executed by a CPU. When executed by a CPU, the computer program performs the functions defined by the above methods provided by the disclosure. The program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk or an optical disk.
Further, it should be noted that the above drawings are only schematic illustrations of the processing included in the methods according to the exemplary embodiments of the disclosure, not limitations. It is readily understood that the processing shown in the drawings does not indicate or limit the temporal order of these operations. It is also readily understood that these operations may be executed, for example, synchronously or asynchronously in multiple modules.
The following are device embodiments of the present disclosure, which can be used to perform the method embodiments of the present disclosure. For details not disclosed in the device embodiments, please refer to the method embodiments of the present disclosure.
Fig. 8 is a block diagram of an image processing apparatus according to an exemplary embodiment. The image processing apparatus 80 includes: an image segmentation module 802, a region coloring module 804, a gradient information module 806, an image reconstruction module 808 and a model module 810.
The image segmentation module 802 is used to perform segmentation processing on an image to obtain a specified region and a background area in the image.
The region coloring module 804 is used to perform coloring processing on the specified region.
The gradient information module 806 is used to calculate the gradient distribution information of the specified region after the coloring processing and of the background area.
The image reconstruction module 808 is used to perform Poisson reconstruction on the image through the gradient distribution information of the specified region and the background area, so as to obtain a display image.
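The chaining of these modules can be summarized in a small sketch (function names and signatures are assumptions for illustration, not the patent's interfaces):

```python
# Hypothetical glue code: how the segmentation, coloring, gradient and
# reconstruction modules of apparatus 80 hand their results to one another.
def process(image, segment, colour, gradients, poisson_reconstruct):
    region, background = segment(image)      # image segmentation module 802
    coloured = colour(region)                # region coloring module 804
    grad = gradients(coloured, background)   # gradient information module 806
    return poisson_reconstruct(image, grad)  # image reconstruction module 808
```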
In an exemplary embodiment of the disclosure, the image processing apparatus 80 further includes: a model module 810 for establishing, through a deep learning algorithm, an image segmentation model used for performing segmentation processing on the image.
Fig. 9 is a block diagram of an electronic device according to an exemplary embodiment.
The electronic device 200 according to this embodiment of the disclosure is described below with reference to Fig. 9. The electronic device 200 shown in Fig. 9 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the disclosure.
As shown in Fig. 9, the electronic device 200 takes the form of a general-purpose computing device. The components of the electronic device 200 may include, but are not limited to: at least one processing unit 210, at least one storage unit 220, a bus 230 connecting the different system components (including the storage unit 220 and the processing unit 210), a display unit 240, and so on.
The storage unit stores program code that can be executed by the processing unit 210, so that the processing unit 210 performs the steps of the various exemplary embodiments of the disclosure described in the method section of this specification. For example, the processing unit 210 may perform the steps shown in Fig. 2, Fig. 3 and Fig. 6.
The storage unit 220 may include a readable medium in the form of a volatile storage unit, such as a random access memory (RAM) unit 2201 and/or a cache memory unit 2202, and may further include a read-only memory (ROM) unit 2203.
The storage unit 220 may also include a program/utility 2204 having a set of (at least one) program modules 2205, such program modules 2205 including but not limited to: an operating system, one or more application programs, other program modules and program data; each or some combination of these examples may include an implementation of a network environment.
The bus 230 may represent one or more of several classes of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
The electronic device 200 may also communicate with one or more external devices 300 (such as a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 200, and/or with any device (such as a router, a modem, etc.) that enables the electronic device 200 to communicate with one or more other computing devices. Such communication can take place through an input/output (I/O) interface 250. Moreover, the electronic device 200 can communicate through a network adapter 260 with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet). The network adapter 260 may communicate with other modules of the electronic device 200 through the bus 230. It should be understood that, although not shown in the drawings, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and so on.
Through the above description of the embodiments, those skilled in the art will readily appreciate that the exemplary embodiments described herein can be realized by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to cause a computing device (which can be a personal computer, a server, or a network device, etc.) to execute the above method according to the embodiments of the disclosure.
Fig. 10 schematically shows a computer-readable storage medium in an exemplary embodiment of the disclosure.
Referring to Fig. 10, a program product 400 for implementing the above method according to an embodiment of the disclosure is described; it can adopt a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the disclosure is not limited thereto. In this document, a readable storage medium can be any tangible medium that contains or stores a program which can be used by, or in connection with, an instruction execution system, apparatus or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The readable signal medium may also be any readable medium other than a readable storage medium, which can send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. The program code contained on the readable medium may be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
The program code for carrying out the operations of the disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In cases involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
The above computer-readable medium carries one or more programs which, when executed by a device, cause the computer-readable medium to implement functions such as: performing segmentation processing on an image to obtain a specified region and a background area in the image; performing coloring processing on the specified region; calculating the gradient distribution information of the specified region after the coloring processing and of the background area; and performing Poisson reconstruction on the image through the gradient distribution information of the specified region and the background area, to obtain a display image.
Those skilled in the art will appreciate that the above modules may be distributed in the apparatus according to the description of the embodiments, or may be changed correspondingly to reside in one or more apparatuses different from this embodiment. The modules of the above embodiments may be merged into one module or further split into multiple submodules.
Through the description of the above embodiments, those skilled in the art will readily appreciate that the exemplary embodiments described herein can be realized by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to cause a computing device (which can be a personal computer, a server, a mobile terminal or a network device, etc.) to execute the method according to the embodiments of the disclosure.
The exemplary embodiments of the disclosure have been particularly shown and described above. It should be understood that the disclosure is not limited to the detailed structures, arrangements or implementation methods described herein; on the contrary, the disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
In addition, the structures, proportions, sizes and the like shown in the drawings of this specification are only intended to accompany the contents disclosed in the specification, for the understanding and reading of those skilled in the art, and are not intended to limit the conditions under which the disclosure can be implemented; they therefore have no essential technical meaning. Any structural modification, change of proportional relationship or adjustment of size that does not affect the technical effects the disclosure can produce and the purposes it can achieve should still fall within the scope covered by the technical contents disclosed herein. Meanwhile, terms such as "upper", "first", "second" and "one" cited in this specification are only for convenience of description and are not intended to limit the implementable scope of the disclosure; changes or adjustments of their relative relationships, without substantive change to the technical content, are also to be regarded as within the implementable scope of the disclosure.

Claims (13)

1. An image processing method, characterized by comprising:
performing segmentation processing on an image to obtain a specified region and a background area in the image;
performing coloring processing on the specified region;
calculating the gradient distribution information of the specified region after the coloring processing and of the background area; and
performing Poisson reconstruction on the image through the gradient distribution information of the specified region and the background area, to obtain a display image.
2. The method according to claim 1, characterized by further comprising:
establishing, through a deep learning algorithm, an image segmentation model for performing segmentation processing on the image;
wherein performing segmentation processing on the image to obtain the specified region and the background area in the image comprises:
inputting the image into the image segmentation model to obtain the specified region and the background area in the image.
3. The method according to claim 2, characterized in that establishing, through a deep learning algorithm, the image segmentation model for performing segmentation processing on the image comprises:
performing image preprocessing on sample images to generate sample data;
labeling the sample data; and
training a deep learning algorithm model with the sample data to obtain the image segmentation model.
4. The method according to claim 3, characterized in that labeling the sample data comprises:
performing polygonal labeling on the sample data through labelme.
5. The method according to claim 3, characterized in that performing image preprocessing on the sample images comprises:
performing image augmentation processing on the sample images.
6. The method according to claim 3, characterized in that the deep learning algorithm model comprises a fully convolutional neural network model;
and training the deep learning algorithm model with the sample data to obtain the image segmentation model comprises:
inputting the sample data into the fully convolutional neural network model;
processing, by the fully convolutional neural network model, the sample data through 8 convolutional layers to obtain convolution data; and
determining the image segmentation model from the labels of the sample data and the convolution data.
7. The method according to claim 1, characterized in that calculating the gradient distribution information of the specified region after the coloring processing and of the background area comprises:
obtaining first gradient distribution information of the specified region after the coloring processing by a Laplace transform;
obtaining second gradient distribution information of the background area by a Laplace transform; and
obtaining the gradient distribution information from the first gradient distribution information and the second gradient distribution information.
8. The method according to claim 7, characterized in that obtaining the gradient distribution information from the first gradient distribution information and the second gradient distribution information comprises:
laying the first gradient distribution information as a mask over the second gradient distribution information to obtain the gradient distribution information.
9. The method according to claim 1, characterized in that performing Poisson reconstruction on the image through the gradient distribution information of the specified region and the background area comprises:
obtaining image divergence information from the gradient distribution information; and
solving the coefficient matrix in the Poisson reconstruction processing by means of the image divergence information, to perform Poisson reconstruction on the image.
10. An image processing apparatus, characterized by comprising:
an image segmentation module for performing segmentation processing on an image to obtain a specified region and a background area in the image;
a region coloring module for performing coloring processing on the specified region;
a gradient information module for calculating the gradient distribution information of the specified region after the coloring processing and of the background area; and
an image reconstruction module for performing Poisson reconstruction on the image through the gradient distribution information of the specified region and the background area, to obtain a display image.
11. The apparatus according to claim 10, characterized by further comprising:
a model module for establishing, through a deep learning algorithm, an image segmentation model for performing segmentation processing on the image.
12. An electronic device, characterized by comprising:
one or more processors; and
a storage device for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method according to any one of claims 1-9.
13. A computer-readable medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-9.
CN201810290778.9A 2018-04-03 2018-04-03 Image processing method, device, electronic equipment and computer-readable medium Pending CN110349165A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810290778.9A CN110349165A (en) 2018-04-03 2018-04-03 Image processing method, device, electronic equipment and computer-readable medium


Publications (1)

Publication Number Publication Date
CN110349165A true CN110349165A (en) 2019-10-18

Family

ID=68172820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810290778.9A Pending CN110349165A (en) 2018-04-03 2018-04-03 Image processing method, device, electronic equipment and computer-readable medium

Country Status (1)

Country Link
CN (1) CN110349165A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080226190A1 (en) * 2007-03-16 2008-09-18 Massachusetts Institute Of Technology System and method for providing gradient preservation for image processing
US20160154993A1 (en) * 2014-12-01 2016-06-02 Modiface Inc. Automatic segmentation of hair in images
CN105719234A (en) * 2016-01-26 2016-06-29 厦门美图之家科技有限公司 Automatic gloss removing method and system for face area and shooting terminal
WO2016121329A1 (en) * 2015-01-29 2016-08-04 パナソニックIpマネジメント株式会社 Image processing device, stylus, and image processing method
CN105894470A (en) * 2016-03-31 2016-08-24 北京奇艺世纪科技有限公司 Image processing method and device
CN106203399A (en) * 2016-07-27 2016-12-07 厦门美图之家科技有限公司 A kind of image processing method, device and calculating equipment
EP3139341A1 (en) * 2015-09-02 2017-03-08 Thomson Licensing Methods, systems and apparatus for specular highlight reconstruction


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MICHAEL KAZHDAN et al.: "Poisson Surface Reconstruction", Eurographics Symposium on Geometry Processing, pages 1-10
LING Rui: "Research and Implementation of Fast GPU-Based 3D Face Reconstruction", China Master's Theses Full-text Database (Information Science and Technology)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967338A (en) * 2019-12-13 2021-06-15 宏达国际电子股份有限公司 Image processing system and image processing method
CN112967338B (en) * 2019-12-13 2024-05-31 宏达国际电子股份有限公司 Image processing system and image processing method
CN111402114A (en) * 2020-03-10 2020-07-10 云南大学 Wax printing multi-dyeing method based on convolutional neural network
CN111402114B (en) * 2020-03-10 2023-03-24 云南大学 Wax printing multi-dyeing method based on convolutional neural network
CN113888560A (en) * 2021-09-29 2022-01-04 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for processing image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination