CN108665475A - Neural network training, image processing method, device, storage medium and electronic equipment - Google Patents

Neural network training, image processing method, device, storage medium and electronic equipment

Info

Publication number
CN108665475A
CN108665475A (application CN201810463674.3A)
Authority
CN
China
Prior art keywords
segmentation result
convolution
image
training
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810463674.3A
Other languages
Chinese (zh)
Inventor
王嘉 (Wang Jia)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201810463674.3A priority Critical patent/CN108665475A/en
Publication of CN108665475A publication Critical patent/CN108665475A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present invention provide a neural network training method, an image processing method, devices, a computer-readable storage medium and an electronic device. The neural network training method includes: performing first convolution processing on a training sample image through a neural network system to obtain a first segmentation result of the foreground-background segmentation of the training sample image; performing second convolution processing on the first segmentation result to obtain a second segmentation result of the foreground-background segmentation of the training sample image; and training the neural network system according to the difference between a labelled segmentation result of the foreground-background segmentation of the training sample image and the first segmentation result, and the difference between the labelled segmentation result and the second segmentation result. With the technical solution of the embodiments of the present invention, the accuracy of foreground-background segmentation can be effectively improved while a high processing speed is ensured, thereby effectively improving the effect of monocular bokeh processing.

Description

Neural network training, image processing method, device, storage medium and electronic equipment
Technical field
Embodiments of the present invention relate to the technical field of computer vision, and in particular to a neural network training method, device, computer-readable storage medium and electronic device, and to an image processing method, device, computer-readable storage medium and electronic device.
Background technology
Background blurring of an image makes the photographed subject stand out clearly, and is therefore much loved by photography enthusiasts. Because the blurring effect mainly relies on optical imaging principles and is realised in hardware with a large lens aperture, the blurring function has mainly been integrated into professional cameras such as SLR cameras. Mobile terminal devices such as smartphones and tablet computers are limited in thickness and can only be fitted with small-aperture lenses, so when a user takes a picture with a mobile terminal device, the resulting image has no blurring effect, or only a faint one.
Currently, single-lens background blurring technology (monocular bokeh technology) enables a mobile terminal device with a single lens to simulate the background-blurred photographic effect of an SLR camera. Applying monocular bokeh technology requires foreground-background segmentation of the image in order to extract the target foreground region or the background region. However, existing foreground-background segmentation techniques suffer from either low segmentation accuracy or low processing speed; where processing speed is constrained, segmentation accuracy is also limited, and the final monocular bokeh effect is poor.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a neural network training technique and an image processing technique.
According to a first aspect of the embodiments of the present invention, a neural network training method is provided, including: performing first convolution processing on a training sample image through a neural network system to obtain a first segmentation result of the foreground-background segmentation of the training sample image; performing second convolution processing on the first segmentation result to obtain a second segmentation result of the foreground-background segmentation of the training sample image; and training the neural network system according to the difference between a labelled segmentation result of the foreground-background segmentation of the training sample image and the first segmentation result, and the difference between the labelled segmentation result and the second segmentation result.
Optionally, training the neural network system according to the difference between the labelled segmentation result of the foreground-background segmentation of the training sample image and the first segmentation result, and the difference between the labelled segmentation result and the second segmentation result, includes: obtaining first difference data between the labelled segmentation result and the first segmentation result, and second difference data between the labelled segmentation result and the second segmentation result; and adjusting the network parameters of the neural network system according to the first difference data and the second difference data.
Optionally, the neural network system includes a first convolution sub-network and a second convolution sub-network; the first convolution processing is performed on the training sample image through the first convolution sub-network to obtain the first segmentation result, and the second convolution processing is performed on the first segmentation result through the second convolution sub-network to obtain the second segmentation result.
Optionally, the first convolution sub-network includes multiple convolutional layers located at different depths; performing the second convolution processing on the first segmentation result includes: obtaining a shallow-layer output result of a convolutional layer at a first depth among the multiple convolutional layers and/or a deep-layer output result of a convolutional layer at a second depth, wherein the first depth is less than the second depth; merging the first segmentation result with the shallow-layer output result and/or the deep-layer output result to obtain a first merged result; and performing the second convolution processing on the first merged result.
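The merging step described above can be sketched as channel-wise concatenation of feature maps. A minimal NumPy sketch, assuming feature maps laid out as (channels, height, width) and a deep-layer output at a lower spatial resolution that is upsampled by nearest-neighbour repetition before fusion; all names and shapes here are illustrative and not taken from the patent:

```python
import numpy as np

def fuse_for_second_conv(seg1, shallow, deep):
    """Concatenate the first segmentation result with the shallow-layer
    output and the (upsampled) deep-layer output along the channel axis,
    producing the merged input fed to the second convolution processing."""
    # nearest-neighbour upsample the deep features to the shallow resolution
    fh = shallow.shape[1] // deep.shape[1]
    fw = shallow.shape[2] // deep.shape[2]
    deep_up = deep.repeat(fh, axis=1).repeat(fw, axis=2)
    return np.concatenate([seg1, shallow, deep_up], axis=0)

merged = fuse_for_second_conv(
    np.zeros((1, 8, 8)),   # first segmentation result (1 channel)
    np.zeros((4, 8, 8)),   # shallow-layer output (first depth)
    np.zeros((8, 4, 4)),   # deep-layer output (second depth, half resolution)
)
print(merged.shape)  # (13, 8, 8)
```

The concatenated tensor keeps the shallow spatial resolution, so the second convolution sub-network can exploit both fine detail and abstract context.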
Optionally, the neural network system includes at least two second convolution sub-networks located at different depths; the second convolution processing is performed on the first segmentation result through the second convolution sub-network at the smallest depth among the at least two second convolution sub-networks, obtaining the second segmentation result; new second convolution processing is performed, through a second convolution sub-network at a third depth, on the second segmentation result obtained by a second convolution sub-network at a fourth depth, obtaining a new second segmentation result, wherein the third depth is greater than the fourth depth. In this case, training the neural network system includes: training the neural network system according to the difference between the labelled segmentation result of the foreground-background segmentation of the training sample image and the first segmentation result, the difference between the labelled segmentation result and the second segmentation result, and the difference between the labelled segmentation result and the new second segmentation result.
Optionally, performing the new second convolution processing, through the second convolution sub-network at the third depth, on the second segmentation result obtained by the second convolution sub-network at the fourth depth includes: merging the second segmentation result obtained by the second convolution sub-network at the fourth depth with the shallow-layer output result and/or the deep-layer output result to obtain a second merged result; and performing the new second convolution processing on the second merged result through the second convolution sub-network at the third depth.
Optionally, training the neural network system according to the three differences above includes: obtaining first difference data between the labelled segmentation result and the first segmentation result, second difference data between the labelled segmentation result and the second segmentation result, and third difference data between the labelled segmentation result and the new second segmentation result; and adjusting the network parameters of the neural network system according to the first difference data, the second difference data and the third difference data.
According to a second aspect of the embodiments of the present invention, an image processing method is provided, including: inputting an image to be processed into a neural network system, and performing foreground-background segmentation processing on the image to be processed through the neural network system; obtaining the background part of the image to be processed after the foreground-background segmentation processing, and performing segmented blurring processing on the background part.
Optionally, performing foreground-background segmentation processing on the image to be processed through the neural network system includes: performing first convolution processing on the image to be processed through the neural network system to obtain a first segmentation result of the foreground-background segmentation of the image to be processed; performing second convolution processing on the first segmentation result to obtain a second segmentation result of the foreground-background segmentation of the image to be processed; and performing foreground-background segmentation processing on the image to be processed according to the second segmentation result, so as to perform processing related to the foreground-background segmentation on the image to be processed.
Optionally, the neural network system is a neural network system obtained by training with the neural network training method as described in any one of claims 1 to 7.
Optionally, after the foreground-background segmentation processing is performed on the image to be processed through the neural network system, the method further includes: performing guided filtering processing and/or alpha matting processing on the image to be processed after the foreground-background segmentation processing.
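Refinement steps such as guided filtering or alpha matting typically produce a soft foreground probability map (a matte) rather than a hard mask. One common way such a matte is then used for bokeh is to alpha-blend the sharp image with a blurred copy; the patent does not spell this blending step out, so the following NumPy sketch is an illustration under that assumption:

```python
import numpy as np

def composite_bokeh(image, blurred, alpha):
    """Blend the sharp image with its blurred copy: alpha near 1 keeps
    the sharp foreground, alpha near 0 shows the blurred background."""
    return alpha * image + (1.0 - alpha) * blurred

sharp = np.full((4, 4), 10.0)
blur = np.full((4, 4), 2.0)
alpha = np.zeros((4, 4))
alpha[:2] = 1.0  # top half is foreground
out = composite_bokeh(sharp, blur, alpha)
print(out[0, 0], out[3, 0])  # 10.0 2.0
```

A soft matte makes the transition between sharp and blurred regions gradual, which is why boundary refinement improves the perceived bokeh quality.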
Optionally, performing segmented blurring processing on the background part includes: blurring each pixel in the background part according to the distance from that pixel to a designated edge of the image to be processed.
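The edge-distance-dependent blur described above can be sketched as a per-row box filter whose radius grows with each row's distance from the designated edge. A minimal NumPy sketch for a single-channel image, assuming the bottom edge is the designated edge and a linear radius schedule (both are illustrative assumptions, since the patent does not fix them):

```python
import numpy as np

def depth_of_field_blur(image, max_radius=4):
    """Blur each row with a box filter whose radius grows with the
    row's distance from the bottom edge (the assumed designated edge)."""
    h, w = image.shape
    out = image.astype(float).copy()
    for y in range(h):
        # distance from the bottom edge, normalised to [0, 1]
        d = (h - 1 - y) / max(h - 1, 1)
        r = int(round(d * max_radius))
        if r == 0:
            continue  # pixels at the designated edge stay sharp
        padded = np.pad(image[y].astype(float), r, mode="edge")
        kernel = np.ones(2 * r + 1) / (2 * r + 1)
        out[y] = np.convolve(padded, kernel, mode="valid")
    return out

img = np.zeros((8, 8))
img[:, 4] = 8.0
out = depth_of_field_blur(img)
print(out[7, 4], round(out[0, 4], 3))  # 8.0 0.889
```

Varying the blur strength with distance imitates a real lens, whose defocus increases away from the focal plane, rather than applying one uniform blur to the whole background.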
According to a third aspect of the embodiments of the present invention, a neural network training device is provided, including: a first segmentation module, configured to perform first convolution processing on a training sample image through a neural network system to obtain a first segmentation result of the foreground-background segmentation of the training sample image, and to perform second convolution processing on the first segmentation result to obtain a second segmentation result of the foreground-background segmentation of the training sample image; and a training module, configured to train the neural network system according to the difference between a labelled segmentation result of the foreground-background segmentation of the training sample image and the first segmentation result, and the difference between the labelled segmentation result and the second segmentation result.
Optionally, the training module is configured to obtain first difference data between the labelled segmentation result and the first segmentation result, and second difference data between the labelled segmentation result and the second segmentation result, and to adjust the network parameters of the neural network system according to the first difference data and the second difference data.
Optionally, the neural network system includes a first convolution sub-network and a second convolution sub-network; the segmentation module includes: a first segmentation unit, configured to perform the first convolution processing on the training sample image through the first convolution sub-network to obtain the first segmentation result; and a second segmentation unit, configured to perform the second convolution processing on the first segmentation result through the second convolution sub-network to obtain the second segmentation result.
Optionally, the first convolution sub-network includes multiple convolutional layers located at different depths; the segmentation module further includes: a first merging unit, configured to obtain a shallow-layer output result of a convolutional layer at a first depth among the multiple convolutional layers and/or a deep-layer output result of a convolutional layer at a second depth, wherein the first depth is less than the second depth, and to merge the first segmentation result with the shallow-layer output result and/or the deep-layer output result to obtain a first merged result; the second segmentation unit is configured to perform the second convolution processing on the first merged result.
Optionally, the neural network system includes at least two second convolution sub-networks located at different depths; the second segmentation unit is configured to perform the second convolution processing on the first segmentation result through the second convolution sub-network at the smallest depth among the at least two second convolution sub-networks to obtain the second segmentation result; the first segmentation module further includes a third segmentation unit, configured to perform new second convolution processing, through a second convolution sub-network at a third depth, on the second segmentation result obtained by a second convolution sub-network at a fourth depth, to obtain a new second segmentation result, wherein the third depth is greater than the fourth depth; the training module is configured to train the neural network system according to the difference between the labelled segmentation result of the foreground-background segmentation of the training sample image and the first segmentation result, the difference between the labelled segmentation result and the second segmentation result, and the difference between the labelled segmentation result and the new second segmentation result.
Optionally, the first segmentation module further includes a second merging unit, configured to merge the second segmentation result obtained by the second convolution sub-network at the fourth depth with the shallow-layer output result and/or the deep-layer output result to obtain a second merged result; the third segmentation unit is configured to perform the new second convolution processing on the second merged result through the second convolution sub-network at the third depth.
Optionally, the training module is configured to obtain first difference data between the labelled segmentation result and the first segmentation result, second difference data between the labelled segmentation result and the second segmentation result, and third difference data between the labelled segmentation result and the new second segmentation result, and to adjust the network parameters of the neural network system according to the first difference data, the second difference data and the third difference data.
According to a fourth aspect of the embodiments of the present invention, an image processing device is provided, including: a second segmentation module, configured to input an image to be processed into a neural network system and perform foreground-background segmentation processing on the image to be processed through the neural network system; and a blurring module, configured to obtain the background part of the image to be processed after the foreground-background segmentation processing and perform segmented blurring processing on the background part.
Optionally, the second segmentation module includes: a third segmentation unit, configured to perform first convolution processing on the image to be processed through the neural network system to obtain a first segmentation result of the foreground-background segmentation of the image to be processed; a fourth segmentation unit, configured to perform second convolution processing on the first segmentation result to obtain a second segmentation result of the foreground-background segmentation of the image to be processed; and a fifth segmentation unit, configured to perform foreground-background segmentation processing on the image to be processed according to the second segmentation result, so as to perform processing related to the foreground-background segmentation on the image to be processed.
Optionally, the neural network system is a neural network system obtained by training with any of the neural network training devices provided by the embodiments of the present invention.
Optionally, the device further includes: a processing module, configured to perform guided filtering processing and/or alpha matting processing on the image to be processed after the foreground-background segmentation processing.
Optionally, the blurring module is configured to blur each pixel in the background part according to the distance from that pixel to a designated edge of the image to be processed.
According to a fifth aspect of the embodiments of the present invention, a computer program is provided, including computer program instructions which, when executed by a processor, implement the steps corresponding to any neural network training method provided by the embodiments of the present invention.
According to a sixth aspect of the embodiments of the present invention, a computer program is provided, including computer program instructions which, when executed by a processor, implement the steps corresponding to any image processing method provided by the embodiments of the present invention.
According to a seventh aspect of the embodiments of the present invention, a computer-readable storage medium is provided, on which computer program instructions are stored which, when executed by a processor, implement the steps corresponding to any neural network training method provided by the embodiments of the present invention.
According to an eighth aspect of the embodiments of the present invention, a computer-readable storage medium is provided, on which computer program instructions are stored which, when executed by a processor, implement the steps corresponding to any image processing method provided by the embodiments of the present invention.
According to a ninth aspect of the embodiments of the present invention, an electronic device is provided, including: a processor, a memory, a communication element and a communication bus, where the processor, the memory and the communication element communicate with each other through the communication bus; the memory is used to store at least one executable instruction, and the executable instruction causes the processor to execute the steps corresponding to any neural network training method provided by the embodiments of the present invention.
According to a tenth aspect of the embodiments of the present invention, an electronic device is provided, including: a processor, a memory, a communication element and a communication bus, where the processor, the memory and the communication element communicate with each other through the communication bus; the memory is used to store at least one executable instruction, and the executable instruction causes the processor to execute the steps corresponding to any image processing method provided by the embodiments of the present invention.
According to the image processing scheme of the embodiments of the present invention, when training the neural network, preliminary convolution processing is performed on the training sample image, and further convolution processing is performed on the resulting preliminary segmentation result to obtain an optimised segmentation result of the foreground-background segmentation of the training sample image; the neural network system is then trained according to the differences between the labelled segmentation result of the training sample image and the preliminary and optimised segmentation results. By performing such two-stage convolution processing on an image to be processed through the neural network system, the segmentation result is further optimised on the basis of the preliminary segmentation result, which can effectively improve the accuracy of foreground-background segmentation while ensuring a high processing speed; performing monocular bokeh processing on this basis can effectively improve the blurring effect.
Description of the drawings
Fig. 1 is a flowchart of a neural network training method according to Embodiment 1 of the present invention;
Fig. 2 is a flowchart of a neural network training method according to Embodiment 2 of the present invention;
Fig. 3 is a schematic diagram of a neural network training system provided by Embodiment 2 of the present invention;
Fig. 4 is a flowchart of an image processing method according to Embodiment 3 of the present invention;
Fig. 5 is a first structural block diagram of a neural network training device according to Embodiment 4 of the present invention;
Fig. 6 is a second structural block diagram of the neural network training device according to Embodiment 4 of the present invention;
Fig. 7 is a structural block diagram of an image processing device according to Embodiment 5 of the present invention;
Fig. 8 is a structural schematic diagram of an electronic device according to Embodiment 6 of the present invention;
Fig. 9 is a structural schematic diagram of an electronic device according to Embodiment 7 of the present invention.
Detailed description of the embodiments
The specific implementations of the embodiments of the present invention are described in further detail below with reference to the accompanying drawings (identical reference numerals in the several drawings denote identical elements) and the embodiments. The following embodiments are intended to illustrate the present invention, not to limit its scope.
Those skilled in the art will understand that terms such as "first" and "second" in the embodiments of the present invention are only used to distinguish different steps, devices or modules, and represent neither any particular technical meaning nor any necessary logical order between them.
Embodiment 1
Fig. 1 is a flowchart of the neural network training method according to Embodiment 1 of the present invention.
Referring to Fig. 1, in step S110, first convolution processing is performed on a training sample image through a neural network system to obtain a first segmentation result of the foreground-background segmentation of the training sample image; and second convolution processing is performed on the first segmentation result to obtain a second segmentation result of the foreground-background segmentation of the training sample image.
In the embodiments of the present invention, the training sample image is used to train a neural network system for performing foreground-background segmentation processing on images. The training sample image can be an image captured by any camera in any scene. Optionally, the neural network system is a deep neural network system. The specific structure of the neural network system, such as the number of convolutional layers and the size of the convolution kernels in a deep convolutional neural network, can be set appropriately by those skilled in the art according to actual needs; the embodiments of the present invention place no restriction on this.
According to an exemplary embodiment of the present invention, after the training sample image is input into the neural network system, first convolution processing is performed on the training sample image through the neural network system to obtain the first segmentation result. Here, the first convolution processing may include multiple convolution operations; for example, multiple cascaded convolutional layers each perform a convolution operation, together forming the first convolution processing, which carries out feature extraction, feature learning and so on on the training sample image. The first segmentation result indicates the foreground part (foreground region) and/or the background part (background region) of the training sample image. For example, the first segmentation result may specifically be a matrix marking, for each pixel of the training sample image, the probability that the pixel belongs to the foreground part or the background part. According to the first segmentation result, segmentation processing (foreground-background segmentation) of the foreground part and the background region can be performed on the training sample image.
After the first segmentation result is obtained, second convolution processing is performed on the first segmentation result through the neural network system. Here, the second convolution processing may include the same operations as the first convolution processing, and the second segmentation result may have the same form of expression as the first segmentation result. The second convolution processing performs convolution processing on the first segmentation result again, carrying out deeper feature learning and extraction on the image, so as to obtain a second segmentation result of higher accuracy.
In other words, the first convolution processing is a preliminary foreground-background segmentation of the training sample image, and the first segmentation result is a preliminary segmentation result; the second convolution processing is a further optimisation of the preliminary segmentation result, and the second segmentation result is an optimised segmentation result.
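The two stages above can be sketched end to end. In the following minimal NumPy sketch each "convolution processing" is a single 3x3 filtering step followed by a sigmoid, and the second stage sees the first result stacked with the input image; this is a toy stand-in for the cascaded convolutional layers of a real implementation, with all shapes and weights purely illustrative:

```python
import numpy as np

def conv2d(x, w):
    """Naive 'same' 3x3 cross-correlation: x is (Cin, H, W), w is (Cout, Cin, 3, 3)."""
    cin, h, wd = x.shape
    cout = w.shape[0]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
    out = np.zeros((cout, h, wd))
    for o in range(cout):
        for c in range(cin):
            for i in range(3):
                for j in range(3):
                    out[o] += w[o, c, i, j] * xp[c, i:i + h, j:j + wd]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def two_stage_segment(image, w1, w2):
    """First convolution: image -> coarse per-pixel foreground probability.
    Second convolution: coarse map stacked with the image -> refined map."""
    s1 = sigmoid(conv2d(image, w1))                              # first segmentation result
    s2 = sigmoid(conv2d(np.concatenate([image, s1]), w2))        # second, refined result
    return s1, s2

rng = np.random.default_rng(0)
img = rng.random((3, 8, 8))                    # 3-channel toy image
w1 = 0.1 * rng.standard_normal((1, 3, 3, 3))   # stage-1 weights
w2 = 0.1 * rng.standard_normal((1, 4, 3, 3))   # stage-2 weights (image + s1 channels)
s1, s2 = two_stage_segment(img, w1, w2)
print(s1.shape, s2.shape)  # (1, 8, 8) (1, 8, 8)
```

The key design point the sketch illustrates is that the second stage does not segment from scratch: it consumes the preliminary probability map as an extra input channel and only has to correct it, which is why the refinement can stay cheap.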
In step S120, the neural network system is trained according to the difference between the labelled segmentation result of the foreground-background segmentation of the training sample image and the first segmentation result, and the difference between the labelled segmentation result and the second segmentation result.
Optionally, first difference data between the labelled segmentation result and the first segmentation result, and second difference data between the labelled segmentation result and the second segmentation result, are obtained, so that the network parameters of the neural network system are adjusted according to the first difference data and the second difference data. For example, according to a preset loss function or deviation function, the loss value or deviation between the labelled segmentation result and the first segmentation result, and the loss value or deviation between the labelled segmentation result and the second segmentation result, are calculated, and the network parameters of the neural network system are adjusted according to the calculated loss values or deviations.
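The two difference terms can be combined into a single training loss. A minimal NumPy sketch using per-pixel binary cross-entropy as the "preset loss function" (the patent does not commit to a specific function, so this choice is an assumption), supervising both the preliminary and the optimised segmentation results against the labelled mask:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy between a probability map and a 0/1 mask."""
    p = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def two_stage_loss(gt, s1, s2):
    """Sum the per-stage losses so that both the coarse (s1) and refined
    (s2) segmentation results are supervised against the labelled mask."""
    return bce(s1, gt) + bce(s2, gt)

gt = np.zeros((4, 4))              # labelled mask: all background
coarse = np.full((4, 4), 0.3)      # first segmentation result
refined = np.full((4, 4), 0.1)     # second segmentation result
print(round(two_stage_loss(gt, coarse, refined), 3))  # 0.462
```

Supervising the intermediate result directly (rather than only the final output) gives the first convolution sub-network its own gradient signal, a form of deep supervision that helps both stages converge.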
In practical applications, the above training steps can be repeated on multiple identical or different training sample images, adjusting the network parameters of the neural network system until the neural network system converges well.
The trained neural network system can be used for foreground-background segmentation processing of images: a preliminary segmentation result is obtained by performing first convolution processing on the image to be processed through the neural network system, and further second convolution processing is performed on the preliminary segmentation result, so that the resulting optimised segmentation result has higher segmentation accuracy. Moreover, when foreground-background segmentation processing is performed through the neural network system, a high processing speed can be ensured by controlling the specific operation steps included in the second convolution processing, thereby achieving high-accuracy foreground-background segmentation even when processing speed is constrained.
According to the neural network training method of Embodiment 1, preliminary convolution processing is performed on the training sample image to obtain a preliminary segmentation result of its foreground-background segmentation, and further convolution processing is performed on the preliminary segmentation result to obtain an optimised segmentation result; the neural network system is then trained according to the differences between the labelled segmentation result of the training sample image and the preliminary and optimised segmentation results. A neural network system trained with the neural network training method of this embodiment performs two-stage convolution processing on an image and further optimises the segmentation result on the basis of the preliminary segmentation result, which can effectively improve the accuracy of foreground-background segmentation while ensuring a high processing speed.
The neural network training method of this embodiment may be executed by any suitable device having corresponding image and data processing capabilities, including but not limited to terminal devices such as computers, as well as computer programs, processors, and the like integrated on the terminal devices.
Embodiment two
Fig. 2 is a flowchart of a neural network training method according to Embodiment Two of the present invention.
With reference to Fig. 2, in step S210, first convolution processing is performed on a training sample image by a first convolution sub-network of the neural network system to obtain a first segmentation result of the foreground-background segmentation of the training sample image.
In the embodiment of the present invention, the neural network system is used to perform foreground-background segmentation on images, and includes a first convolution sub-network for performing the first convolution processing on an image to obtain the first segmentation result. Here, the first convolution sub-network may include multiple convolutional layers, and the first convolution processing includes the convolution operations respectively executed by these convolutional layers. In the first convolution sub-network, specific parameters such as the number of convolutional layers, the size of the convolution kernels (filters), the number of channels, and the positions of down-sampling may be appropriately set by those skilled in the art according to actual demands, which is not limited by the embodiment of the present invention.
In an optional embodiment, the multiple convolutional layers included in the first convolution sub-network are located at different depths of the neural network system. As shown in Fig. 3, multiple cascaded Conv (Convolution) layers are provided in the neural network system. After the training sample image is input into the neural network system, convolution operations are executed by the multiple convolutional layers respectively, where the convolution operations executed by the multiple convolutional layers are used to extract features at different levels of abstraction from the training sample image.
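A minimal PyTorch sketch of such a first convolution sub-network is shown below. The layer counts, channel widths, kernel sizes, and down-sampling position are illustrative assumptions, not the configuration claimed in this patent; the sketch only shows the structure described above — cascaded convolutional layers at increasing depth, whose shallow and deep outputs are both retained for later merging.

```python
import torch
import torch.nn as nn

class FirstConvSubNetwork(nn.Module):
    """Stage-one sub-network: cascaded conv layers at different depths.
    Returns a coarse segmentation map plus the shallow-layer and
    deep-layer output results used later for merging (illustrative)."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.shallow = nn.Sequential(          # first depth: texture/material features
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.deep = nn.Sequential(             # second depth: abstract subject features
            nn.MaxPool2d(2),                   # down-sampling position (assumed)
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False))
        self.head = nn.Conv2d(32, 1, 1)        # 1-channel first segmentation result

    def forward(self, x):
        s = self.shallow(x)                    # shallow-layer output result
        d = self.deep(s)                       # deep-layer output result
        seg1 = torch.sigmoid(self.head(d))     # first segmentation result
        return seg1, s, d

net = FirstConvSubNetwork()
img = torch.rand(1, 3, 64, 64)                 # dummy training sample image
seg1, shallow_out, deep_out = net(img)
```

Returning the intermediate feature maps alongside the segmentation map is one simple way to make the shallow and deep outputs available for the merging step that follows.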
In step S220, a shallow-layer output result and/or a deep-layer output result of the first convolution sub-network are obtained, and the first segmentation result is merged with the shallow-layer output result and/or the deep-layer output result to obtain a first merged result.
The shallow-layer output result is the output of a shallow convolutional layer located at a first depth among the multiple convolutional layers of the first convolution sub-network, and the deep-layer output result is the output of a deep convolutional layer located at a second depth, where the first depth is smaller than the second depth. That is, the shallow convolutional layer may be any one of the shallower convolutional layers among the multiple convolutional layers, and the deep convolutional layer may be any one of the deeper convolutional layers.
According to an exemplary embodiment of the present invention, before the second convolution processing is performed on the obtained first segmentation result, the first segmentation result is merged with the shallow-layer output result and/or the deep-layer output result to obtain the first merged result. The shallow-layer output result is the output of a shallow convolutional layer in the first convolution sub-network and contains features such as image texture and material extracted from the training sample image; the deep-layer output result is the output of a deep convolutional layer in the first convolution sub-network and contains more abstract, high-level information such as subject details extracted from the training sample image. Preferably, both the shallow-layer output result and the deep-layer output result are merged with the first segmentation result, so that the first merged result includes both the detail information and the overall semantic information of the training sample image.
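The merging processing described above can be realized in several ways; one simple assumed realization, sketched below, is channel-wise concatenation of the first segmentation result with the shallow-layer and deep-layer output results (the patent leaves the concrete merge operation open).

```python
import torch

# Channel-wise concatenation as one possible merging processing:
# the merged tensor carries the coarse segmentation map together with
# the detail features (shallow) and semantic features (deep).
seg1 = torch.rand(1, 1, 64, 64)          # first segmentation result
shallow_out = torch.rand(1, 16, 64, 64)  # texture/material features
deep_out = torch.rand(1, 32, 64, 64)     # abstract subject features

merged = torch.cat([seg1, shallow_out, deep_out], dim=1)  # first merged result
```

Concatenation requires the three tensors to share spatial dimensions, which is why the sketch of the first sub-network up-samples its deep features back to the input resolution.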
Fig. 3 shows one merging processing manner, in which the first segmentation result, together with the shallow-layer output result and/or the deep-layer output result, is input into the second convolution sub-network (Refine Conv), and the merging processing is executed by the second convolution sub-network. In practical applications, the merging processing may instead be executed first to obtain the first merged result, and the first merged result may then be input into the second convolution sub-network for the second convolution processing.
In step S230, second convolution processing is performed on the first merged result by at least one second convolution sub-network of the neural network system to obtain a second segmentation result of the foreground-background segmentation of the training sample image.
In the embodiment of the present invention, the first convolution sub-network of the neural network system is the convolutional network of the first stage, used for performing the first convolution processing that yields a preliminary foreground-background segmentation of the training sample image; the at least one second convolution sub-network is the convolutional network of the second stage, used for performing the second convolution processing that optimizes the foreground-background segmentation accuracy of the first segmentation result.
The neural network system shown in Fig. 3 includes one second convolution sub-network, which performs the second convolution processing on the first merged result to obtain the second segmentation result. Since the first merged result includes both the detail information and the overall semantic information of the training sample image, performing the second convolution processing on the basis of the first merged result can further improve the accuracy of the second segmentation result.
Optionally, when the neural network system includes at least two second convolution sub-networks, the second convolution sub-network located at the smallest depth among the at least two second convolution sub-networks performs the second convolution processing on the first segmentation result to obtain the second segmentation result. Here, the second convolution sub-network located at the smallest depth may be the second convolution sub-network shown in Fig. 3, used for performing the second convolution processing on the first segmentation result for the first time.
Further, a second convolution sub-network located at a third depth performs new second convolution processing on the second segmentation result obtained by a second convolution sub-network located at a fourth depth, to obtain a new second segmentation result, where the third depth is greater than the fourth depth. That is, a second convolution sub-network located at a greater depth performs the second convolution processing again on the second segmentation result obtained by a second convolution sub-network located at a smaller depth, so as to obtain a new second segmentation result of higher accuracy. Here, the second segmentation result output by the second convolution sub-network located at the greatest depth is the final foreground-background segmentation result.
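The cascade of second convolution sub-networks described above can be sketched as a chain of small refinement networks, each re-processing the segmentation map produced by the previous one. The two-layer structure and channel widths below are illustrative assumptions; only the cascading pattern follows the text.

```python
import torch
import torch.nn as nn

class RefineConv(nn.Module):
    """One second convolution sub-network ("Refine Conv"): refines an
    incoming segmentation map. Layer sizes are illustrative assumptions."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, seg):
        return self.body(seg)

# Cascade: each deeper RefineConv re-processes the second segmentation
# result of the shallower one; the last output is the final result.
refiners = nn.ModuleList([RefineConv(), RefineConv()])
seg = torch.rand(1, 1, 64, 64)   # first segmentation result (dummy)
for refine in refiners:
    seg = refine(seg)            # new second segmentation result each pass
final_result = seg
```

The length of `refiners` corresponds to the number of second convolution sub-networks, which, as noted below, trades accuracy against processing time.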
Optionally, the shallow-layer output result and/or the deep-layer output result of the first convolution sub-network are obtained, and the second segmentation result obtained by the second convolution sub-network located at the fourth depth is merged with the shallow-layer output result and/or the deep-layer output result to obtain a second merged result; the second convolution sub-network located at the third depth then performs the new second convolution processing on the second merged result, thereby implementing second convolution processing that further optimizes the foreground-background segmentation accuracy of the second segmentation result. Here, the obtained deep-layer output result may also be the second segmentation result output by a second convolution sub-network located at a smaller depth (smaller than the fourth depth).
It should be noted here that the number of second convolution sub-networks may be set according to the actual processing speed demand of the foreground-background segmentation processing. For example, if a higher processing speed is demanded, only two or three second convolution sub-networks may be set, so that the processing time does not increase excessively and the processing speed is ensured.
That is, after the first convolution sub-network performs preliminary segmentation processing on the training sample image to obtain the preliminary segmentation result (the first segmentation result), one second convolution sub-network may be set to perform optimization processing on the preliminary segmentation result to improve the segmentation accuracy; alternatively, multiple second convolution sub-networks may be set to perform repeated optimization processing on the optimized second segmentation result to further improve the segmentation accuracy.
In step S240, first difference data between the annotated segmentation result of the foreground-background segmentation of the training sample image and the first segmentation result, and second difference data between the annotated segmentation result and the second segmentation result, are obtained, and the network parameters of the neural network system are adjusted according to the sum of the first difference data and the second difference data.
Here, when training the neural network system according to the differences between the annotated segmentation result and the first and second segmentation results, the sum of the first difference data and the second difference data is used as the measurement of the difference between the final segmentation result obtained by the neural network system and the target segmentation result, and the network parameters of the neural network system are adjusted according to this measurement.
If the neural network system includes multiple second convolution sub-networks, the neural network system is trained according to the difference between the annotated segmentation result and the first segmentation result, and the differences between the annotated segmentation result and the second segmentation results obtained by the respective second convolution sub-networks. Optionally, the first difference data between the annotated segmentation result and the first segmentation result, the second difference data between the annotated segmentation result and the second segmentation result, and each piece of third difference data between the annotated segmentation result and each new second segmentation result are obtained; the network parameters of the neural network system are adjusted according to the sum of the first difference data, the second difference data, and each piece of third difference data.
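The summed difference data described above amounts to a deeply supervised training loss: one term per stage output, all compared against the same annotated segmentation result. A sketch follows; binary cross-entropy is an assumption, as the patent does not fix the difference metric.

```python
import torch
import torch.nn.functional as F

# One loss term per stage output against the annotated segmentation result;
# the sum of the terms is the measurement used to adjust the parameters.
annotated = torch.randint(0, 2, (1, 1, 64, 64)).float()  # ground-truth mask
seg_first = torch.rand(1, 1, 64, 64)      # first segmentation result
seg_second = torch.rand(1, 1, 64, 64)     # second segmentation result
seg_new = torch.rand(1, 1, 64, 64)        # new second segmentation result

losses = [F.binary_cross_entropy(s, annotated)
          for s in (seg_first, seg_second, seg_new)]
total_loss = torch.stack(losses).sum()    # first + second + third difference data
```

In an actual training loop, `total_loss.backward()` would propagate gradients through all stages at once, so the intermediate sub-networks receive direct supervision rather than only the final output.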
Based on a large number of training sample images, the neural network training method of the embodiment of the present invention adjusts the network parameters of the neural network system according to the measurement obtained each time, until the neural network system converges well, that is, until the obtained measurement decreases to a preset difference threshold or no longer decreases.
According to the neural network training method of Embodiment Two of the present invention, preliminary convolution processing is performed on a training sample image to obtain a preliminary segmentation result of the foreground-background segmentation of the training sample image; further convolution processing is performed on the preliminary segmentation result to obtain an optimized segmentation result of the foreground-background segmentation of the training sample image; and the neural network system is trained according to the differences between the annotated segmentation result and the preliminary and optimized segmentation results, which improves the foreground-background segmentation accuracy of the trained neural network system. Moreover, when the optimization processing is performed on the preliminary segmentation result, the shallow-layer output result and the deep-layer output result of the preliminary convolution processing are taken into account, that is, both the detail information and the overall semantic information of the training sample image are considered, which further improves the foreground-background segmentation accuracy of the trained neural network system. In addition, by performing repeated optimization processing on the optimized segmentation result, the accuracy of the obtained optimized segmentation result is further improved, so that the foreground-background segmentation accuracy of the trained neural network system is improved still further. A neural network system trained by the neural network training method of this embodiment can effectively improve the foreground-background segmentation accuracy while ensuring a high processing speed.
The neural network training method of this embodiment may be executed by any suitable device having corresponding image or data processing capabilities, including but not limited to terminal devices such as computers, as well as computer programs, processors, and the like integrated on the terminal devices.
Embodiment three
Fig. 4 is a flowchart of an image processing method according to Embodiment Three of the present invention.
With reference to Fig. 4, in step S410, an image to be processed is input into the neural network system, and foreground-background segmentation processing is performed on the image to be processed by the neural network system.
In the embodiment of the present invention, the neural network system is used for performing foreground-background segmentation processing on images, and may specifically be a neural network system trained by the neural network training method of Embodiment One or Embodiment Two of the present invention.
Optionally, first convolution processing is performed on the image to be processed by the neural network system to obtain a first segmentation result of the foreground-background segmentation of the image to be processed; second convolution processing is performed on the first segmentation result to obtain a second segmentation result of the foreground-background segmentation of the image to be processed; and foreground-background segmentation processing is performed on the image to be processed according to the second segmentation result, so as to perform processing related to the foreground-background segmentation on the image to be processed.
The first convolution processing may be regarded as preliminary convolution processing performed on the image to be processed by the neural network system, obtaining a preliminary segmentation result of the foreground-background segmentation of the image to be processed; the second convolution processing may be regarded as convolution processing that further optimizes the preliminary segmentation result, obtaining an optimized segmentation result of the foreground-background segmentation of the image to be processed, thereby improving the foreground-background segmentation accuracy while ensuring a high processing speed.
Moreover, when the second convolution processing is performed on the first segmentation result, the output results of the shallow and deep convolutional layers among the multiple convolutional layers performing the first convolution processing may be obtained, that is, the shallow-layer output result and the deep-layer output result of the first convolution processing are obtained, and the first segmentation result is merged with the shallow-layer output result and the deep-layer output result, so that the second convolution processing is performed on the obtained merged result, further improving the foreground-background segmentation accuracy of the neural network system.
Furthermore, after the second convolution processing is performed on the first segmentation result, convolution processing for optimizing accuracy may be performed again on the second segmentation result, so as to further improve the foreground-background segmentation accuracy of the neural network system.
In step S420, the background portion of the image to be processed after the foreground-background segmentation processing is obtained, and graded blurring processing is performed on the background portion.
For example, based on the foreground-background segmentation processing result, the background portion of the image to be processed is determined, the blur strength of each pixel in the background portion is set to be proportional to the distance from the pixel to the lower edge of the image, and blurring processing is performed on the background portion based on the set blur strength, so that the blur degree of the background portion is distinguished from that of the foreground area, improving the effectiveness of the background blurring processing.
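The proportional-strength rule in this example can be sketched as follows. The function name, the choice of a vertical box blur, and the `max_radius` parameter are all assumptions made for illustration; only the rule that blur strength grows with distance from the lower edge follows the text.

```python
import numpy as np

def graded_background_blur(img, bg_mask, max_radius=8):
    """Blur background pixels with strength proportional to each row's
    distance from the lower edge of the image. A simple vertical box
    blur stands in for the unspecified blurring operation."""
    h, w = img.shape[:2]
    out = img.astype(np.float32).copy()
    for y in range(h):
        # blur radius grows linearly with distance to the lower edge
        radius = int(max_radius * (h - 1 - y) / max(h - 1, 1))
        if radius == 0:
            continue
        lo, hi = max(0, y - radius), min(h, y + radius + 1)
        row_blur = img[lo:hi].mean(axis=0)   # average over the vertical window
        mask = bg_mask[y].astype(bool)
        out[y, mask] = row_blur[mask]        # only background pixels are blurred
    return out.astype(img.dtype)

img = np.tile(np.arange(16, dtype=np.uint8)[:, None], (1, 16)) * 16
bg = np.ones((16, 16), dtype=np.uint8)       # treat the whole image as background
blurred = graded_background_blur(img, bg)
```

Rows near the lower edge (typically the in-focus subject in a portrait) are left untouched, while rows farther away receive a progressively stronger blur, matching the graded-bokeh effect described above.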
Optionally, before this step is executed, guided filtering and/or alpha matting processing may be performed on the image to be processed after the foreground-background segmentation processing. The guided filtering processing is used for smoothing and denoising the edges between the segmented foreground portion and background portion of the image, improving the degree to which the segmentation result retains edge details; the alpha matting processing is used to make the transition region between the foreground portion and the background portion more natural.
That is, after the image processing method of this embodiment performs foreground-background segmentation processing on an image, processing related to the foreground-background segmentation, including but not limited to guided filtering processing and graded background blurring processing, may be performed on the image.
The image processing method of this embodiment may be executed by any suitable device having corresponding image and data processing capabilities, including but not limited to terminal devices and computer programs, processors, and the like integrated on the terminal devices, for example, mobile terminal devices such as smartphones and tablet computers.
Embodiment four
With reference to Fig. 5, a structural block diagram of a neural network training apparatus according to Embodiment Four of the present invention is shown.
The neural network training apparatus of the embodiment of the present invention includes: a first segmentation module 502, configured to perform first convolution processing on a training sample image by a neural network system to obtain a first segmentation result of the foreground-background segmentation of the training sample image, and to perform second convolution processing on the first segmentation result to obtain a second segmentation result of the foreground-background segmentation of the training sample image; and a training module 504, configured to train the neural network system according to the difference between the annotated segmentation result of the foreground-background segmentation of the training sample image and the first segmentation result, and the difference between the annotated segmentation result and the second segmentation result.
Optionally, the training module 504 is configured to obtain first difference data between the annotated segmentation result and the first segmentation result, and second difference data between the annotated segmentation result and the second segmentation result, and to adjust the network parameters of the neural network system according to the sum of the first difference data and the second difference data.
Optionally, the neural network system includes a first convolution sub-network and a second convolution sub-network. On the basis of the apparatus shown in Fig. 5, the first segmentation module 502 includes: a first segmentation unit 5021, configured to perform the first convolution processing on the training sample image by the first convolution sub-network to obtain the first segmentation result; and a second segmentation unit 5022, configured to perform the second convolution processing on the first segmentation result by the second convolution sub-network to obtain the second segmentation result.
Optionally, the first convolution sub-network includes multiple convolutional layers located at different depths. The first segmentation module 502 further includes a first merging unit 5023, configured to obtain the shallow-layer output result of the convolutional layer located at the first depth and/or the deep-layer output result of the convolutional layer located at the second depth among the multiple convolutional layers, where the first depth is smaller than the second depth, and to merge the first segmentation result with the shallow-layer output result and/or the deep-layer output result to obtain the first merged result. The second segmentation unit 5022 is configured to perform the second convolution processing on the first merged result.
Optionally, the neural network system includes at least two second convolution sub-networks located at different depths. The second segmentation unit 5022 is configured to perform the second convolution processing on the first segmentation result by the second convolution sub-network located at the smallest depth among the at least two second convolution sub-networks, to obtain the second segmentation result. The first segmentation module 502 further includes a third segmentation unit 5024, configured to perform new second convolution processing, by the second convolution sub-network located at the third depth, on the second segmentation result obtained by the second convolution sub-network located at the fourth depth, to obtain a new second segmentation result, where the third depth is greater than the fourth depth. The training module 504 is configured to train the neural network system according to the difference between the annotated segmentation result of the foreground-background segmentation of the training sample image and the first segmentation result, the difference between the annotated segmentation result and the second segmentation result, and the difference between the annotated segmentation result and the new second segmentation result.
Optionally, the first segmentation module 502 further includes a second merging unit 5025, configured to merge the second segmentation result obtained by the second convolution sub-network located at the fourth depth with the shallow-layer output result and/or the deep-layer output result to obtain a second merged result. The third segmentation unit 5024 is configured to perform the new second convolution processing on the second merged result by the second convolution sub-network located at the third depth.
Optionally, the training module 504 is configured to obtain the first difference data between the annotated segmentation result and the first segmentation result, the second difference data between the annotated segmentation result and the second segmentation result, and the third difference data between the annotated segmentation result and the new second segmentation result, and to adjust the network parameters of the neural network system according to the sum of the first difference data, the second difference data, and the third difference data.
The neural network training apparatus of this embodiment is used to implement the corresponding neural network training method in the foregoing method embodiments and has the advantageous effects of the corresponding method embodiments, which are not repeated here.
This embodiment further provides a computer program including computer program instructions, where the program instructions, when executed by a processor, implement the steps of any neural network training method provided in the embodiments of the present invention.
This embodiment further provides a computer-readable storage medium on which computer program instructions are stored, where the program instructions, when executed by a processor, implement the steps of any neural network training method provided in the embodiments of the present invention.
Embodiment five
With reference to Fig. 7, a structural block diagram of an image processing apparatus according to Embodiment Five of the present invention is shown.
The image processing apparatus of the embodiment of the present invention includes: a second segmentation module 702, configured to input an image to be processed into the neural network system and perform foreground-background segmentation processing on the image to be processed by the neural network system; and a blurring module 704, configured to obtain the background portion of the image to be processed after the foreground-background segmentation processing and perform graded blurring processing on the background portion.
Optionally, the second segmentation module 702 includes: a third segmentation unit 7021, configured to perform first convolution processing on the image to be processed by the neural network system to obtain a first segmentation result of the foreground-background segmentation of the image to be processed; a fourth segmentation unit 7022, configured to perform second convolution processing on the first segmentation result to obtain a second segmentation result of the foreground-background segmentation of the image to be processed; and a fifth segmentation unit 7023, configured to perform foreground-background segmentation processing on the image to be processed according to the second segmentation result, so as to perform processing related to the foreground-background segmentation on the image to be processed.
Optionally, the neural network system is a neural network system trained by the neural network training apparatus of Embodiment Four of the present invention.
Optionally, the apparatus further includes a processing module 706, configured to perform guided filtering processing and/or alpha matting processing on the image to be processed after the foreground-background segmentation processing.
Optionally, the blurring module 704 is configured to perform blurring processing on each pixel in the background portion according to the distance from the pixel to a designated edge of the image to be processed.
The image processing apparatus of this embodiment is used to implement the corresponding image processing method in the foregoing method embodiments and has the advantageous effects of the corresponding method embodiments, which are not repeated here.
This embodiment further provides a computer program including computer program instructions, where the program instructions, when executed by a processor, implement the steps of any image processing method provided in the embodiments of the present invention.
This embodiment further provides a computer-readable storage medium on which computer program instructions are stored, where the program instructions, when executed by a processor, implement the steps of any image processing method provided in the embodiments of the present invention.
Embodiment six
Embodiment Six of the present invention provides an electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like. Referring to Fig. 8, a structural schematic diagram of an electronic device 800 suitable for implementing a terminal device or a server of the embodiments of the present invention is shown. As shown in Fig. 8, the electronic device 800 includes one or more processors, communication elements, and the like, the one or more processors being, for example, one or more central processing units (CPU) 801 and/or one or more graphics processors (GPU) 813. The processor may execute various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 802 or executable instructions loaded from a storage section 808 into a random access memory (RAM) 803. The communication element includes a communication component 812 and/or a communication interface 809, where the communication component 812 may include but is not limited to a network card, which may include but is not limited to an IB (InfiniBand) network card; the communication interface 809 includes a communication interface of a network card such as a LAN card or a modem, and executes communication processing via a network such as the Internet.
The processor may communicate with the read-only memory 802 and/or the random access memory 803 to execute executable instructions, is connected with the communication component 812 through a communication bus 804, and communicates with other target devices through the communication component 812, thereby completing the operations corresponding to any neural network training method provided in the embodiments of the present invention, for example: performing first convolution processing on a training sample image by a neural network system to obtain a first segmentation result of the foreground-background segmentation of the training sample image; performing second convolution processing on the first segmentation result to obtain a second segmentation result of the foreground-background segmentation of the training sample image; and training the neural network system according to the difference between the annotated segmentation result of the foreground-background segmentation of the training sample image and the first segmentation result, and the difference between the annotated segmentation result and the second segmentation result.
In addition, the RAM 803 may further store various programs and data required for the operation of the apparatus. The CPU 801 or GPU 813, the ROM 802, and the RAM 803 are connected to each other through the communication bus 804. In the presence of the RAM 803, the ROM 802 is an optional module. The RAM 803 stores executable instructions, or executable instructions are written into the ROM 802 at runtime, and the executable instructions cause the processor to execute the operations corresponding to the above method. An input/output (I/O) interface 805 is also connected to the communication bus 804. The communication component 812 may be integrally disposed, or may be configured with multiple sub-modules (for example, multiple IB network cards) linked on the communication bus.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a cathode-ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 808 including a hard disk and the like; and the communication interface 809 of a network card including a LAN card, a modem, and the like. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the drive 810 as needed, so that a computer program read therefrom is installed into the storage section 808 as needed.
It should be noted that the architecture shown in Fig. 8 is only one optional implementation. In specific practice, the number and types of the components in Fig. 8 may be selected, deleted, added, or replaced according to actual needs; in the arrangement of different functional components, implementations such as separate arrangement or integrated arrangement may also be adopted — for example, the GPU and the CPU may be separately arranged, or the GPU may be integrated on the CPU, and the communication element may be separately arranged or integrally arranged on the CPU or the GPU, and so on. These alternative implementations all fall within the protection scope of the present invention.
In particular, according to the embodiments of the present invention, the process described above with reference to the flowchart may be implemented as a computer software program. For example, the embodiments of the present invention include a computer program product including a computer program tangibly embodied on a machine-readable medium; the computer program includes program code for executing the method shown in the flowchart, and the program code may include instructions corresponding to executing the steps of the neural network training method provided in the embodiments of the present invention, for example: performing first convolution processing on a training sample image by a neural network system to obtain a first segmentation result of the foreground-background segmentation of the training sample image; performing second convolution processing on the first segmentation result to obtain a second segmentation result of the foreground-background segmentation of the training sample image; and training the neural network system according to the difference between the annotated segmentation result of the foreground-background segmentation of the training sample image and the first segmentation result, and the difference between the annotated segmentation result and the second segmentation result. In such embodiments, the computer program may be downloaded and installed from a network through the communication element, and/or installed from the removable medium 811. When the computer program is executed by the processor, the above-described functions defined in the methods of the embodiments of the present invention are executed.
Embodiment seven
Embodiment seven of the present invention provides an electronic device, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, a server, or the like. Referring to Fig. 9, a schematic structural diagram of an electronic device 900 suitable for implementing a terminal device or a server of an embodiment of the present invention is shown. As shown in Fig. 9, the electronic device 900 includes one or more processors, a communication element, and the like. The one or more processors are, for example, one or more central processing units (CPUs) 901 and/or one or more graphics processors (GPUs) 913. A processor may perform various appropriate actions and processing according to executable instructions stored in a read-only memory (ROM) 902, or executable instructions loaded from a storage section 908 into a random access memory (RAM) 903. The communication device includes a communication component 912 and/or a communication interface 909. The communication component 912 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card; the communication interface 909 includes a communication interface of a network card such as a LAN card or a modem, and performs communication processing via a network such as the Internet.
The processor may communicate with the read-only memory 902 and/or the random access memory 903 to execute the executable instructions, is connected to the communication component 912 through a communication bus 904, and communicates with other target devices through the communication component 912, thereby completing operations corresponding to any image processing method provided by the embodiments of the present invention, for example: inputting a to-be-processed image into a neural network system, and performing foreground/background segmentation processing on the to-be-processed image by the neural network system; obtaining the background part of the to-be-processed image after the foreground/background segmentation processing, and performing segmented blurring processing on the background part.
In addition, the RAM 903 may also store various programs and data required for the operation of the apparatus. The CPU 901 or GPU 913, the ROM 902, and the RAM 903 are connected to one another through the communication bus 904. Where the RAM 903 is present, the ROM 902 is an optional module. The RAM 903 stores the executable instructions, or the executable instructions are written into the ROM 902 at runtime; the executable instructions cause the processor to perform the corresponding operations of the above-described method. An input/output (I/O) interface 905 is also connected to the communication bus 904. The communication component 912 may be integrated, or may be configured with multiple sub-modules (for example, multiple IB network cards) linked on the communication bus.
The following components are connected to the I/O interface 905: an input section 906 including a keyboard, a mouse, and the like; an output section 907 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker, and the like; a storage section 908 including a hard disk and the like; and the communication interface 909 including a network card such as a LAN card or a modem. A driver 910 is also connected to the I/O interface 905 as needed. A detachable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 910 as needed, so that a computer program read therefrom is installed into the storage section 908 as needed.
It should be noted that the architecture shown in Fig. 9 is merely one optional implementation. In practice, the number and types of the components in Fig. 9 may be selected, deleted, added, or replaced according to actual needs. Different functional components may also be arranged separately or integrated: for example, the GPU and the CPU may be arranged separately, or the GPU may be integrated on the CPU; the communication device may be arranged separately, or may be integrated on the CPU or the GPU; and so on. All such alternative implementations fall within the protection scope of the present invention.
In particular, according to embodiments of the present invention, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present invention includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. The program code may include instructions corresponding to the steps of the image processing method provided by the embodiments of the present invention, for example: inputting a to-be-processed image into a neural network system, and performing foreground/background segmentation processing on the to-be-processed image by the neural network system; obtaining the background part of the to-be-processed image after the foreground/background segmentation processing, and performing segmented blurring processing on the background part. In such embodiments, the computer program may be downloaded and installed from a network via the communication device, and/or installed from the detachable medium 911. When executed by a processor, the computer program performs the above functions defined in the methods of the embodiments of the present invention.
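One plausible reading of the "segmented blurring" applied to the background part is to blur background pixels in bands, with stronger blur farther from the foreground, while leaving foreground pixels untouched. The numpy sketch below illustrates this reading; the naive box blur, the Manhattan-distance bands, the band width, and the kernel sizes are all illustrative assumptions, not details fixed by the publication:

```python
import numpy as np

def box_blur(img, k):
    """Naive box blur with a (2k+1)x(2k+1) window; edges use the valid part."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            ys, ye = max(0, y - k), min(h, y + k + 1)
            xs, xe = max(0, x - k), min(w, x + k + 1)
            out[y, x] = img[ys:ye, xs:xe].mean()
    return out

def dilate(mask, steps):
    """Grow a boolean mask by `steps` pixels of 4-neighbourhood dilation."""
    m = mask.copy()
    for _ in range(steps):
        grown = m.copy()
        grown[1:, :] |= m[:-1, :]
        grown[:-1, :] |= m[1:, :]
        grown[:, 1:] |= m[:, :-1]
        grown[:, :-1] |= m[:, 1:]
        m = grown
    return m

def segmented_background_blur(img, fg_mask, band_width=2, strengths=(1, 2)):
    """Blur the background in bands of increasing strength.

    Background pixels within `band_width` of the foreground get the weakest
    blur; the remaining background gets the strongest. Foreground pixels
    (fg_mask == True) keep their original values.
    """
    out = img.astype(float).copy()
    covered = fg_mask.copy()
    reach = fg_mask.copy()
    for i, k in enumerate(strengths):
        if i < len(strengths) - 1:
            reach = dilate(reach, band_width)
        else:
            reach = np.ones_like(fg_mask, dtype=bool)  # last band: everything left
        band = reach & ~covered
        blurred = box_blur(img.astype(float), k)
        out[band] = blurred[band]
        covered |= band
    return out
```

Blurring in distance-graded bands rather than uniformly gives the background a depth-of-field-like falloff; a real implementation would typically replace the naive box blur with a separable Gaussian for speed.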
It may be noted that, according to implementation needs, the components/steps described in the embodiments of the present invention may be split into more components/steps, and two or more components/steps, or partial operations of components/steps, may also be combined into new components/steps to achieve the purpose of the embodiments of the present invention.
The above methods according to the embodiments of the present invention may be implemented in hardware or firmware, or implemented as software or computer code storable in a recording medium (such as a CD-ROM, a RAM, a floppy disk, a hard disk, or a magneto-optical disk), or implemented as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium, downloaded over a network, and stored in a local recording medium, so that the methods described herein may be processed by such software stored in a recording medium using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware (such as an ASIC or an FPGA). It can be understood that a computer, a processor, a microprocessor controller, or programmable hardware includes a storage component (for example, a RAM, a ROM, or a flash memory) capable of storing or receiving software or computer code; when the software or computer code is accessed and executed by the computer, processor, or hardware, the processing methods described herein are implemented. In addition, when a general-purpose computer accesses code for implementing the processing shown herein, the execution of the code converts the general-purpose computer into a dedicated computer for executing the processing shown herein.
Those of ordinary skill in the art may appreciate that the units and method steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the embodiments of the present invention.
The above embodiments are merely intended to illustrate the embodiments of the present invention, and are not limitations thereof. Persons of ordinary skill in the relevant technical field may also make various changes and modifications without departing from the spirit and scope of the embodiments of the present invention; therefore, all equivalent technical solutions also belong to the scope of the embodiments of the present invention, and the patent protection scope of the embodiments of the present invention shall be defined by the claims.

Claims (10)

1. A neural network training method, comprising:
performing first convolution processing on a training sample image by a neural network system to obtain a first segmentation result of foreground/background segmentation of the training sample image; and performing second convolution processing on the first segmentation result to obtain a second segmentation result of the foreground/background segmentation of the training sample image;
training the neural network system according to a difference between an annotated segmentation result of the foreground/background segmentation of the training sample image and the first segmentation result, and a difference between the annotated segmentation result and the second segmentation result.
2. An image processing method, comprising:
inputting a to-be-processed image into a neural network system, and performing foreground/background segmentation processing on the to-be-processed image by the neural network system;
obtaining a background part of the to-be-processed image after the foreground/background segmentation processing, and performing segmented blurring processing on the background part.
3. A neural network training apparatus, comprising:
a first segmentation module, configured to perform first convolution processing on a training sample image by a neural network system to obtain a first segmentation result of foreground/background segmentation of the training sample image, and to perform second convolution processing on the first segmentation result to obtain a second segmentation result of the foreground/background segmentation of the training sample image;
a training module, configured to train the neural network system according to a difference between an annotated segmentation result of the foreground/background segmentation of the training sample image and the first segmentation result, and a difference between the annotated segmentation result and the second segmentation result.
4. An image processing apparatus, comprising:
a second segmentation module, configured to input a to-be-processed image into a neural network system and perform foreground/background segmentation processing on the to-be-processed image by the neural network system;
a blurring module, configured to obtain a background part of the to-be-processed image after the foreground/background segmentation processing, and to perform segmented blurring processing on the background part.
5. A computer program, comprising computer program instructions, wherein the program instructions, when executed by a processor, are used for implementing the steps corresponding to the neural network training method according to claim 1.
6. A computer program, comprising computer program instructions, wherein the program instructions, when executed by a processor, are used for implementing the steps corresponding to the image processing method according to claim 2.
7. A computer-readable storage medium having computer program instructions stored thereon, wherein the program instructions, when executed by a processor, are used for implementing the steps corresponding to the neural network training method according to claim 1.
8. A computer-readable storage medium having computer program instructions stored thereon, wherein the program instructions, when executed by a processor, are used for implementing the steps corresponding to the image processing method according to claim 2.
9. An electronic device, comprising: a processor, a memory, a communication device, and a communication bus, wherein the processor, the memory, and the communication device communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to execute the steps corresponding to the neural network training method according to claim 1.
10. An electronic device, comprising: a processor, a memory, a communication device, and a communication bus, wherein the processor, the memory, and the communication device communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to execute the steps corresponding to the image processing method according to claim 2.
CN201810463674.3A 2018-05-15 2018-05-15 Neural network training, image processing method, device, storage medium and electronic equipment Pending CN108665475A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810463674.3A CN108665475A (en) 2018-05-15 2018-05-15 Neural network training, image processing method, device, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN108665475A 2018-10-16

Family

ID=63779599


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109491784A * 2018-10-18 2019-03-19 北京旷视科技有限公司 Method and apparatus for reducing memory occupation, electronic device, and readable storage medium
CN110110778A * 2019-04-29 2019-08-09 腾讯科技(深圳)有限公司 Image processing method and apparatus, electronic device, and computer-readable storage medium
CN110175966A * 2019-05-30 2019-08-27 上海极链网络科技有限公司 Unpaired image generation method, system, server and storage medium
CN111583264A (en) * 2020-05-06 2020-08-25 上海联影智能医疗科技有限公司 Training method for image segmentation network, image segmentation method, and storage medium
CN112330709A (en) * 2020-10-29 2021-02-05 奥比中光科技集团股份有限公司 Foreground image extraction method and device, readable storage medium and terminal equipment
CN112749801A (en) * 2021-01-22 2021-05-04 上海商汤智能科技有限公司 Neural network training and image processing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170243053A1 * 2016-02-18 2017-08-24 Pinscreen, Inc. Real-time facial segmentation and performance capture from RGB input
CN107392933A * 2017-07-12 2017-11-24 维沃移动通信有限公司 Image segmentation method and mobile terminal
CN107545571A * 2017-09-22 2018-01-05 深圳天琴医疗科技有限公司 Image detection method and device
CN107749046A * 2017-10-27 2018-03-02 维沃移动通信有限公司 Image processing method and mobile terminal


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JONATHAN LONG ET AL.: "Fully Convolutional Networks", IEEE Computer Vision and Pattern Recognition *
SHENXIAOLU1984: "[Image Segmentation] Fully Convolutional Networks for Semantic Segmentation", CSDN *
LIU Xiaoyang, XUE Chun: "Multi-scale-based saliency region detection in underwater images", Microcomputer & Its Applications *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20181016