CN108154465A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN108154465A
CN108154465A (application CN201711377564.7A)
Authority
CN
China
Prior art keywords
image
depth
portrait
rgb
background
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711377564.7A
Other languages
Chinese (zh)
Inventor
万韶华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201711377564.7A priority Critical patent/CN108154465A/en
Publication of CN108154465A publication Critical patent/CN108154465A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS > G06: COMPUTING; CALCULATING; COUNTING > G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
        • G06T 3/0012: Context preserving transformation, e.g. by using an importance map (under G06T 3/00, Geometric image transformation in the plane of the image)
        • G06T 7/11: Region-based segmentation (under G06T 7/00, Image analysis; G06T 7/10, Segmentation; Edge detection)
        • G06T 7/174: Segmentation; edge detection involving the use of two or more images
        • G06T 7/194: Segmentation; edge detection involving foreground-background segmentation
        • G06T 2207/10024: Color image (indexing scheme: image acquisition modality)
        • G06T 2207/10028: Range image; depth image; 3D point clouds
        • G06T 2207/20081: Training; learning (indexing scheme: special algorithmic details)
        • G06T 2207/20084: Artificial neural networks [ANN]
        • G06T 2207/30196: Human being; person (indexing scheme: subject of image)
    • H: ELECTRICITY > H04: ELECTRIC COMMUNICATION TECHNIQUE > H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
        • H04N 5/23229: Control of cameras comprising an electronic image sensor, with further processing of the captured image without influencing the image pickup process

Abstract

The present disclosure relates to an image processing method and device. The image processing method includes: obtaining an RGB image and a depth image of a photographic subject; performing portrait segmentation on the RGB image according to the depth image to determine the portrait region and the background region of the RGB image; and performing a background-blurring operation on the background region of the RGB image according to the depth image to obtain a background-blurred image corresponding to the RGB image. Based on the depth information provided by the depth image, the disclosure can accurately segment the portrait region and the background region of the RGB image, thereby ensuring the blurring effect and shooting quality, simulating the portrait background-blurring effect of an SLR camera as closely as possible, and improving the user experience.

Description

Image processing method and device
Technical field
The present disclosure relates to the technical field of image processing, and in particular to an image processing method and device.
Background technology
An SLR camera can take portrait photos with a background-blurring (bokeh) effect of strong visual impact. The effect has the following characteristics: 1) the focused foreground portrait is imaged sharply; 2) the background scenery outside the portrait is imaged blurred; 3) the farther the background scenery is from the portrait, the stronger the blur, and vice versa, i.e., the degree of blur varies with the difference in depth of field; 4) the out-of-focus imaging exhibits the characteristic two-line ("nisen") bokeh.
In the related art, a mobile phone is equipped with an RGB (red-green-blue) camera, and a portrait segmentation algorithm based on the RGB image separates the human figure in the RGB image from the surrounding background; after segmentation is complete, a background-blurring operation is performed to obtain a portrait background-blurring effect similar to that of an SLR camera.
Summary of the invention
To overcome the problems in the related art, embodiments of the present disclosure provide an image processing method and device. The technical solution is as follows:
According to a first aspect of the embodiments of the present disclosure, an image processing method is provided, including:
obtaining an RGB (red-green-blue) image and a depth image of a photographic subject;
performing portrait segmentation on the RGB image according to the depth image to determine the portrait region and the background region of the RGB image; and
performing a background-blurring operation on the background region of the RGB image according to the depth image to obtain a background-blurred image corresponding to the RGB image.
In one embodiment, performing portrait segmentation on the RGB image according to the depth image to determine the portrait region and the background region of the RGB image includes:
performing portrait segmentation on the RGB image using the depth image and a pre-trained first deep convolutional neural network to obtain a portrait segmentation image, where the portrait segmentation image is used to divide the RGB image into a portrait region and a background region; and
determining the portrait region and the background region of the RGB image according to the portrait segmentation image.
In one embodiment, the inputs of the first deep convolutional neural network are the depth image and the RGB image, and the output of the first deep convolutional neural network is the portrait segmentation image.
In one embodiment, performing a background-blurring operation on the background region of the RGB image according to the depth image to obtain the background-blurred image corresponding to the RGB image includes:
performing the background-blurring operation on the background region of the RGB image using the depth image and a pre-trained second deep convolutional neural network to obtain the background-blurred image corresponding to the RGB image.
In one embodiment, the inputs of the second deep convolutional neural network are the depth image and the RGB image, and the output of the second deep convolutional neural network is the background-blurred image corresponding to the RGB image.
According to a second aspect of the embodiments of the present disclosure, an image processing apparatus is provided, including:
an acquisition module, configured to obtain an RGB (red-green-blue) image and a depth image of a photographic subject;
a portrait segmentation module, configured to perform portrait segmentation on the RGB image according to the depth image to determine the portrait region and the background region of the RGB image; and
a background-blurring module, configured to perform a background-blurring operation on the background region of the RGB image according to the depth image to obtain the background-blurred image corresponding to the RGB image.
In one embodiment, the portrait segmentation module includes:
a portrait segmentation submodule, configured to perform portrait segmentation on the RGB image using the depth image and a pre-trained first deep convolutional neural network to obtain a portrait segmentation image, where the portrait segmentation image is used to divide the RGB image into a portrait region and a background region; and
a determination submodule, configured to determine the portrait region and the background region of the RGB image according to the portrait segmentation image.
In one embodiment, the inputs of the first deep convolutional neural network are the depth image and the RGB image, and the output of the first deep convolutional neural network is the portrait segmentation image.
In one embodiment, the background-blurring module performs the background-blurring operation on the background region of the RGB image using the depth image and a pre-trained second deep convolutional neural network to obtain the background-blurred image corresponding to the RGB image.
In one embodiment, the inputs of the second deep convolutional neural network are the depth image and the RGB image, and the output of the second deep convolutional neural network is the background-blurred image corresponding to the RGB image.
According to a third aspect of the embodiments of the present disclosure, an image processing apparatus is provided, including:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
obtain an RGB (red-green-blue) image and a depth image of a photographic subject;
perform portrait segmentation on the RGB image according to the depth image to determine the portrait region and the background region of the RGB image; and
perform a background-blurring operation on the background region of the RGB image according to the depth image to obtain the background-blurred image corresponding to the RGB image.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which computer instructions are stored; when executed by a processor, the instructions implement the steps of the method of any embodiment of the first aspect.
The technical solution provided by the embodiments of the present disclosure can include the following benefits: based on the depth information provided by the depth image, the portrait region and the background region of the RGB image are segmented accurately, thereby ensuring the blurring effect and shooting quality, simulating the portrait background-blurring effect of an SLR camera as closely as possible, and improving the user experience.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated into and form a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart of an image processing method according to an exemplary embodiment.
Fig. 2 is a flow chart of an image processing method according to an exemplary embodiment.
Fig. 3 is a flow chart of an image processing method according to an exemplary embodiment.
Fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Fig. 5 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Fig. 8 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Detailed description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the disclosure as detailed in the appended claims.
In the related art, a mobile phone is equipped with an RGB camera, and a portrait segmentation algorithm based on the RGB image separates the human figure in the RGB image from the surrounding background; after segmentation is complete, a background-blurring operation produces a portrait background-blurring effect similar to that of an SLR camera. However, when the texture of the subject's clothing blends closely with the surrounding background, for example when the subject is photographed outdoors wearing camouflage fatigues, a segmentation algorithm based on the RGB image alone can hardly separate the portrait from the background accurately; the boundary between portrait and background is then frequently mis-segmented, which severely degrades the blurring effect, reduces shooting quality, and hurts the user experience.
To solve the above problems, an embodiment of the present disclosure provides an image processing method, including: obtaining an RGB image and a depth image of a photographic subject; performing portrait segmentation on the RGB image according to the depth image to determine the portrait region and the background region of the RGB image; and performing a background-blurring operation on the background region of the RGB image according to the depth image to obtain the background-blurred image corresponding to the RGB image. Based on the depth information provided by the depth image, this technical solution can accurately segment the portrait region and the background region of the RGB image, thereby ensuring the blurring effect and shooting quality, simulating the portrait background-blurring effect of an SLR camera as closely as possible, and improving the user experience.
Based on the above analysis, the following specific embodiments are proposed.
Fig. 1 is a flow chart of an image processing method according to an exemplary embodiment. The method may be executed by a terminal, such as a smartphone, tablet computer, desktop computer, or laptop. As shown in Fig. 1, the method includes the following steps 101-103:
In step 101, an RGB image and a depth image of the photographic subject are obtained.
For example, a terminal equipped with a three-dimensional (3D) structured-light camera can capture not only an RGB image of the photographic subject but also a depth image of it. The depth information provided by the depth image means that each pixel of the depth image represents the distance between the corresponding point of the photographic subject and the terminal's camera. The photographic subject comprises the person framed by the camera's viewfinder and the person's surrounding background.
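The capture step above can be sketched in a few lines. In the following Python fragment the shapes, dtypes, and the meters unit are illustrative assumptions rather than details from the patent; it shows how an RGB frame and a per-pixel depth map might be stacked into a single four-channel RGB-D array, a common way to feed both modalities to one convolutional network:

```python
import numpy as np

# Synthetic stand-ins for a capture; shapes, dtypes, and units are assumptions.
H, W = 480, 640
rgb = np.zeros((H, W, 3), dtype=np.uint8)        # frame from the RGB camera
depth = np.full((H, W), 2.5, dtype=np.float32)   # per-pixel distance in meters

# Stack both modalities into one 4-channel RGB-D array so a single
# convolutional network can consume color and depth together.
rgbd = np.concatenate([rgb.astype(np.float32) / 255.0, depth[..., None]], axis=-1)
print(rgbd.shape)  # (480, 640, 4)
```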
In step 102, portrait segmentation is performed on the RGB image according to the depth image, determining the portrait region and the background region of the RGB image.
For example, even when the texture of the subject's clothing blends closely with the surrounding background, such as a subject wearing camouflage fatigues photographed in the wild, the depth information of the portrait region and that of the background region in the depth image still differ considerably. The present disclosure therefore combines the depth information of the depth image with the color features of the RGB image to segment the portrait region and the background region accurately.
For example, a large number of training samples are collected in advance, each comprising a depth image, an RGB image, and a portrait segmentation image. A portrait segmentation image is an image representing the portrait segmentation result and is used to divide the RGB image into a portrait region and a background region. These samples are used to train the first deep convolutional neural network, whose inputs are the depth image and the RGB image and whose output is the portrait segmentation image.
For example, after the RGB image and the depth image of the photographic subject are obtained, portrait segmentation is performed on the RGB image using the depth image and the pre-trained first deep convolutional neural network to obtain the portrait segmentation image, which divides the RGB image into a portrait region and a background region. The depth image and the RGB image serve as the inputs of the first deep convolutional neural network, and its output is the portrait segmentation image. For example, in a segmentation image representing the result, the portrait region may be marked with pixel value 1 and the background region with pixel value 0; the boundary between pixel values 1 and 0 is then exactly the segmentation edge between the portrait region and the background region. The portrait region and the background region of the RGB image are determined according to the portrait segmentation image.
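The 1/0 mask encoding described above can be illustrated with a minimal sketch. In the following Python fragment the image and mask are toy data invented for illustration; it shows how a binary portrait segmentation image splits an RGB image into its portrait and background regions:

```python
import numpy as np

# Toy RGB image and binary portrait segmentation image (1 = portrait, 0 = background).
rgb = np.arange(4 * 4 * 3, dtype=np.float32).reshape(4, 4, 3)
mask = np.zeros((4, 4), dtype=np.float32)
mask[1:3, 1:3] = 1.0                       # a 2x2 "portrait" in the center

portrait = rgb * mask[..., None]           # background pixels zeroed out
background = rgb * (1.0 - mask[..., None]) # portrait pixels zeroed out

# The boundary between mask values 1 and 0 is the portrait/background edge;
# the two regions partition the original image exactly.
assert np.array_equal(portrait + background, rgb)
```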
In step 103, a background-blurring operation is performed on the background region of the RGB image according to the depth image, obtaining the background-blurred image corresponding to the RGB image.
In the technical solution provided by this embodiment of the present disclosure, the portrait region and the background region of the RGB image can be segmented accurately based on the depth information provided by the depth image, ensuring the blurring effect and shooting quality and simulating the portrait background-blurring effect of an SLR camera as closely as possible, which improves the user experience.
Background-blurring algorithms based on RGB images in the related art suffer from two defects: the background is blurred uniformly, and the blur cannot reproduce the two-line ("nisen") bokeh characteristic. Fig. 2 shows an image processing method that addresses these problems. Fig. 2 is a flow chart of an image processing method according to an exemplary embodiment; as shown in Fig. 2, building on the embodiment of Fig. 1, the image processing method of the present disclosure includes the following steps 201-203:
In step 201, an RGB image and a depth image of the photographic subject are obtained.
In step 202, portrait segmentation is performed on the RGB image according to the depth image, determining the portrait region and the background region of the RGB image.
It should be noted that steps 201 and 202 are explained in the description of steps 101 and 102 of the embodiment of Fig. 1 and are not repeated here.
In step 203, a background-blurring operation is performed on the background region of the RGB image using the depth image and a pre-trained second deep convolutional neural network, obtaining the background-blurred image corresponding to the RGB image.
For example, when simulating the background-blurring effect of an SLR camera, two issues need attention: first, the degree of background blur is related to the distance of the background object; second, the blur should exhibit the two-line ("nisen") bokeh characteristic. The related art filters the image background with a fixed-parameter filter and can therefore only produce a uniformly distributed blur. The present disclosure trains a second deep convolutional network to perform the blurring: its inputs are a depth image paired with an RGB image in which the portrait region and the background region have been determined, and its output is the background-blurred image corresponding to the RGB image. The depth image supplies depth-of-field information, enabling a blur that varies with the depth of field, while the strong nonlinear modeling capacity of a deep convolutional neural network can reproduce the two-line bokeh. To train the second deep convolutional neural network, a large number of training samples are collected in advance, each comprising a depth image, an RGB image with determined portrait and background regions, and a background-blurred image; these samples are used to train the network.
It should be noted that the second deep convolutional neural network of the present disclosure differs substantially from the deep convolutional networks of the related art: (1) it has two inputs, the depth image and the RGB image, whereas the related-art networks take only an RGB image; (2) the related-art networks output a classification of the RGB image and use a cross-entropy-based loss function, whereas the second deep convolutional neural network outputs the blurred RGB image and uses a loss function based on the mean squared error.
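The loss-function difference in point (2) can be made concrete. The sketch below uses plain NumPy and toy data (the patent does not give training code, so everything here is illustrative); it contrasts the mean-squared-error loss suited to regressing a blurred RGB image with the per-example cross-entropy loss used by classification-style networks in the related art:

```python
import numpy as np

def mse_loss(pred, target):
    # Mean squared error: suits regressing the blurred RGB image pixel by pixel.
    return float(np.mean((pred - target) ** 2))

def cross_entropy_loss(probs, labels):
    # Per-example cross-entropy over class probabilities: suits the
    # classification-style outputs of related-art networks.
    eps = 1e-12
    picked = probs[np.arange(len(labels)), labels]
    return float(np.mean(-np.log(picked + eps)))

# A perfect reconstruction has zero MSE.
pred = np.array([[0.1, 0.2], [0.3, 0.4]])
print(mse_loss(pred, pred))  # 0.0

# Two examples, two classes, correct classes predicted with p = 0.9 and 0.8.
probs = np.array([[0.9, 0.1], [0.2, 0.8]])
labels = np.array([0, 1])
print(round(cross_entropy_loss(probs, labels), 4))
```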
For example, after portrait segmentation has determined the portrait region and the background region of the RGB image, the background-blurring operation is performed on the background region using the depth image and the pre-trained second deep convolutional neural network, obtaining the background-blurred image corresponding to the RGB image. The depth image and the RGB image with determined portrait and background regions serve as the inputs of the second deep convolutional neural network, and its output is the background-blurred image corresponding to the RGB image. By using the depth information of the depth image, the farther the background scenery in the background region is from the portrait, the stronger its blur in the resulting image, yielding a background-blurring effect that varies with the depth of field.
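As a rough illustration of depth-varying blur, the following Python sketch composites a sharp portrait over a background whose blur strength depends on the depth difference from the portrait. It is a hand-tuned stand-in for the patent's learned second network: the distance threshold, the box-blur radii, and all data are invented for illustration:

```python
import numpy as np

def box_blur(img, k):
    # Naive k x k mean filter with edge padding; img is float of shape (H, W, C).
    pad = k // 2
    p = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def depth_varying_blur(rgb, depth, mask, portrait_depth):
    # Background farther from the portrait gets a stronger blur;
    # the portrait region itself stays untouched.
    far = np.abs(depth - portrait_depth) > 2.0          # meters; illustrative threshold
    weak, strong = box_blur(rgb, 3), box_blur(rgb, 7)
    blurred = np.where(far[..., None], strong, weak)
    m = mask[..., None].astype(rgb.dtype)
    return rgb * m + blurred * (1 - m)

rgb = np.random.default_rng(0).random((16, 16, 3))
depth = np.full((16, 16), 5.0)
depth[:, :8] = 1.5                                      # near background on the left
mask = np.zeros((16, 16), dtype=np.uint8)
mask[6:10, 6:10] = 1                                    # "portrait" in the center
out = depth_varying_blur(rgb, depth, mask, portrait_depth=1.5)
assert np.allclose(out[6:10, 6:10], rgb[6:10, 6:10])    # portrait stays sharp
```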
In the technical solution provided by this embodiment of the present disclosure, a deep convolutional neural network is trained on combined depth images and RGB images to perform the background-blurring operation. Through the depth information of the depth image, a blur that varies with the depth of field is obtained, overcoming the uniform-blur defect of the related art; at the same time, the strong nonlinear modeling capacity of a deep convolutional network can reproduce the two-line ("nisen") bokeh, ensuring the blurring effect and shooting quality and simulating the portrait background-blurring effect of an SLR camera as closely as possible.
Fig. 3 is a flow chart of an image processing method according to an exemplary embodiment; as shown in Fig. 3, building on the embodiment of Fig. 1, the image processing method of the present disclosure includes the following steps 301-304:
In step 301, an RGB image and a depth image of the photographic subject are obtained.
In step 302, portrait segmentation is performed on the RGB image using the depth image and the pre-trained first deep convolutional neural network, obtaining a portrait segmentation image that divides the RGB image into a portrait region and a background region.
For example, the inputs of the first deep convolutional neural network are the depth image and the RGB image, and the output of the first deep convolutional neural network is the portrait segmentation image.
In step 303, the portrait region and the background region of the RGB image are determined according to the portrait segmentation image.
In step 304, a background-blurring operation is performed on the background region of the RGB image using the depth image and the pre-trained second deep convolutional neural network, obtaining the background-blurred image corresponding to the RGB image.
For example, the inputs of the second deep convolutional neural network are the depth image and the RGB image, and the output of the second deep convolutional neural network is the background-blurred image corresponding to the RGB image.
In the technical solution provided by this embodiment of the present disclosure, the pre-trained first deep convolutional neural network performs portrait segmentation on the RGB image and the pre-trained second deep convolutional neural network performs the background-blurring operation on the background region, so the portrait region and the background region of the RGB image can be segmented accurately; at the same time, the farther the background scenery is from the portrait, the stronger its blur. This ensures the blurring effect and shooting quality, simulates the portrait background-blurring effect of an SLR camera as closely as possible, and improves the user experience.
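The overall pipeline of steps 301-304 can be summarized with stub functions standing in for the two trained networks. Both stubs, the thresholds, and the shapes below are invented for illustration; the real networks are learned from training data, as described above:

```python
import numpy as np

def first_dcnn(depth, rgb):
    # Stub segmentation network: call the nearest third of the depth range "portrait".
    thresh = depth.min() + (depth.max() - depth.min()) / 3.0
    return (depth <= thresh).astype(np.uint8)

def second_dcnn(depth, rgb):
    # Stub blurring network: a uniform 3x3 mean blur standing in for learned bokeh.
    p = np.pad(rgb, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(rgb)
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + rgb.shape[0], dx:dx + rgb.shape[1]]
    return out / 9.0

def process(rgb, depth):
    # Steps 301-304: acquire, segment, blur, then composite the sharp
    # portrait over the blurred background.
    mask = first_dcnn(depth, rgb)[..., None]
    blurred = second_dcnn(depth, rgb)
    return rgb * mask + blurred * (1 - mask)

rgb = np.random.default_rng(1).random((8, 8, 3))
depth = np.linspace(1.0, 4.0, 64).reshape(8, 8)
result = process(rgb, depth)
assert result.shape == rgb.shape
```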
The following are device embodiments of the present disclosure, which can be used to carry out the method embodiments of the present disclosure.
Fig. 4 is a block diagram of an image processing apparatus according to an exemplary embodiment. The apparatus may be implemented in various ways, for example with all of its components implemented in a terminal, or with some of its components implemented on the terminal side in a coupled manner. The apparatus can implement the above methods of the present disclosure through software, hardware, or a combination of both. As shown in Fig. 4, the image processing apparatus includes an acquisition module 401, a portrait segmentation module 402, and a background-blurring module 403, where:
the acquisition module 401 is configured to obtain an RGB image and a depth image of the photographic subject;
the portrait segmentation module 402 is configured to perform portrait segmentation on the RGB image according to the depth image, determining the portrait region and the background region of the RGB image; and
the background-blurring module 403 is configured to perform a background-blurring operation on the background region of the RGB image according to the depth image, obtaining the background-blurred image corresponding to the RGB image.
The apparatus provided by this embodiment of the present disclosure can be used to carry out the technical solution of the embodiment of Fig. 1; its manner of execution and beneficial effects are similar and are not repeated here.
In a possible embodiment, as shown in Fig. 5, in the image processing apparatus of Fig. 4 the portrait segmentation module 402 may further include a portrait segmentation submodule 501 and a determination submodule 502, where:
the portrait segmentation submodule 501 is configured to perform portrait segmentation on the RGB image using the depth image and the pre-trained first deep convolutional neural network, obtaining a portrait segmentation image that divides the RGB image into a portrait region and a background region; and
the determination submodule 502 is configured to determine the portrait region and the background region of the RGB image according to the portrait segmentation image.
In a possible embodiment, the inputs of the first deep convolutional neural network are the depth image and the RGB image, and the output of the first deep convolutional neural network is the portrait segmentation image.
In a possible embodiment, the background-blurring module 403 performs the background-blurring operation on the background region of the RGB image using the depth image and the pre-trained second deep convolutional neural network, obtaining the background-blurred image corresponding to the RGB image.
In a possible embodiment, the inputs of the second deep convolutional neural network are the depth image and the RGB image, and the output of the second deep convolutional neural network is the background-blurred image corresponding to the RGB image.
Fig. 6 is a block diagram of an image processing apparatus according to an exemplary embodiment. The image processing apparatus may be implemented in various ways, for example with all of its components implemented in a terminal, or with some of its components implemented on the terminal side in a coupled manner. Referring to Fig. 6, the image processing apparatus 600 includes:
a processor 601; and
a memory 602 for storing processor-executable instructions;
wherein the processor 601 is configured to:
obtain an RGB image and a depth image of the photographic subject;
perform portrait segmentation on the RGB image according to the depth image, determining the portrait region and the background region of the RGB image; and
perform a background-blurring operation on the background region of the RGB image according to the depth image, obtaining the background-blurred image corresponding to the RGB image.
In one embodiment, the processor 601 is further configured to:
perform portrait segmentation on the RGB image using the depth image and the pre-trained first deep convolutional neural network, obtaining a portrait segmentation image that divides the RGB image into a portrait region and a background region; and
determine the portrait region and the background region of the RGB image according to the portrait segmentation image.
In one embodiment, the inputs of the first deep convolutional neural network are the depth image and the RGB image, and the output of the first deep convolutional neural network is the portrait segmentation image.
In one embodiment, the processor 601 is further configured to:
perform the background blurring operation on the background region of the RGB image using the depth image and a pre-trained second deep convolutional neural network, and obtain the background-blurred image corresponding to the RGB image.
In one embodiment, the inputs of the second deep convolutional neural network are the depth image and the RGB image, and the output of the second deep convolutional neural network is the background-blurred image corresponding to the RGB image.
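As a non-limiting illustration of why the second network takes the depth image as an input at all: a depth-aware blur can grow the blur strength with distance, so farther background is softer, which a uniform blur cannot do. The sketch below uses a 1-D signal and a linear depth-to-radius mapping, both of which are assumptions made for brevity:

```python
# Sketch of a depth-aware blur: each sample is averaged over a window
# whose radius grows with that sample's depth, so distant samples are
# blurred more strongly and near (portrait) samples stay sharp.

def radius_for(depth_m, scale=1.0):
    """Assumed mapping: deeper pixels get a larger blur radius (0 = sharp)."""
    return int(depth_m * scale)

def depth_aware_blur(signal, depth, scale=1.0):
    out = []
    n = len(signal)
    for i in range(n):
        r = radius_for(depth[i], scale)
        lo, hi = max(0, i - r), min(n, i + r + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

A trained network can learn a richer, spatially varying mapping from depth to blur than this hand-written radius rule; the sketch only conveys the depth-dependence.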
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and will not be elaborated here.
Fig. 7 is a block diagram of an image processing apparatus according to an exemplary embodiment. For example, the device 700 may be a terminal, such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, or fitness equipment.
Referring to Fig. 7, the device 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls the overall operation of the device 700, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 702 may include one or more processors 720 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 702 may include one or more modules to facilitate interaction between the processing component 702 and the other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operation of the device 700. Examples of such data include instructions for any application or method operated on the device 700, contact data, phonebook data, messages, pictures, videos, and the like. The memory 704 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disc.
The power component 706 supplies power to the various components of the device 700. The power component 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 700.
The multimedia component 708 includes a screen providing an output interface between the device 700 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 708 includes a front camera and/or a rear camera. When the device 700 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or may have focusing and optical zoom capabilities.
The audio component 710 is configured to output and/or input audio signals. For example, the audio component 710 includes a microphone (MIC), which is configured to receive external audio signals when the device 700 is in an operation mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signals may be further stored in the memory 704 or transmitted via the communication component 716. In some embodiments, the audio component 710 also includes a loudspeaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 714 includes one or more sensors for providing status assessments of various aspects of the device 700. For example, the sensor component 714 may detect the open/closed state of the device 700 and the relative positioning of components, such as the display and the keypad of the device 700; the sensor component 714 may also detect a change in position of the device 700 or of a component of the device 700, the presence or absence of user contact with the device 700, the orientation or acceleration/deceleration of the device 700, and a change in temperature of the device 700. The sensor component 714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the device 700 and other devices. The device 700 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 704 including instructions, where the instructions are executable by the processor 720 of the device 700 to perform the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Fig. 8 is a block diagram of an image processing apparatus according to an exemplary embodiment. For example, the device 800 may be provided as a server. The device 800 includes a processing component 802, which further includes one or more processors, and memory resources represented by a memory 803 for storing instructions executable by the processing component 802, such as applications. The applications stored in the memory 803 may include one or more modules, each corresponding to a set of instructions. In addition, the processing component 802 is configured to execute the instructions so as to perform the above methods.
The device 800 may also include a power component 806 configured to perform power management of the image processing apparatus 800, a wired or wireless network interface 805 configured to connect the image processing apparatus 800 to a network, and an input/output (I/O) interface 808. The device 800 may operate based on an operating system stored in the memory 803, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or the like.
There is also provided a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by the processor of the device 700 or the device 800, the device 700 or the device 800 is enabled to perform the following image processing method, the method including:
obtaining an RGB image and a depth image of a photographic subject;
performing portrait segmentation on the RGB image according to the depth image, and determining a portrait region and a background region in the RGB image; and
performing a background blurring operation on the background region of the RGB image according to the depth image, and obtaining a background-blurred image corresponding to the RGB image.
In one embodiment, performing portrait segmentation on the RGB image according to the depth image and determining the portrait region and the background region in the RGB image includes:
performing portrait segmentation on the RGB image using the depth image and a pre-trained first deep convolutional neural network, and obtaining a portrait segmentation image, where the portrait segmentation image is used to divide the RGB image into a portrait region and a background region; and
determining the portrait region and the background region in the RGB image according to the portrait segmentation image.
In one embodiment, the inputs of the first deep convolutional neural network are the depth image and the RGB image, and the output of the first deep convolutional neural network is the portrait segmentation image.
In one embodiment, performing the background blurring operation on the background region of the RGB image according to the depth image and obtaining the background-blurred image corresponding to the RGB image includes:
performing the background blurring operation on the background region of the RGB image using the depth image and a pre-trained second deep convolutional neural network, and obtaining the background-blurred image corresponding to the RGB image.
In one embodiment, the inputs of the second deep convolutional neural network are the depth image and the RGB image, and the output of the second deep convolutional neural network is the background-blurred image corresponding to the RGB image.
Other embodiments of the disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include common knowledge or customary technical means in the art not disclosed herein. The specification and the embodiments are to be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.

Claims (12)

1. An image processing method, characterized by comprising:
obtaining a red-green-blue (RGB) image and a depth image of a photographic subject;
performing portrait segmentation on the RGB image according to the depth image, and determining a portrait region and a background region in the RGB image; and
performing a background blurring operation on the background region of the RGB image according to the depth image, and obtaining a background-blurred image corresponding to the RGB image.
2. The method according to claim 1, characterized in that performing portrait segmentation on the RGB image according to the depth image and determining the portrait region and the background region in the RGB image comprises:
performing portrait segmentation on the RGB image using the depth image and a pre-trained first deep convolutional neural network, and obtaining a portrait segmentation image, wherein the portrait segmentation image is used to divide the RGB image into the portrait region and the background region; and
determining the portrait region and the background region in the RGB image according to the portrait segmentation image.
3. The method according to claim 2, characterized in that the inputs of the first deep convolutional neural network are the depth image and the RGB image, and the output of the first deep convolutional neural network is the portrait segmentation image.
4. The method according to claim 1, characterized in that performing the background blurring operation on the background region of the RGB image according to the depth image and obtaining the background-blurred image corresponding to the RGB image comprises:
performing the background blurring operation on the background region of the RGB image using the depth image and a pre-trained second deep convolutional neural network, and obtaining the background-blurred image corresponding to the RGB image.
5. The method according to claim 4, characterized in that the inputs of the second deep convolutional neural network are the depth image and the RGB image, and the output of the second deep convolutional neural network is the background-blurred image corresponding to the RGB image.
6. An image processing apparatus, characterized by comprising:
an acquisition module, for obtaining a red-green-blue (RGB) image and a depth image of a photographic subject;
a portrait segmentation module, for performing portrait segmentation on the RGB image according to the depth image and determining a portrait region and a background region in the RGB image; and
a background blurring module, for performing a background blurring operation on the background region of the RGB image according to the depth image and obtaining a background-blurred image corresponding to the RGB image.
7. The apparatus according to claim 6, characterized in that the portrait segmentation module comprises:
a portrait segmentation submodule, for performing portrait segmentation on the RGB image using the depth image and a pre-trained first deep convolutional neural network, and obtaining a portrait segmentation image, wherein the portrait segmentation image is used to divide the RGB image into the portrait region and the background region; and
a determination submodule, for determining the portrait region and the background region in the RGB image according to the portrait segmentation image.
8. The apparatus according to claim 7, characterized in that the inputs of the first deep convolutional neural network are the depth image and the RGB image, and the output of the first deep convolutional neural network is the portrait segmentation image.
9. The apparatus according to claim 6, characterized in that the background blurring module uses the depth image and a pre-trained second deep convolutional neural network to perform the background blurring operation on the background region of the RGB image and obtain the background-blurred image corresponding to the RGB image.
10. The apparatus according to claim 9, characterized in that the inputs of the second deep convolutional neural network are the depth image and the RGB image, and the output of the second deep convolutional neural network is the background-blurred image corresponding to the RGB image.
11. An image processing apparatus, characterized by comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
obtain a red-green-blue (RGB) image and a depth image of a photographic subject;
perform portrait segmentation on the RGB image according to the depth image, and determine a portrait region and a background region in the RGB image; and
perform a background blurring operation on the background region of the RGB image according to the depth image, and obtain a background-blurred image corresponding to the RGB image.
12. A computer-readable storage medium having computer instructions stored thereon, characterized in that the instructions, when executed by a processor, implement the steps of the method of any one of claims 1-5.
CN201711377564.7A 2017-12-19 2017-12-19 Image processing method and device Pending CN108154465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711377564.7A CN108154465A (en) 2017-12-19 2017-12-19 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711377564.7A CN108154465A (en) 2017-12-19 2017-12-19 Image processing method and device

Publications (1)

Publication Number Publication Date
CN108154465A true CN108154465A (en) 2018-06-12

Family

ID=62463982

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711377564.7A Pending CN108154465A (en) 2017-12-19 2017-12-19 Image processing method and device

Country Status (1)

Country Link
CN (1) CN108154465A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109102460A (en) * 2018-08-28 2018-12-28 Oppo广东移动通信有限公司 A kind of image processing method, image processing apparatus and terminal device
CN109636814A (en) * 2018-12-18 2019-04-16 联想(北京)有限公司 A kind of image processing method and electronic equipment
CN110276767A (en) * 2019-06-28 2019-09-24 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9542626B2 (en) * 2013-09-06 2017-01-10 Toyota Jidosha Kabushiki Kaisha Augmenting layer-based object detection with deep convolutional neural networks
CN106327448A (en) * 2016-08-31 2017-01-11 上海交通大学 Picture stylization processing method based on deep learning
US20170068849A1 (en) * 2015-09-03 2017-03-09 Korea Institute Of Science And Technology Apparatus and method of hand gesture recognition based on depth image
CN106683147A (en) * 2017-01-23 2017-05-17 浙江大学 Method of image background blur
CN106952222A (en) * 2017-03-17 2017-07-14 成都通甲优博科技有限责任公司 A kind of interactive image weakening method and device
CN106993112A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 Background-blurring method and device and electronic installation based on the depth of field
CN107085825A (en) * 2017-05-27 2017-08-22 成都通甲优博科技有限责任公司 Image weakening method, device and electronic equipment
CN107403430A (en) * 2017-06-15 2017-11-28 中山大学 A kind of RGBD image, semantics dividing method
CN107426493A (en) * 2017-05-23 2017-12-01 深圳市金立通信设备有限公司 A kind of image pickup method and terminal for blurring background


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Duan Lian: "Research on Target Segmentation and Tracking Technology Based on Depth Information", China Masters' Theses Full-text Database, Information Science and Technology *


Similar Documents

Publication Publication Date Title
CN105809704B (en) Identify the method and device of image definition
CN106339680B (en) Face key independent positioning method and device
CN108154465A (en) Image processing method and device
CN105512605B (en) Face image processing process and device
JP2016531362A (en) Skin color adjustment method, skin color adjustment device, program, and recording medium
CN105469356B (en) Face image processing process and device
CN105635567A (en) Shooting method and device
CN104219445B (en) Screening-mode method of adjustment and device
CN106204435A (en) Image processing method and device
CN106331504B (en) Shooting method and device
CN106408603A (en) Camera method and device
CN107832836B (en) Model-free deep reinforcement learning exploration method and device
CN105528078B (en) The method and device of controlling electronic devices
CN106548468B (en) The method of discrimination and device of image definition
CN106980840A (en) Shape of face matching process, device and storage medium
CN107730448B (en) Beautifying method and device based on image processing
CN109889724B (en) Image blurring method and device, electronic equipment and readable storage medium
CN105528765A (en) Method and device for processing image
CN107423699B (en) Biopsy method and Related product
CN109815844A (en) Object detection method and device, electronic equipment and storage medium
CN108985176A (en) image generating method and device
CN107992833A (en) Image-recognizing method, device and storage medium
CN108053371A (en) A kind of image processing method, terminal and computer readable storage medium
CN109784327B (en) Boundary box determining method and device, electronic equipment and storage medium
CN105426904B (en) Photo processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination