CN104902189A - Picture processing method and picture processing device - Google Patents

Picture processing method and picture processing device

Info

Publication number
CN104902189A
CN104902189A
Authority
CN
China
Prior art keywords: image, person, person image, pixel, background
Prior art date
Legal status: Pending
Application number
CN201510354715.1A
Other languages
Chinese (zh)
Inventor
刘洁
吴小勇
沈显超
茹忆
Current Assignee
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date
Filing date
Publication date
Application filed by Xiaomi Inc filed Critical Xiaomi Inc
Priority to CN201510354715.1A
Publication of CN104902189A

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a picture processing method and a picture processing device. The method comprises: when a background replacement instruction is received, acquiring a person region image from a captured person image; and compositing the person region image with a specified background image to obtain a composite image. With this method, when a user inputs a background replacement instruction, the person region image can be extracted from the user's person image and then composited with a background image specified by the user. The user can therefore replace the background of an image at will according to his or her preference, and can flexibly adjust the imaging effect of the composite image by substituting different background images, ultimately obtaining a satisfactory image and a good user experience.

Description

Image processing method and device
Technical field
The present disclosure relates to the technical field of image processing, and in particular to an image processing method and device.
Background
With the development of intelligent terminals, users can access a variety of application functions through an intelligent terminal. One of the most common is the camera function integrated into the terminal, with which a user can photograph things of interest anytime and anywhere. In certain settings, however, for example when a user photographs a person at home, the background of the captured photo is monotonous, the imaging effect is poor, and the user experience suffers.
Summary of the invention
The present disclosure provides an image processing method and device to solve the problem in the related art that, when a person image is captured, the background is monotonous and the user experience is poor.
According to a first aspect of the embodiments of the present disclosure, an image processing method is provided. The method comprises:
when a background replacement instruction is received, acquiring a person region image from a captured person image;
compositing the person region image with a specified background image to obtain a composite image.
Optionally, acquiring the person region image from the captured person image comprises:
comparing the person image with a preset static background image to determine difference pixels in the person image, wherein the static background image is an image having the same background as the person image;
segmenting the difference pixels out of the person image to obtain the person region image.
Optionally, comparing the person image with the preset static background image to determine the difference pixels in the person image comprises:
comparing the pixel values of pixels at the same positions in the person image and the static background image;
determining pixels in the person image whose pixel value difference is greater than a threshold as the difference pixels.
Optionally, segmenting the difference pixels out of the person image to obtain the person region image comprises:
determining a person region in the person image according to the difference pixels, wherein the person region is a region of the person image that encloses all of the difference pixels;
performing face detection on the person region, marking the detected face pixels as foreground pixels, marking the central area of the person region as probable foreground pixels, and marking the area outside the rectangular region in which the person appears in the person image as probable background pixels, to obtain a marked person image;
segmenting the person region image out of the marked person image with a preset segmentation algorithm according to the positions of the foreground pixels, the probable foreground pixels and the probable background pixels in the person image.
Optionally, determining the person region in the person image according to the difference pixels comprises:
detecting the coordinates of each difference pixel, the coordinates comprising a horizontal coordinate value and a vertical coordinate value;
determining the person region according to the minimum and maximum of all horizontal coordinate values and the minimum and maximum of all vertical coordinate values.
Optionally, before comparing the person image with the preset static background image, the method further comprises:
outputting a first prompt message prompting the user to enter a preset shooting range;
when a preset interval has elapsed after the first prompt message is output, controlling a camera device to capture a first image of the shooting range to obtain the person image.
Optionally, when the preset interval has elapsed after the first prompt message is output, controlling the camera device to capture the first image of the shooting range to obtain the person image comprises:
when the preset interval has elapsed after the first prompt message is output, performing person detection on the first image with a person detection algorithm;
when the detection result indicates that a person is detected, determining the first image as the person image.
Optionally, before comparing the person image with the preset static background image, the method further comprises:
outputting a second prompt message prompting the user to leave the preset shooting range;
when a preset interval has elapsed after the second prompt message is output, controlling the camera device to capture a second image of the shooting range to obtain the static background image.
Optionally, when the preset interval has elapsed after the second prompt message is output, controlling the camera device to capture the second image of the shooting range to obtain the static background image comprises:
when the preset interval has elapsed after the second prompt message is output, performing person detection on the second image with a person detection algorithm;
when the detection result indicates that no person is detected, determining the second image as the static background image.
Optionally, the method further comprises:
denoising the difference pixels with a preset filtering algorithm.
Optionally, compositing the person region image with the specified background image to obtain the composite image comprises:
replacing the pixels of a specified target region in the background image with the pixels of the person region image to obtain the composite image.
According to a second aspect of the embodiments of the present disclosure, an image processing device is provided. The device comprises:
an acquiring unit configured to, when a background replacement instruction is received, acquire a person region image from a captured person image;
a compositing unit configured to composite the person region image with a specified background image to obtain a composite image.
Optionally, the acquiring unit comprises:
a comparison determining subunit configured to compare the person image with a preset static background image to determine difference pixels in the person image, wherein the static background image is an image having the same background as the person image;
a segmentation subunit configured to segment the difference pixels out of the person image to obtain the person region image.
Optionally, the comparison determining subunit comprises:
a comparison module configured to compare the pixel values of pixels at the same positions in the person image and the static background image;
a first determining module configured to determine pixels in the person image whose pixel value difference is greater than a threshold as the difference pixels.
Optionally, the segmentation subunit comprises:
a second determining module configured to determine a person region in the person image according to the difference pixels, wherein the person region is a region of the person image that encloses all of the difference pixels;
a marking module configured to perform face detection on the person region, mark the detected face pixels as foreground pixels, mark the central area of the person region as probable foreground pixels, and mark the area outside the rectangular region in which the person appears in the person image as probable background pixels, to obtain a marked person image;
a segmentation module configured to segment the person region image out of the marked person image with a preset segmentation algorithm according to the positions of the foreground pixels, the probable foreground pixels and the probable background pixels in the person image.
Optionally, the second determining module comprises:
a detection submodule configured to detect the coordinates of each difference pixel, the coordinates comprising a horizontal coordinate value and a vertical coordinate value;
a region determining submodule configured to determine the person region according to the minimum and maximum of all horizontal coordinate values and the minimum and maximum of all vertical coordinate values.
Optionally, the comparison determining subunit comprises:
a first output module configured to output a first prompt message prompting the user to enter a preset shooting range;
a first control module configured to, when a preset interval has elapsed after the first prompt message is output, control a camera device to capture a first image of the shooting range to obtain the person image.
Optionally, the first control module comprises:
a first detection submodule configured to, when the preset interval has elapsed after the first prompt message is output, perform person detection on the first image with a person detection algorithm;
a first determining submodule configured to, when the detection result indicates that a person is detected, determine the first image as the person image.
Optionally, the comparison determining subunit comprises:
a second output module configured to output a second prompt message prompting the user to leave the preset shooting range;
a second control module configured to, when a preset interval has elapsed after the second prompt message is output, control the camera device to capture a second image of the shooting range to obtain the static background image.
Optionally, the second control module comprises:
a second detection submodule configured to, when the preset interval has elapsed after the second prompt message is output, perform person detection on the second image with a person detection algorithm;
a second determining submodule configured to, when the detection result indicates that no person is detected, determine the second image as the static background image.
Optionally, the comparison determining subunit further comprises:
a denoising module configured to denoise the difference pixels with a preset filtering algorithm.
Optionally, the compositing unit comprises:
a replacing subunit configured to replace the pixels of a specified target region in the background image with the pixels of the person region image to obtain the composite image.
According to a third aspect of the embodiments of the present disclosure, an image processing device is provided, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
when a background replacement instruction is received, acquire a person region image from a captured person image; and
composite the person region image with a specified background image to obtain a composite image.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
In the present disclosure, when a user inputs a background replacement instruction, a person region image can be extracted from the user's person image and composited with a background image specified by the user. The image background can therefore be replaced at will according to the user's preference, and the user can flexibly adjust the imaging effect of the composite image by substituting different background images, finally obtaining a satisfactory image and a good user experience.
The present disclosure extracts the person region image from the person image by comparison with a static background image. Because the static background image and the person image share the same background, the person image differs from the static background image only in the person portion. Comparing the two images therefore locates the difference pixels quickly, and since the difference corresponds to the person captured in the person image, the person region image can be segmented quickly and accurately, which speeds up image processing and improves the accuracy of extracting the person portion from the image. The person image and the static background image are captured after prompt messages are output to remind the user, which ensures that the static background image and the person image have the same background.
The present disclosure determines the person region in the person image according to the positions of the difference pixels. Because the person region encloses all of the difference pixels, and the difference pixels are the segmentation target of the person image, the range and position of the segmentation target in the person image can be determined from the person region when segmenting the image, which improves the accuracy and speed of image segmentation.
The present disclosure marks foreground pixels, probable foreground pixels and probable background pixels in the person image, that is, the target to be segmented is designated in the marked person image, so that the segmentation target is identified when the segmentation algorithm is applied to the person image, which improves the accuracy of image segmentation.
The present disclosure filters noise out of the difference pixels with a filtering algorithm, which suppresses noise in the target image while preserving image detail as much as possible, improves the quality of the composite image, and ensures the validity and reliability of image compositing.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment of the present disclosure.
Fig. 2A is a flowchart of another image processing method according to an exemplary embodiment of the present disclosure.
Fig. 2B is a schematic diagram of determining a person region according to an exemplary embodiment of the present disclosure.
Fig. 3A is a flowchart of another image processing method according to an exemplary embodiment of the present disclosure.
Fig. 3B is a schematic diagram of capturing a person image with a smart television according to an exemplary embodiment of the present disclosure.
Fig. 3C is a schematic diagram of capturing a static background image with a smart television according to an exemplary embodiment of the present disclosure.
Fig. 4 is a block diagram of an image processing device according to an exemplary embodiment of the present disclosure.
Fig. 5 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure.
Fig. 6 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure.
Fig. 7 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure.
Fig. 8 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure.
Fig. 9 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure.
Fig. 10 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure.
Fig. 11 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure.
Fig. 12 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure.
Fig. 13 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure.
Fig. 14 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure.
Fig. 15 is a schematic structural diagram of a device for picture processing according to an exemplary embodiment of the present disclosure.
Detailed description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. The singular forms "a", "the" and "said" used in the present disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, third and so on may be used in the present disclosure to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while" or "in response to determining".
As shown in Fig. 1, Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment. The method may be applied to a terminal and comprises the following steps.
In step 101, when a background replacement instruction is received, a person region image is acquired from a captured person image.
The terminal involved in the embodiments of the present disclosure may be any intelligent terminal with an integrated shooting function, for example a smart television, a smartphone, a tablet computer or a PDA (Personal Digital Assistant). The intelligent terminal may have a built-in camera or may be connected to a camera device to implement the shooting function, and the user can take pictures through the intelligent terminal.
In scenes with a plain background, such as a home environment, the background of a person image captured by the user with the intelligent terminal is rather monotonous and the imaging effect is poor. In the embodiments of the present disclosure, the image background of the person image captured by the user can be replaced automatically. The user can trigger a background replacement instruction, and upon receiving it the terminal extracts the person region image from the person image captured by the user.
In practice, the user may input the background replacement instruction in various ways, for example by pressing a specific physical key on the intelligent terminal, by touching a virtual key displayed on a touch control unit of the terminal, or by voice input; for terminals such as smart televisions, the instruction may also be input by pressing a specific key on a connected remote control device.
In this embodiment, in order to replace the background of the person image, the person region image is acquired from the person image, that is, the person portion of the person image is segmented out and separated from the rest of the image. To do so, the person image may be compared with a preset static background image to determine the difference pixels in the person image, and the difference pixels are then segmented out of the person image to obtain the person region image, where the static background image is an image having the same background as the person image. This embodiment exploits the fact that the static background image and the person image share the same background, so the difference between them can be identified accurately and the difference pixels obtained. In practice, two images can be captured: one is the person image, and the other, taken before or after the user captures the person image, is a static background image with the same background as the person image but without the person. During shooting, the camera parameters and the position of the capture device can be kept identical for the two shots to guarantee that the person image and the static background image have the same background, thereby ensuring the accuracy of image segmentation.
As can be seen from the above, when the person region image is extracted from the person image, the static background image and the person image share the same background and the person image differs only in the person portion. Comparing the two images locates the difference pixels quickly, and since the difference corresponds to the person captured in the person image, the person region image can be segmented quickly and accurately, which speeds up image processing and improves the accuracy of extracting the person portion from the image.
In step 102, the person region image is composited with the specified background image to obtain a composite image.
After the background image specified by the user is obtained, the person region image segmented out of the person image can be composited with the specified background image to obtain the composite image. During compositing, the pixels of a specified target region in the background image are replaced with the pixels of the person region image to obtain the composite image.
When the person region image is composited with the specified background image, the target region desired by the user in the background image determines where the person region image is placed. In implementation, the terminal may generate a preview interface displaying the specified background image, and the user may select any position in the specified background image and designate it as the target region. Alternatively, the target region may be preset as the center of the specified background image, or as the golden-ratio region of the specified background image, and so on.
When compositing, replacing the pixels of the target region in the specified background image with the pixels of the person region image means changing the pixel values of the pixels in the target region to the pixel values of the corresponding pixels of the person region image; after the pixels of the specified background image have been replaced, the composite image is obtained.
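As an illustration of this pixel-replacement compositing, the following is a minimal sketch using OpenCV and NumPy. The function and file names (composite_at, "background.jpg" and so on) are illustrative assumptions, not part of the original disclosure, and the person mask is assumed to come from the segmentation step described later.

```python
import cv2
import numpy as np

def composite_at(background, person_region, mask, top_left):
    """Paste the person region into the background at the target area.

    background    : HxWx3 BGR background image chosen by the user
    person_region : hxwx3 BGR person region segmented from the person image
    mask          : hxw uint8 mask, 255 where the person is, 0 elsewhere
    top_left      : (x, y) of the target area inside the background
    """
    x, y = top_left
    h, w = person_region.shape[:2]
    roi = background[y:y + h, x:x + w]
    # Replace only the pixels covered by the person mask; keep the rest of the ROI.
    roi[mask > 0] = person_region[mask > 0]
    background[y:y + h, x:x + w] = roi
    return background

# Example: place the person at the center of the background (one possible default target region).
background = cv2.imread("background.jpg")
person = cv2.imread("person_region.png")
mask = cv2.imread("person_mask.png", cv2.IMREAD_GRAYSCALE)
cx = (background.shape[1] - person.shape[1]) // 2
cy = (background.shape[0] - person.shape[0]) // 2
result = composite_at(background, person, mask, (cx, cy))
cv2.imwrite("composite.jpg", result)
```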
As can be seen from the above, when the user inputs a background replacement instruction, the person region image can be extracted from the user's person image and composited with the background image specified by the user; the image background can therefore be replaced at will according to the user's preference, and the user can flexibly adjust the imaging effect of the composite image by substituting different background images, finally obtaining a satisfactory image and a good user experience.
As shown in Fig. 2A, Fig. 2A is a flowchart of an image processing method according to an exemplary embodiment. The method may be applied to a terminal and, building on the previous embodiment, describes how the person region image is acquired from the person image. The method comprises the following steps.
In step 201, the pixel values of pixels at the same positions in the person image and the static background image are compared.
An image is made up of pixels. A pixel can be regarded as colour information of a certain brightness, tone, hue, colour temperature, grey level and so on, and is the basic unit of an image. A pixel in an image has coordinate values, which indicate its position in the image, and a pixel value, which is usually expressed in the RGB three-primary-colour format or as a grey value. Because the person image and the static background image have the same background, the person image contains a person portion that the static background image does not; the pixel values of the pixels where the person appears in the person image therefore differ from those of the static background image, and the difference between the two images can be determined by comparing pixel values.
In step 202, pixels in the person image whose pixel value difference is greater than a threshold are determined as the difference pixels.
By reading the pixel value of each pixel in the person image and the static background image and comparing the pixel value difference of pixels at the same position in the two images, the pixels that differ significantly can be determined as the difference pixels. In the embodiments of the present disclosure, a threshold may be preset, the pixel value differences obtained by the comparison are compared with this threshold, and the pixels in the person image whose pixel value difference exceeds the threshold are determined as the difference pixels. The threshold can be set according to actual needs, and the present disclosure imposes no specific limitation on it.
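A minimal sketch of steps 201 and 202 with OpenCV, assuming both images are the same size and were captured with identical camera settings. The value of DIFF_THRESHOLD and the file names are assumptions for illustration, not values prescribed by the disclosure.

```python
import cv2

DIFF_THRESHOLD = 30  # assumed threshold on the per-pixel difference

person_img = cv2.imread("person.jpg")          # image containing the person
background_img = cv2.imread("static_bg.jpg")   # same background, no person

# Compare pixel values at the same positions (step 201).
gray_person = cv2.cvtColor(person_img, cv2.COLOR_BGR2GRAY)
gray_bg = cv2.cvtColor(background_img, cv2.COLOR_BGR2GRAY)
diff = cv2.absdiff(gray_person, gray_bg)

# Pixels whose difference exceeds the threshold become difference pixels (step 202).
_, diff_mask = cv2.threshold(diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
```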
In step 203, the difference pixels are denoised with a preset filtering algorithm.
Owing to imperfections in the imaging system, the transmission medium, the recording equipment and so on, a digital image is often contaminated by various kinds of noise during its formation, transmission and recording. Furthermore, noise may also be introduced into the result during the image comparison used to find the difference pixels above. Such noise often appears as isolated pixels or pixel blocks that produce a strong visual effect; it causes bright or dark spot interference, greatly degrades image quality, and affects subsequent work such as image restoration, segmentation, feature extraction and pattern recognition. In the embodiments of the present disclosure, the preset filtering algorithm may be a nonlinear filtering algorithm, a median filtering algorithm, a mean filtering algorithm, a morphological filtering algorithm or the like; by filtering, the noise of the target image can be suppressed while preserving the detail features of the image as far as possible.
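The sketch below shows one way such denoising could be applied to the binary difference mask, using a median filter followed by small morphological operations; the kernel sizes and the combination of filters are illustrative assumptions, since the disclosure only requires some preset filtering algorithm.

```python
import cv2
import numpy as np

def denoise_difference_mask(diff_mask: np.ndarray) -> np.ndarray:
    """Suppress isolated noise in the binary difference mask while keeping detail."""
    # Median filtering removes isolated salt-and-pepper pixels.
    mask = cv2.medianBlur(diff_mask, 5)
    # Opening removes remaining speckles; closing fills small holes inside the person.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```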
In step 204, a person region is determined in the person image according to the difference pixels, the person region being a region of the person image that encloses all of the difference pixels.
When determining the person region, the coordinates of each difference pixel can be detected, the coordinates comprising a horizontal coordinate value and a vertical coordinate value; the person region is then determined according to the minimum and maximum of all horizontal coordinate values and the minimum and maximum of all vertical coordinate values.
As shown in Fig. 2B, which is a schematic diagram of determining the person region in the present disclosure, each of the difference pixels found in the person image has coordinates (X, Y) in the image, where X is the horizontal coordinate value and Y is the vertical coordinate value. By detecting and comparing the coordinates of the difference pixels, the boundary points of all the difference pixels in the horizontal and vertical directions can be found; in Fig. 2B the difference pixels P1, P2, P3 and P4 are such boundary points, from which the person region is obtained. To improve the efficiency of image segmentation, the smallest rectangular region enclosing all the difference pixels can be taken as the person region, namely the region defined by the vertices T1, T2, T3 and T4 in the figure.
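A short sketch of this bounding-rectangle computation: given the binary difference mask, it takes the extreme coordinates of the difference pixels. The function name is an illustrative assumption.

```python
import numpy as np

def person_region_from_mask(diff_mask: np.ndarray):
    """Return the smallest axis-aligned rectangle (x_min, y_min, x_max, y_max)
    that encloses every difference pixel in the binary mask, or None if empty."""
    ys, xs = np.nonzero(diff_mask)      # coordinates of all difference pixels
    if xs.size == 0:
        return None                      # no difference pixels were found
    return xs.min(), ys.min(), xs.max(), ys.max()
```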
As can be seen from the above, the person region is determined in the person image according to the positions of the difference pixels. Because the person region encloses all the difference pixels, and the difference pixels are the segmentation target of the person image, the range and position of the segmentation target in the person image can be determined from the person region when segmenting the image, which improves the accuracy and speed of image segmentation.
In step 205, face detection is performed on the person region, the detected face pixels are marked as foreground pixels, the central area of the person region is marked as probable foreground pixels, and the area outside the rectangular region in which the person appears in the person image is marked as probable background pixels, to obtain a marked person image.
In the embodiments of the present disclosure, face detection is further performed on the person region in order to locate the face region accurately. The face detection algorithm may be, for example, an AdaBoost classifier detection algorithm based on Haar features, or a second-order Gaussian skin-colour mixture model and facial feature recognition algorithm based on the HSV and C'bC'r colour representations. It should be noted that the specific process of performing face detection on the person region with a face detection algorithm to determine the face pixels can be found in the face detection processes of the related art and is not repeated in this embodiment of the disclosure.
Because the face pixels in the person region have been detected accurately by the face detection algorithm, they are marked as foreground pixels in this embodiment, where the foreground pixels refer to the target to be segmented out of the person image. The face pixels cover only the face part of the whole person region, while the person region may also contain other parts such as the person's body; considering in addition that the person generally appears at the centre of the image, the central area of the person region is marked as probable foreground pixels, and the area of the person image outside the rectangle in which the person appears is marked as probable background pixels. Marking the foreground pixels, probable foreground pixels and probable background pixels in the person image identifies the segmentation target for the segmentation algorithm and improves the accuracy of image segmentation. The central area is a region divided about the midpoint of the person region, and its size can be set according to actual needs.
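The following sketch shows how such a mark-up could be encoded as a GrabCut-style label mask with OpenCV: detected face pixels become definite foreground, the centre of the person region becomes probable foreground, and all remaining pixels (including everything outside the person rectangle) are left as probable background. The cascade file, the size of the central area, and the handling of unlabelled pixels inside the rectangle are assumptions of this sketch rather than requirements of the disclosure.

```python
import cv2
import numpy as np

def build_marked_mask(person_img, person_rect):
    """Build a per-pixel label mask for segmentation.

    person_img  : BGR person image
    person_rect : (x_min, y_min, x_max, y_max) rectangle enclosing the difference pixels
    """
    h, w = person_img.shape[:2]
    x0, y0, x1, y1 = person_rect

    # Probable background everywhere; the disclosure marks the area outside the
    # person rectangle as probable background, and pixels inside the rectangle
    # with no other label are also left as probable background here (an assumption).
    mask = np.full((h, w), cv2.GC_PR_BGD, np.uint8)

    # Central area of the person region -> probable foreground (its size is an assumption).
    cy0, cy1 = y0 + (y1 - y0) // 4, y1 - (y1 - y0) // 4
    cx0, cx1 = x0 + (x1 - x0) // 4, x1 - (x1 - x0) // 4
    mask[cy0:cy1, cx0:cx1] = cv2.GC_PR_FGD

    # Detected face pixels -> definite foreground (Haar cascade used as one possible detector).
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(person_img, cv2.COLOR_BGR2GRAY)
    for (fx, fy, fw, fh) in detector.detectMultiScale(gray[y0:y1, x0:x1]):
        mask[y0 + fy:y0 + fy + fh, x0 + fx:x0 + fx + fw] = cv2.GC_FGD
    return mask
```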
In step 206, the person region image is segmented out of the marked person image with a preset segmentation algorithm according to the positions of the foreground pixels, the probable foreground pixels and the probable background pixels in the person image.
In the embodiments of the present disclosure, because the foreground pixels, probable foreground pixels and probable background pixels are designated in the person image, that is, the target to be segmented is specified in the marked person image, the preset segmentation algorithm can accurately segment the person region image out of the marked person image.
The image segmentation algorithm may be, for example, the Graph Cut algorithm or the GrabCut algorithm. Taking GrabCut as an example: the user first selects foreground and background samples through simple interaction; a GMM (Gaussian Mixture Model) is built for the foreground and background regions and initialised with the k-means algorithm; the distances from each node to the foreground or background and the distances between adjacent nodes are computed to obtain the segmentation energy weights; an s-t network graph is constructed for the unknown region, and the max-flow/min-cut algorithm is applied to cut it. The GrabCut cutting process is refined by iteration, continually updating the GMM parameters so that the algorithm tends to converge; because the parameters k, θ and α are optimised during the iterations, the segmentation energy E decreases gradually and is guaranteed to converge to a minimum, finally achieving the image segmentation.
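A minimal sketch of running GrabCut with the marked mask built above, using OpenCV's cv2.grabCut in mask-initialisation mode. The iteration count and variable names are illustrative, and other segmentation algorithms mentioned in the disclosure could be substituted.

```python
import cv2
import numpy as np

def segment_person(person_img, marked_mask, iterations=5):
    """Segment the person region out of the marked person image with GrabCut."""
    bgd_model = np.zeros((1, 65), np.float64)  # internal GMM state for the background
    fgd_model = np.zeros((1, 65), np.float64)  # internal GMM state for the foreground
    mask = marked_mask.copy()
    cv2.grabCut(person_img, mask, None, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_MASK)
    # Definite and probable foreground together make up the person region.
    person_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                           255, 0).astype(np.uint8)
    person_region = cv2.bitwise_and(person_img, person_img, mask=person_mask)
    return person_region, person_mask
```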
It should be noted that a suitable image segmentation algorithm can be selected according to the configuration of the terminal in practical applications, and the present disclosure imposes no specific limitation on it; the detailed process of segmenting the person region image out of the person image with an image segmentation algorithm can be found in the image segmentation techniques of the related art and is not repeated in this embodiment of the disclosure.
As shown in Fig. 3A, Fig. 3A is a flowchart of an image processing method according to an exemplary embodiment. The method may be applied to a terminal and, building on the previous embodiments, describes the process from photographing to image compositing. The method comprises the following steps.
In step 301, a first prompt message prompting the user to enter a preset shooting range is output.
In step 302, when a preset interval has elapsed after the first prompt message is output, a camera device is controlled to capture a first image of the shooting range to obtain the person image.
In the embodiments of the present disclosure, the first prompt message is output on the display interface of the terminal to prompt the user to enter the shooting range. Because the user may need some time to enter the shooting range of the camera device, the image is captured after a certain interval. Specifically, after the first prompt message is output, a timer can be started, and when the preset interval arrives, the camera device is controlled to capture the first image of the shooting range to obtain the person image. The interval may be preset to, for example, 5 seconds or 10 seconds, and the present disclosure imposes no specific limitation on it. In practical applications, a countdown of the interval may be output on the display interface of the terminal after the first prompt message, to further remind the user to enter the shooting range of the camera device before the interval expires.
In this embodiment, to make sure that a person image is captured, person detection may also be performed on the first image with a preset person detection algorithm when the preset interval after the first prompt message has elapsed; when the detection result indicates that a person is detected, the first image is determined as the person image. Performing person detection on the first image ensures that the person image is obtained and improves the validity and reliability of the image processing.
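As one possible realisation of steps 301 and 302, the sketch below prompts the user, waits out the preset interval, captures a frame, and checks for a person with OpenCV's HOG pedestrian detector. The detector choice, the interval value and the camera index are assumptions rather than requirements of the disclosure.

```python
import time
import cv2

PRESET_INTERVAL = 5  # seconds; an assumed value

def capture_person_image(camera_index=0):
    """Prompt the user, wait, capture a frame and keep it only if a person is detected."""
    print("Please enter the shooting range.")   # first prompt message
    time.sleep(PRESET_INTERVAL)                 # preset interval after the prompt

    cap = cv2.VideoCapture(camera_index)
    ok, first_image = cap.read()                # capture the first image
    cap.release()
    if not ok:
        return None

    # Person detection with a HOG-based pedestrian detector (one possible algorithm).
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    people, _ = hog.detectMultiScale(first_image)
    return first_image if len(people) > 0 else None  # person image only if someone is detected
```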
In step 303, a second prompt message prompting the user to leave the preset shooting range is output.
In step 304, when a preset interval has elapsed after the second prompt message is output, the camera device is controlled to capture a second image of the shooting range to obtain the static background image.
In this embodiment, the second prompt message is output on the display interface of the terminal to prompt the user to leave the shooting range. Because the user may need some time to leave the shooting range of the camera device, the static background image is captured after a certain interval. Specifically, after the second prompt message is output, a timer can be started, and when the preset interval arrives, the camera device is controlled to capture the second image of the shooting range to obtain the static background image. The interval may be preset to, for example, 5 seconds or 10 seconds, and the present disclosure imposes no specific limitation on it. In practical applications, a countdown of the interval may be output on the display interface of the terminal after the second prompt message, to further remind the user to leave the shooting range of the camera device before the interval expires.
In this embodiment, to make sure that a static background image is captured, person detection may also be performed on the second image with the person detection algorithm when the preset interval after the second prompt message has elapsed; when the detection result indicates that no person is detected, the second image is determined as the static background image. Performing person detection on the second image ensures that the static background image is obtained and improves the validity and reliability of the image processing.
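A matching sketch for steps 303 and 304, reusing the same HOG detector idea: the frame is accepted as the static background image only when no person is detected. Again, this is a hypothetical realisation under the same assumptions as the previous sketch.

```python
import time
import cv2

PRESET_INTERVAL = 5  # seconds; an assumed value

def capture_static_background(camera_index=0):
    """Prompt the user to leave, wait, and keep the frame only if no person is detected."""
    print("Please leave the shooting range.")   # second prompt message
    time.sleep(PRESET_INTERVAL)

    cap = cv2.VideoCapture(camera_index)
    ok, second_image = cap.read()               # capture the second image
    cap.release()
    if not ok:
        return None

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    people, _ = hog.detectMultiScale(second_image)
    return second_image if len(people) == 0 else None  # static background only if nobody is present
```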
In step 305, the person image is compared with the static background image to determine the difference pixels in the person image, the static background image being an image having the same background as the person image.
By reading the pixel value of each pixel in the person image and the static background image and comparing the pixel value difference of pixels at the same position in the two images, the pixels that differ significantly can be determined as the difference pixels.
Fig. 3B is a schematic diagram of capturing a person image with a smart television in the present disclosure, and Fig. 3C is a schematic diagram of capturing a static background image with a smart television in the present disclosure. In this embodiment the terminal is a smart television, and the user takes pictures in a home environment: the user first captures a person image of himself or herself with the camera device of the smart television; then, keeping the shooting parameters and shooting position of the camera device unchanged, the user leaves the shooting range of the camera device and the static background image is captured. Comparing Fig. 3B with Fig. 3C shows that, because the two images have the same background, the difference pixels can be found quickly by comparing pixel values.
In step 306, the difference pixels are segmented out of the person image to obtain the person region image.
When the person region image is extracted from the person image, the static background image and the person image share the same background and the person image differs from the static background image only in the person portion; comparing the two images locates the difference pixels quickly, and since the difference corresponds to the person captured in the person image, the person region image can be segmented quickly and accurately, which speeds up image processing and improves the accuracy of extracting the person portion from the image.
In step 307, the background image specified by the user is obtained.
In implementation, a background image selection interface may be output, on which several preset background images are displayed; a specifying instruction is obtained, the specifying instruction being triggered when the user selects a background image displayed on the selection interface; and according to the specifying instruction, the background image selected by the user is determined as the specified background image.
The preset background images may include landscape pictures, scenery images, art patterns, celebrity images and the like. Background images may be stored in the terminal in advance or imported by the user; they may be stored in the same folder, so that when the background image selection interface is output, all background images in the folder are displayed together, or the user may choose a background image stored in another folder.
In step 308, the person region image is composited with the specified background image to obtain a composite image.
After the background image specified by the user is obtained, the person region segmented out of the person image can be composited with the specified background image to obtain the composite image. In practical applications, a preview interface may be generated and output, in which the composite image is displayed for the user to preview and check whether it meets his or her requirements; if not, another background image can be specified to obtain a new composite image. A save option may also be output in the preview interface, and when the save option is triggered the composite image is stored.
Corresponding to the foregoing embodiments of the image processing method, the present disclosure further provides embodiments of an image processing device and of a terminal to which it is applied.
As shown in Fig. 4, Fig. 4 is a block diagram of an image processing device according to an exemplary embodiment of the present disclosure. The device comprises an acquiring unit 410 and a compositing unit 420.
The acquiring unit 410 is configured to, when a background replacement instruction is received, acquire a person region image from a captured person image.
The compositing unit 420 is configured to composite the person region image with a specified background image to obtain a composite image.
In the above embodiment, when the user inputs a background replacement instruction, the person region image can be extracted from the user's person image and composited with the background image specified by the user; the image background can therefore be replaced at will according to the user's preference, and the user can flexibly adjust the imaging effect of the composite image by substituting different background images, finally obtaining a satisfactory image and a good user experience.
As shown in Fig. 5, Fig. 5 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 4, the acquiring unit 410 comprises a comparison determining subunit 411 and a segmentation subunit 412.
The comparison determining subunit 411 is configured to compare the person image with a preset static background image to determine the difference pixels in the person image, wherein the static background image is an image having the same background as the person image.
The segmentation subunit 412 is configured to segment the difference pixels out of the person image to obtain the person region image.
As shown in Fig. 6, Fig. 6 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 4, the comparison determining subunit 411 comprises a comparison module 4111 and a first determining module 4112.
The comparison module 4111 is configured to compare the pixel values of pixels at the same positions in the person image and the static background image.
The first determining module 4112 is configured to determine pixels in the person image whose pixel value difference is greater than a threshold as the difference pixels.
In the above embodiments, when the person region image is extracted from the person image, the static background image and the person image share the same background and the person image differs from the static background image only in the person portion; comparing the two images locates the difference pixels quickly, and since the difference corresponds to the person captured in the person image, the person region image can be segmented quickly and accurately, which speeds up image processing and improves the accuracy of extracting the person portion from the image.
As shown in Fig. 7, Fig. 7 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 4, the segmentation subunit 412 comprises a second determining module 4121, a marking module 4122 and a segmentation module 4123.
The second determining module 4121 is configured to determine a person region in the person image according to the difference pixels, wherein the person region is a region of the person image that encloses all of the difference pixels.
The marking module 4122 is configured to perform face detection on the person region, mark the detected face pixels as foreground pixels, mark the central area of the person region as probable foreground pixels, and mark the area outside the rectangular region in which the person appears in the person image as probable background pixels, to obtain a marked person image.
The segmentation module 4123 is configured to segment the person region image out of the marked person image with a preset segmentation algorithm according to the positions of the foreground pixels, the probable foreground pixels and the probable background pixels in the person image.
In the above embodiment, the foreground pixels, probable foreground pixels and probable background pixels are marked in the person image, that is, the target to be segmented is designated in the marked person image; segmenting the person image with the segmentation algorithm according to the marked segmentation target improves the accuracy of image segmentation.
As shown in Fig. 8, Fig. 8 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 4, the second determining module 4121 comprises a detection submodule 41211 and a region determining submodule 41212.
The detection submodule 41211 is configured to detect the coordinates of each difference pixel, the coordinates comprising a horizontal coordinate value and a vertical coordinate value.
The region determining submodule 41212 is configured to determine the person region according to the minimum and maximum of all horizontal coordinate values and the minimum and maximum of all vertical coordinate values.
As can be seen from the above embodiment, the person region is determined in the person image according to the positions of the difference pixels; because the person region encloses all the difference pixels, and the difference pixels are the segmentation target of the person image, the range and position of the segmentation target in the person image can be determined from the person region when segmenting the image, which improves the accuracy and speed of image segmentation.
As shown in Fig. 9, Fig. 9 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 4, the comparison determining subunit 411 comprises a first output module 4113 and a first control module 4114.
The first output module 4113 is configured to output a first prompt message prompting the user to enter a preset shooting range.
The first control module 4114 is configured to, when a preset interval has elapsed after the first prompt message is output, control a camera device to capture a first image of the shooting range to obtain the person image.
As shown in Fig. 10, Fig. 10 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 4, the first control module 4114 comprises a first detection submodule 41141 and a first determining submodule 41142.
The first detection submodule 41141 is configured to, when the preset interval has elapsed after the first prompt message is output, perform person detection on the first image with a person detection algorithm.
The first determining submodule 41142 is configured to, when the detection result indicates that a person is detected, determine the first image as the person image.
In the above embodiments, the first prompt message is output on the display interface of the terminal to prompt the user to enter the shooting range, and person detection is performed on the first image with a preset person detection algorithm; performing person detection on the first image ensures that the person image is obtained and improves the validity and reliability of the image processing.
As shown in Fig. 11, Fig. 11 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 4, the comparison determining subunit 411 comprises a second output module 4115 and a second control module 4116.
The second output module 4115 is configured to output a second prompt message prompting the user to leave the preset shooting range.
The second control module 4116 is configured to, when a preset interval has elapsed after the second prompt message is output, control the camera device to capture a second image of the shooting range to obtain the static background image.
As shown in Fig. 12, Fig. 12 is a block diagram of another image processing device according to an exemplary embodiment of the present disclosure. In this embodiment, on the basis of the embodiment shown in Fig. 4, the second control module 4116 comprises a second detection submodule 41161 and a second determining submodule 41162.
The second detection submodule 41161 is configured to, when the preset interval has elapsed after the second prompt message is output, perform person detection on the second image with a person detection algorithm.
The second determining submodule 41162 is configured to, when the detection result indicates that no person is detected, determine the second image as the static background image.
In the above embodiments, the second prompt message is output on the display interface of the terminal to prompt the user to leave the shooting range, and person detection is performed on the second image with a preset person detection algorithm; performing person detection on the second image ensures that the static background image is obtained and improves the validity and reliability of the image processing.
As shown in figure 13, Figure 13 is the another kind of image processing apparatus block diagram of the disclosure according to an exemplary embodiment, and this embodiment is on aforementioned basis embodiment illustrated in fig. 4, and subelement 411 is determined in described contrast, also comprises: denoising module 4117.
Wherein, denoising module 4117, is configured to utilize the filtering algorithm preset to carry out denoising to described difference pixel.
In above-described embodiment, by the noise in filtering algorithm filtering difference pixel, can suppress the noise of target image under the condition as far as possible retaining image detail feature, improve the quality of composograph, ensure the validity and reliability of Images uniting.
As shown in Figure 14, Figure 14 is a block diagram of another image processing apparatus according to an exemplary embodiment of the disclosure. Based on the foregoing embodiment shown in Figure 4, the synthesis unit 420 comprises a replacement subunit 421. The replacement subunit 421 is configured to replace the pixels of a specified target region in the background image with the pixels of the person region image to obtain the composite image.
In the above embodiment, when performing image synthesis, the pixels of the target region in the specified background image are replaced with the pixels of the person region image; that is, the pixel values of the pixels in the target region are changed to the pixel values of the corresponding pixels in the person region image, so that the composite image can be obtained quickly.
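How the target region is specified is not detailed here, so the following is only a minimal sketch of the pixel-replacement step, assuming the person region image arrives with a binary mask and a top-left offset into the background; the function name is hypothetical and no bounds checking is performed.

```python
import numpy as np

def composite(background, person_region, mask, top_left=(0, 0)):
    """Paste the person region into the specified background by replacing
    pixel values inside the target region; `mask` marks person pixels (0/255).
    Assumes the pasted region fits entirely inside the background."""
    out = background.copy()
    y, x = top_left
    h, w = person_region.shape[:2]
    roi = out[y:y + h, x:x + w]                 # target region in the background
    person_pixels = mask.astype(bool)           # True where the person is
    roi[person_pixels] = person_region[person_pixels]
    return out
```

Because only the masked pixels are overwritten, the rest of the specified background is left untouched, which matches the pixel-replacement description above.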
Correspondingly, the disclosure also provides another image processing apparatus. The apparatus includes a processor and a memory configured to store processor-executable instructions, wherein the processor is configured to: when a background replacement instruction is received, obtain a person region image from a captured character image; and synthesize the person region image with a specified background image to obtain a composite image.
For the apparatus described above, the specific implementation of the functions and effects of each unit can be found in the implementation of the corresponding steps of the method described above, and is not repeated here.
Since the apparatus embodiments essentially correspond to the method embodiments, the relevant parts may refer to the description of the method embodiments. The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the objectives of the present disclosure, which can be understood and implemented by those of ordinary skill in the art without creative effort.
Correspondingly, the disclosure also provides a terminal. The terminal includes a processor and a memory for storing processor-executable instructions, wherein the processor is configured to:
when a background replacement instruction is received, obtain a person region image from a captured character image; and
synthesize the person region image with a specified background image to obtain a composite image.
As shown in Figure 15, Figure 15 is a schematic structural diagram of a picture processing device 1500 according to an exemplary embodiment of the disclosure. For example, the device 1500 may be a mobile phone with a routing function, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Figure 15, the device 1500 may comprise one or more of the following components: a processing component 1502, a memory 1504, a power supply component 1506, a multimedia component 1508, an audio component 1510, an input/output (I/O) interface 1512, a sensor component 1514, and a communication component 1516.
The processing component 1502 generally controls the overall operation of the device 1500, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 1502 may include one or more processors 1520 to execute instructions so as to complete all or part of the steps of the method described above. In addition, the processing component 1502 may include one or more modules to facilitate interaction between the processing component 1502 and other components. For example, the processing component 1502 may include a multimedia module to facilitate interaction between the multimedia component 1508 and the processing component 1502.
The memory 1504 is configured to store various types of data to support the operation of the device 1500. Examples of such data include instructions for any application or method operated on the device 1500, contact data, phone book data, messages, pictures, videos, and so on. The memory 1504 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power supply component 1506 provides power to the various components of the device 1500. The power supply component 1506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1500.
The multimedia component 1508 includes a screen providing an output interface between the device 1500 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1508 includes a front camera and/or a rear camera. When the device 1500 is in an operating mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 1510 is configured to output and/or input audio signals. For example, the audio component 1510 includes a microphone (MIC); when the device 1500 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 1504 or transmitted via the communication component 1516. In some embodiments, the audio component 1510 also includes a speaker for outputting audio signals.
The I/O interface 1512 provides an interface between the processing component 1502 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 1514 includes one or more sensors for providing state assessments of various aspects of the device 1500. For example, the sensor component 1514 can detect the on/off state of the device 1500 and the relative positioning of components, such as the display and the keypad of the device 1500; the sensor component 1514 can also detect a change in position of the device 1500 or a component of the device 1500, the presence or absence of user contact with the device 1500, the orientation or acceleration/deceleration of the device 1500, and a change in temperature of the device 1500. The sensor component 1514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 1514 may also include an optical sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 1514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, a microwave sensor, or a temperature sensor.
The communication component 1516 is configured to facilitate wired or wireless communication between the device 1500 and other devices. The device 1500 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 1516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 1516 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 1500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 1504 including instructions, which can be executed by the processor 1520 of the device 1500 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium: when instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform a method comprising: when a background replacement instruction is received, obtaining a person region image from a captured character image; and synthesizing the person region image with a specified background image to obtain a composite image.
Other embodiments of the disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The disclosure is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.

Claims (23)

1. An image processing method, characterized in that the method comprises:
when a background replacement instruction is received, obtaining a person region image from a captured character image; and
synthesizing the person region image with a specified background image to obtain a composite image.
2. The method according to claim 1, characterized in that obtaining the person region image from the captured character image comprises:
contrasting the character image with a preset static background image to determine difference pixels in the character image, wherein the static background image is an image having the same background as the character image; and
segmenting the difference pixels from the character image to obtain the person region image.
3. The method according to claim 2, characterized in that contrasting the character image with the preset static background image to determine the difference pixels in the character image comprises:
comparing the pixel values of pixels at the same positions in the character image and the static background image; and
determining pixels in the character image whose pixel value difference is greater than a threshold as the difference pixels.
4. The method according to claim 2, characterized in that segmenting the difference pixels from the character image to obtain the person region image comprises:
determining, according to the difference pixels, a person region in the character image, wherein the person region is a region in the character image that can enclose all of the difference pixels;
performing face detection on the person region, labeling the detected face pixels as foreground pixels, labeling the central area of the person region as possible foreground pixels, and labeling the area outside the rectangular region in which the person appears in the character image as possible background pixels, to obtain a labeled character image; and
segmenting the person region image from the labeled character image using a preset segmentation algorithm according to the positions of the foreground pixels, the possible foreground pixels, and the possible background pixels in the character image.
5. The method according to claim 4, characterized in that determining, according to the difference pixels, the person region in the character image comprises:
detecting the coordinates of each difference pixel, wherein the coordinates include a horizontal coordinate value and a vertical coordinate value; and
determining the person region according to the minimum and maximum values among all the horizontal coordinate values and the minimum and maximum values among all the vertical coordinate values.
6. The method according to claim 2, characterized in that, before contrasting the character image with the preset static background image, the method further comprises:
outputting a first prompt message prompting the user to enter a preset shooting range; and
when a preset interval elapses after the first prompt message is output, controlling a camera device to collect a first image of the shooting range to obtain the character image.
7. The method according to claim 6, characterized in that, when the preset interval elapses after the first prompt message is output, controlling the camera device to collect the first image of the shooting range to obtain the character image comprises:
when the preset interval elapses after the first prompt message is output, performing person detection on the first image by a person detection algorithm; and
when the detection result indicates that a person is detected, determining the first image as the character image.
8. The method according to claim 2, characterized in that, before contrasting the character image with the preset static background image, the method further comprises:
outputting a second prompt message prompting the user to leave the preset shooting range; and
when a preset interval elapses after the second prompt message is output, controlling the camera device to collect a second image of the shooting range to obtain the static background image.
9. The method according to claim 8, characterized in that, when the preset interval elapses after the second prompt message is output, controlling the camera device to collect the second image of the shooting range to obtain the static background image comprises:
when the preset interval elapses after the second prompt message is output, performing person detection on the second image by a person detection algorithm; and
when the detection result indicates that no person is detected, determining the second image as the static background image.
10. The method according to claim 2, characterized in that the method further comprises:
denoising the difference pixels using a preset filtering algorithm.
11. The method according to claim 1, characterized in that synthesizing the person region image with the specified background image to obtain the composite image comprises:
replacing the pixels of a specified target region in the background image with the pixels of the person region image to obtain the composite image.
12. An image processing apparatus, characterized in that the apparatus comprises:
an acquiring unit, configured to obtain a person region image from a captured character image when a background replacement instruction is received; and
a synthesis unit, configured to synthesize the person region image with a specified background image to obtain a composite image.
13. The apparatus according to claim 12, characterized in that the acquiring unit comprises:
a contrast determination subunit, configured to contrast the character image with a preset static background image to determine difference pixels in the character image, wherein the static background image is an image having the same background as the character image; and
a segmentation subunit, configured to segment the difference pixels from the character image to obtain the person region image.
14. The apparatus according to claim 13, characterized in that the contrast determination subunit comprises:
a contrast module, configured to compare the pixel values of pixels at the same positions in the character image and the static background image; and
a first determination module, configured to determine pixels in the character image whose pixel value difference is greater than a threshold as the difference pixels.
15. The apparatus according to claim 13, characterized in that the segmentation subunit comprises:
a second determination module, configured to determine, according to the difference pixels, a person region in the character image, wherein the person region is a region in the character image that can enclose all of the difference pixels;
a labeling module, configured to perform face detection on the person region, label the detected face pixels as foreground pixels, label the central area of the person region as possible foreground pixels, and label the area outside the rectangular region in which the person appears in the character image as possible background pixels, to obtain a labeled character image; and
a segmentation module, configured to segment the person region image from the labeled character image using a preset segmentation algorithm according to the positions of the foreground pixels, the possible foreground pixels, and the possible background pixels in the character image.
16. The apparatus according to claim 15, characterized in that the second determination module comprises:
a detection submodule, configured to detect the coordinates of each difference pixel, wherein the coordinates include a horizontal coordinate value and a vertical coordinate value; and
a region determination submodule, configured to determine the person region according to the minimum and maximum values among all the horizontal coordinate values and the minimum and maximum values among all the vertical coordinate values.
17. The apparatus according to claim 13, characterized in that the contrast determination subunit comprises:
a first output module, configured to output a first prompt message prompting the user to enter a preset shooting range; and
a first control module, configured to, when a preset interval elapses after the first prompt message is output, control a camera device to collect a first image of the shooting range to obtain the character image.
18. The apparatus according to claim 17, characterized in that the first control module comprises:
a first detection submodule, configured to, when the preset interval elapses after the first prompt message is output, perform person detection on the first image by a person detection algorithm; and
a first determination submodule, configured to determine the first image as the character image when the detection result indicates that a person is detected.
19. The apparatus according to claim 13, characterized in that the contrast determination subunit comprises:
a second output module, configured to output a second prompt message prompting the user to leave the preset shooting range; and
a second control module, configured to, when a preset interval elapses after the second prompt message is output, control the camera device to collect a second image of the shooting range to obtain the static background image.
20. The apparatus according to claim 19, characterized in that the second control module comprises:
a second detection submodule, configured to, when the preset interval elapses after the second prompt message is output, perform person detection on the second image by a person detection algorithm; and
a second determination submodule, configured to determine the second image as the static background image when the detection result indicates that no person is detected.
21. The apparatus according to claim 13, characterized in that the contrast determination subunit further comprises:
a denoising module, configured to denoise the difference pixels using a preset filtering algorithm.
22. The apparatus according to claim 12, characterized in that the synthesis unit comprises:
a replacement subunit, configured to replace the pixels of a specified target region in the background image with the pixels of the person region image to obtain the composite image.
23. An image processing apparatus, characterized by comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to:
when a background replacement instruction is received, obtain a person region image from a captured character image; and
synthesize the person region image with a specified background image to obtain a composite image.
CN201510354715.1A 2015-06-24 2015-06-24 Picture processing method and picture processing device Pending CN104902189A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510354715.1A CN104902189A (en) 2015-06-24 2015-06-24 Picture processing method and picture processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510354715.1A CN104902189A (en) 2015-06-24 2015-06-24 Picture processing method and picture processing device

Publications (1)

Publication Number Publication Date
CN104902189A true CN104902189A (en) 2015-09-09

Family

ID=54034555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510354715.1A Pending CN104902189A (en) 2015-06-24 2015-06-24 Picture processing method and picture processing device

Country Status (1)

Country Link
CN (1) CN104902189A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101212640A (en) * 2006-12-29 2008-07-02 英华达股份有限公司 Video call method
CN101777180A (en) * 2009-12-23 2010-07-14 中国科学院自动化研究所 Complex background real-time alternating method based on background modeling and energy minimization
CN202026389U (en) * 2011-02-17 2011-11-02 天津三星光电子有限公司 Camera supporting background image record
CN103327253A (en) * 2013-06-26 2013-09-25 深圳市中兴移动通信有限公司 Multiple exposure method and camera shooting device
CN103475826A (en) * 2013-09-27 2013-12-25 深圳市中视典数字科技有限公司 Video matting and synthesis method
CN204305184U (en) * 2014-12-02 2015-04-29 苏州创捷传媒展览股份有限公司 Virtual photograph device

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105338242A (en) * 2015-10-29 2016-02-17 努比亚技术有限公司 Image synthesis method and device
WO2017177768A1 (en) * 2016-04-13 2017-10-19 腾讯科技(深圳)有限公司 Information processing method, terminal, and computer storage medium
CN105847728A (en) * 2016-04-13 2016-08-10 腾讯科技(深圳)有限公司 Information processing method and terminal
CN105957045A (en) * 2016-04-29 2016-09-21 珠海市魅族科技有限公司 Picture synthesis method and device
CN106101579A (en) * 2016-07-29 2016-11-09 维沃移动通信有限公司 A kind of method of video-splicing and mobile terminal
CN106101579B (en) * 2016-07-29 2019-04-12 维沃移动通信有限公司 A kind of method and mobile terminal of video-splicing
CN106231195A (en) * 2016-08-15 2016-12-14 乐视控股(北京)有限公司 A kind for the treatment of method and apparatus of taking pictures of intelligent terminal
CN106844722A (en) * 2017-02-09 2017-06-13 北京理工大学 Group photo system and method based on Kinect device
CN108874113A (en) * 2017-05-08 2018-11-23 丽宝大数据股份有限公司 Electronics makeup lens device and its background transitions method
CN107529020A (en) * 2017-09-11 2017-12-29 广东欧珀移动通信有限公司 Image processing method and device, electronic installation and computer-readable recording medium
CN107610078A (en) * 2017-09-11 2018-01-19 广东欧珀移动通信有限公司 Image processing method and device
US11516412B2 (en) 2017-09-11 2022-11-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing apparatus and electronic device
CN107529020B (en) * 2017-09-11 2020-10-13 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic apparatus, and computer-readable storage medium
US11503228B2 (en) 2017-09-11 2022-11-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processing apparatus and computer readable storage medium
CN107730433A (en) * 2017-09-28 2018-02-23 努比亚技术有限公司 One kind shooting processing method, terminal and computer-readable recording medium
CN107707823A (en) * 2017-10-18 2018-02-16 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
CN107872623A (en) * 2017-12-22 2018-04-03 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer-readable recording medium
CN107872623B (en) * 2017-12-22 2019-11-26 维沃移动通信有限公司 A kind of image pickup method, mobile terminal and computer readable storage medium
CN108171775B (en) * 2017-12-28 2023-09-08 努比亚技术有限公司 Picture synthesis method, mobile terminal and computer readable storage medium
CN108200334A (en) * 2017-12-28 2018-06-22 广东欧珀移动通信有限公司 Image capturing method, device, storage medium and electronic equipment
CN108171775A (en) * 2017-12-28 2018-06-15 努比亚技术有限公司 Picture synthetic method, mobile terminal and computer readable storage medium
CN108200334B (en) * 2017-12-28 2020-09-08 Oppo广东移动通信有限公司 Image shooting method and device, storage medium and electronic equipment
CN108198162A (en) * 2017-12-29 2018-06-22 努比亚技术有限公司 Photo processing method, mobile terminal, server, system, storage medium
CN109618088A (en) * 2018-01-05 2019-04-12 马惠岷 Intelligent camera system and method with illumination identification and reproduction capability
CN109618088B (en) * 2018-01-05 2022-10-14 马惠岷 Intelligent shooting system and method with illumination identification and reproduction functions
US11070743B2 (en) 2018-03-27 2021-07-20 Huawei Technologies Co., Ltd. Photographing using night shot mode processing and user interface
CN110166703A (en) * 2018-03-27 2019-08-23 华为技术有限公司 Photographic method, camera arrangement and mobile terminal
US11330194B2 (en) 2018-03-27 2022-05-10 Huawei Technologies Co., Ltd. Photographing using night shot mode processing and user interface
US11838650B2 (en) 2018-03-27 2023-12-05 Huawei Technologies Co., Ltd. Photographing using night shot mode processing and user interface
CN108566515A (en) * 2018-04-28 2018-09-21 努比亚技术有限公司 It takes pictures processing method, mobile terminal and storage medium
CN108510500B (en) * 2018-05-14 2021-02-26 深圳市云之梦科技有限公司 Method and system for processing hair image layer of virtual character image based on human face skin color detection
CN108510500A (en) * 2018-05-14 2018-09-07 深圳市云之梦科技有限公司 A kind of hair figure layer process method and system of the virtual figure image based on face complexion detection
CN109377502A (en) * 2018-10-15 2019-02-22 深圳市中科明望通信软件有限公司 A kind of image processing method, image processing apparatus and terminal device
CN109413330A (en) * 2018-11-07 2019-03-01 深圳市博纳思信息技术有限公司 A kind of certificate photograph intelligently changes background method
WO2020227971A1 (en) * 2019-05-15 2020-11-19 Microsoft Technology Licensing, Llc Image generation
CN110992297A (en) * 2019-11-11 2020-04-10 北京百度网讯科技有限公司 Multi-commodity image synthesis method and device, electronic equipment and storage medium
CN111210450A (en) * 2019-12-25 2020-05-29 北京东宇宏达科技有限公司 Method for processing infrared image for sea-sky background
CN111210450B (en) * 2019-12-25 2022-08-09 北京东宇宏达科技有限公司 Method and system for processing infrared image of sea-sky background
CN111447389A (en) * 2020-04-22 2020-07-24 广州酷狗计算机科技有限公司 Video generation method, device, terminal and storage medium
CN111447389B (en) * 2020-04-22 2022-11-04 广州酷狗计算机科技有限公司 Video generation method, device, terminal and storage medium
CN112422825A (en) * 2020-11-16 2021-02-26 珠海格力电器股份有限公司 Intelligent photographing method, device, equipment and computer readable medium
CN113655061B (en) * 2021-09-23 2024-06-21 华志(福建)电子科技有限公司 Method for identifying melting point of substance based on image and melting point instrument
CN113655061A (en) * 2021-09-23 2021-11-16 华志(福建)电子科技有限公司 Method for identifying melting point of substance based on image and melting point instrument

Similar Documents

Publication Publication Date Title
CN104902189A (en) Picture processing method and picture processing device
CN105139415A (en) Foreground and background segmentation method and apparatus of image, and terminal
CN105631797A (en) Watermarking method and device
CN104243819A (en) Photo acquiring method and device
CN104063123A (en) Icon displaying method and device
CN105631804B (en) Image processing method and device
CN105469056A (en) Face image processing method and device
CN104700353A (en) Image filter generating method and device
CN104978200A (en) Application program display method and device
CN105159661A (en) Corner mark display method and apparatus for icon
CN105487773B (en) The method and device of screenshot capture
CN105898505B (en) The method, apparatus and system of audio-visual synchronization are tested in video instant communication
CN104035674B (en) Picture displaying method and device
KR20160127606A (en) Mobile terminal and the control method thereof
CN104243814A (en) Analysis method for object layout in image and image shoot reminding method and device
CN106023083A (en) Method and device for obtaining combined image
CN106598429A (en) Method and device for adjusting window of mobile terminal
CN105512615A (en) Picture processing method and apparatus
CN104407769A (en) Picture processing method, device and equipment
CN105513067A (en) Image definition detection method and device
CN112927122A (en) Watermark removing method, device and storage medium
CN105391621A (en) Information communication method and device
CN105100634A (en) Image photographing method and image photographing device
CN104156993A (en) Method and device for switching face image in picture
CN105488829B (en) Generate the method and device of head portrait

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20150909