CN107958449A - Image combining method and device - Google Patents

Image combining method and device

Info

Publication number
CN107958449A
CN107958449A (application CN201711331637.9A)
Authority
CN
China
Prior art keywords
image
gradual change
fused
pixel
extended area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711331637.9A
Other languages
Chinese (zh)
Inventor
孙金波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd
Priority to CN201711331637.9A
Publication of CN107958449A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses an image combining method and device applied to the image processing field. The method includes: obtaining a target image to be fused; performing segmentation processing on the target image to be fused to obtain an original blank image region and a foreground image region; extending from the foreground image region into the original blank image region to obtain an extended area, surrounding the foreground image region, whose transparency value gradually decreases; obtaining a background image to be fused; and, based on the transparency values of the extended area, color-blending the target image to be fused with the background image to be fused to form a fused result image. This solves the prior-art technical problem that the boundary between the foreground image and the background image in a composite image is stiff.

Description

Image combining method and device
Technical field
The present invention relates to the image processing field, and in particular to an image combining method and device.
Background technology
With the rapid development of Internet technology and image processing technology, when users take photos or record short videos with a terminal device, they often wish to beautify the captured photo or recorded video to obtain an enhanced result. To beautify a captured photo or recorded video, bitmaps can be composited with the photo, or with each frame of the recorded video, according to the user's needs, thereby enhancing the visual effect of the picture or video.
In the prior art, the image to be beautified is segmented by an image segmentation technique to obtain a foreground image, which is then composited with a background image to synthesize an image with a beautification effect. However, the boundary between the foreground image and the background image in the composite is rather stiff, with an obvious sense of segmentation, so the effect of the synthesized image is poor.
Summary of the invention
Embodiments of the present invention provide an image combining method and device to solve the prior-art technical problem that the boundary between the foreground image and the background image in a composite image is stiff.
In a first aspect, an embodiment of the present invention provides an image combining method, including:
obtaining a target image to be fused;
performing segmentation processing on the target image to be fused to obtain an original blank image region and a foreground image region;
extending from the foreground image region into the original blank image region to obtain an extended area, surrounding the foreground image region, whose transparency value gradually decreases;
obtaining a background image to be fused;
based on the transparency values of the extended area, color-blending the target image to be fused with the background image to be fused to form a fused result image.
With reference to the first aspect, in a first possible implementation of the first aspect, obtaining the target image to be fused includes:
capturing the current real scene to generate an original image, or reading an original image from local memory;
converting the format of the original image to form the target image to be fused in the RGBA color mode.
With reference to the first aspect or its first possible implementation, in a second possible implementation of the first aspect, extending from the foreground image region into the original blank image region to obtain the extended area, surrounding the foreground image region, whose transparency value gradually decreases includes:
extending N pixels (N a positive integer) from each edge pixel of the foreground image region toward the original blank image region;
configuring the N transparent channels corresponding to the N pixels to decrease linearly in the direction from the foreground image region toward the original blank image region, obtaining the extended area whose transparency value gradually decreases.
With reference to the first aspect or its first possible implementation, in a third possible implementation of the first aspect, extending from the foreground image region into the original blank image region to obtain the extended area, surrounding the foreground image region, whose transparency value gradually decreases includes:
extending M pixels (M a positive integer) from each edge pixel of the foreground image region toward the original blank image region;
configuring the M transparent channels corresponding to the M pixels to decrease gradually over K gradual-change segments (K a positive integer less than M) in the direction from the foreground image region toward the original blank image region, obtaining the extended area whose transparency value gradually decreases.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, if the K gradual-change segments include a first gradual-change segment, a second gradual-change segment after the first, and a third gradual-change segment after the second, then configuring the M transparent channels corresponding to the M pixels to decrease gradually over the K gradual-change segments includes:
configuring the transparent channels of the pixels in the first gradual-change segment to decrease linearly in the direction from the foreground image region toward the original blank image region;
configuring the transparent channels of the pixels in the second gradual-change segment to decrease linearly in that direction, the gradient slope of the second gradual-change segment being greater than that of the first;
configuring the transparent channels of the pixels in the third gradual-change segment to decrease linearly in that direction, the gradient slope of the third gradual-change segment being less than that of the second.
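The three-segment decrease above can be sketched as a piecewise-linear transparency ramp over the M extension pixels. The segment boundaries b1 and b2 and the intermediate alpha values at the knots below are illustrative assumptions (the application specifies the segment boundaries by formulas not reproduced in this text); only the claimed slope relationship, namely that the middle segment falls fastest, is taken from the description.

```python
def piecewise_alpha_ramp(m, b1, b2):
    """Three-segment piecewise-linear transparency ramp over m extension
    pixels, counted from the foreground edge. Knot alphas (255, 200, 55, 0)
    are assumed for illustration; with roughly equal segment widths the
    middle segment then has the steepest gradient slope."""
    knots = [(0, 255), (b1, 200), (b2, 55), (m, 0)]  # (pixel index, alpha)
    alphas = []
    for i in range(1, m + 1):
        for (x0, a0), (x1, a1) in zip(knots, knots[1:]):
            if i <= x1:
                # linear interpolation inside the current segment
                alphas.append(round(a0 + (a1 - a0) * (i - x0) / (x1 - x0)))
                break
    return alphas
```

With m = 6, b1 = 2, b2 = 4 the ramp decreases gently, then steeply, then gently again, ending fully transparent at the outer edge.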
With reference to the first aspect or its first possible implementation, in a fifth possible implementation of the first aspect, color-blending the target image to be fused with the background image to be fused based on the transparency values of the extended area includes:
converting the target image to be fused into a target image texture;
converting the extended area into an extended-area image texture;
converting the background image to be fused into a background image texture;
color-blending the target image texture with the background image texture according to the transparency values of the extended-area image texture.
With reference to the fifth possible implementation of the first aspect, in a sixth possible implementation of the first aspect, color-blending the target image texture with the background image texture according to the transparency values of the extended-area image texture includes:
calling a target rendering pipeline and a target shader program to color-blend the target image texture with the background image texture according to the transparency values of the extended-area image texture, where the target image texture, the extended-area image texture, and the background image texture are in a state recognizable by the target rendering pipeline.
With reference to the sixth possible implementation of the first aspect, in a seventh possible implementation of the first aspect, color-blending the target image texture with the background image texture according to the transparency values of the extended-area image texture includes:
in the foreground image region, displaying the image texture corresponding to the foreground image region in the target image texture;
color-mixing the image texture corresponding to the extended area in the target image texture with the image texture corresponding to the extended area in the background image, obtaining the image texture for the extended area;
in the blank image region of the original blank image region not covered by the extended area, displaying the image texture corresponding to that uncovered blank image region in the background image texture.
With reference to the seventh possible implementation of the first aspect, in an eighth possible implementation of the first aspect, the color mixing of the image texture corresponding to the extended area in the target image texture with the image texture corresponding to the extended area in the background image, to obtain the image texture for the extended area, is carried out specifically by the following formula:
Color_output = RGB_src × (alpha / 255) + RGB_dst × (1 − alpha / 255)
where Color_output is the output color for the i-th pixel in the extended area, RGB_src is the RGB color value of the i-th pixel of the extended area in the target image texture, RGB_dst is the RGB color value of the i-th pixel of the extended area in the background image texture, and alpha is the transparency value of the transparent channel of the i-th pixel in the extended area; i successively takes each pixel in the extended area, and i is a positive integer.
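A minimal Python sketch of this per-pixel blend, assuming the transparency value is the 0-255 channel value described elsewhere in this application and is normalized by 255:

```python
def blend_pixel(rgb_src, rgb_dst, alpha):
    """Blend one extended-area pixel of the target image texture (rgb_src)
    with the background image texture (rgb_dst), weighted by the pixel's
    transparency value alpha in [0, 255]."""
    a = alpha / 255.0
    return tuple(round(s * a + d * (1.0 - a)) for s, d in zip(rgb_src, rgb_dst))
```

With alpha = 255 the foreground color is kept unchanged, and with alpha = 0 the background color shows through, which is exactly the gradual transition the extended area is meant to produce.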
In a second aspect, an embodiment of the present invention provides an image synthesis device, including:
a first acquisition unit for obtaining the target image to be fused;
a segmentation unit for performing segmentation processing on the target image to be fused to obtain the original blank image region and the foreground image region;
an expansion unit for extending from the foreground image region into the original blank image region to obtain the extended area, surrounding the foreground image region, whose transparency value gradually decreases;
a second acquisition unit for obtaining the background image to be fused;
a fusion unit for color-blending the target image to be fused with the background image to be fused, based on the transparency values of the extended area, to form a fused result image.
With reference to the second aspect, in a first possible implementation of the second aspect, the first acquisition unit includes:
an original-image acquisition subunit for capturing the current real scene and generating an original image, or reading an original image from local memory;
a format conversion subunit for converting the format of the original image to form the target image to be fused in the RGBA color mode.
With reference to the second aspect or its first possible implementation, in a second possible implementation of the second aspect, the expansion unit includes:
a first pixel extension subunit for extending N pixels (N a positive integer) from each edge pixel of the foreground image region toward the original blank image region;
a first transparent-channel configuration subunit for configuring the N transparent channels corresponding to the N pixels to decrease linearly in the direction from the foreground image region toward the original blank image region, obtaining the extended area whose transparency value gradually decreases.
With reference to the second aspect or its first possible implementation, in a third possible implementation of the second aspect, the expansion unit includes:
a second pixel extension subunit for extending M pixels (M a positive integer) from each edge pixel of the foreground image region toward the original blank image region;
a second transparent-channel configuration subunit for configuring the M transparent channels corresponding to the M pixels to decrease gradually over K gradual-change segments (K a positive integer less than M) in the direction from the foreground image region toward the original blank image region, obtaining the extended area whose transparency value gradually decreases.
With reference to the third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, if the K gradual-change segments include a first gradual-change segment, a second gradual-change segment after the first, and a third gradual-change segment after the second, the second transparent-channel configuration subunit is specifically configured to:
configure the transparent channels of the pixels in the first gradual-change segment to decrease linearly in the direction from the foreground image region toward the original blank image region;
configure the transparent channels of the pixels in the second gradual-change segment to decrease linearly in that direction, the gradient slope of the second gradual-change segment being greater than that of the first;
configure the transparent channels of the pixels in the third gradual-change segment to decrease linearly in that direction, the gradient slope of the third gradual-change segment being less than that of the second.
With reference to the second aspect or its first possible implementation, in a fifth possible implementation of the second aspect, color-blending the target image to be fused with the background image to be fused based on the transparency values of the extended area includes:
converting the target image to be fused into a target image texture;
converting the extended area into an extended-area image texture;
converting the background image to be fused into a background image texture;
color-blending the target image texture with the background image texture according to the transparency values of the extended-area image texture.
With reference to the fifth possible implementation of the second aspect, in a sixth possible implementation of the second aspect, the fusion unit includes:
a calling subunit for calling a target rendering pipeline and a target shader program;
a fusion processing subunit for color-blending the target image texture with the background image texture, through the called target rendering pipeline and target shader program, according to the transparency values of the extended-area image texture, where the target image texture, the extended-area image texture, and the background image texture are in a state recognizable by the target rendering pipeline.
With reference to the sixth possible implementation of the second aspect, in a seventh possible implementation of the second aspect, the fusion processing subunit is specifically configured to:
in the foreground image region, display the image texture corresponding to the foreground image region in the target image texture;
color-mix the image texture corresponding to the extended area in the target image texture with the image texture corresponding to the extended area in the background image, obtaining the image texture for the extended area;
in the blank image region of the original blank image region not covered by the extended area, display the image texture corresponding to that uncovered blank image region in the background image texture.
With reference to the seventh possible implementation of the second aspect, in an eighth possible implementation of the second aspect, the fusion processing subunit obtains the image texture for the extended area specifically by the following formula:
Color_output = RGB_src × (alpha / 255) + RGB_dst × (1 − alpha / 255)
where Color_output is the output color for the i-th pixel in the extended area, RGB_src is the RGB color value of the i-th pixel of the extended area in the target image texture, RGB_dst is the RGB color value of the i-th pixel of the extended area in the background image texture, and alpha is the transparency value of the transparent channel of the i-th pixel in the extended area; i successively takes each pixel in the extended area, and i is a positive integer.
In a third aspect, an embodiment of the present invention provides an image synthesis apparatus including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the program, implements the steps of any of the first through eighth possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the first through eighth possible implementations of the first aspect.
The one or more technical solutions provided in the embodiments of the present invention have at least the following technical effects or advantages:
Segmentation processing of the target image yields the original blank image region and the foreground image region, and an extended area whose transparency value gradually decreases is expanded from the foreground image region into the original blank image region. When the target image is color-blended with the background image, the displayed boundary region between the foreground image and the background image is therefore the color superposition of the corresponding regions of the target image and the background image. A color transition from foreground to background is formed in the boundary region instead of a hard jump from the foreground color to the background color, so the rendering of the fused image is more natural and the image rendering effect is improved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Clearly, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the image combining method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the extended area in an embodiment of the present invention;
Fig. 3 is a graph of the linear decrease of the transparency value of the extended area in Fig. 2;
Fig. 4 is a graph of the segmented gradual decrease of the transparency value of the extended area in Fig. 2;
Fig. 5 is a module diagram of the image synthesis device provided by an embodiment of the present invention;
Fig. 6 is a structural diagram of the image synthesis apparatus provided by an embodiment of the present invention;
Fig. 7 is a structural diagram of the computer-readable storage medium provided by an embodiment of the present invention.
Embodiment
Embodiments of the present invention provide an image combining method and device to solve the prior-art technical problem that the boundary between the foreground image and the background image in a composite image is stiff. The general idea of the technical solution of the embodiments is as follows:
Segmentation processing is performed on the target image to be fused to obtain an original blank image region and a foreground image region. Extending from the foreground image region into the original blank image region yields an extended area, surrounding the foreground image region, whose transparency value gradually decreases. Based on the gradually decreasing transparency values of the extended area, the target image to be fused is color-blended with the background image to be fused to form a fused result image.
Through the above technical solution, the displayed boundary region between the foreground image and the background image is the color superposition of the corresponding regions of the target image and the background image. A color transition from foreground to background is therefore formed in the boundary region instead of a hard jump from the foreground color to the background color, so the rendering of the fused image is more natural, which improves the image rendering effect.
To make the purposes, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Clearly, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
The image combining method provided by the embodiments of the present invention is applied in a terminal device, in particular a terminal device with a camera, which may be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer with a camera, or a camera terminal.
Referring to Fig. 1, which is a flowchart of the image combining method provided by an embodiment of the present invention, the method includes the following steps:
Step S101: obtaining the target image to be fused.
The target image to be fused may be obtained in any of the following ways: an original image is read from local memory and format-converted to form the target image to be fused in the RGBA color mode; an original image received from another device is format-converted in the same way; or the camera hardware of the terminal device captures the current real scene to generate an original image, which is then format-converted to form the target image to be fused in the RGBA color mode.
Specifically, the original image is generally in a YUV color mode (Y'UV, YUV, YCbCr, YPbPr, and the like may all be called YUV color modes), which is not the standard format for image synthesis, so the original image must be converted to the RGBA color mode. Note that YUV is a color encoding method adopted by European television systems: "Y" represents luminance (luma), i.e., the gray value, and is a baseband signal, while "U" and "V" represent chrominance (chroma), describing the color and saturation of the image and specifying the color of each pixel; U and V are not baseband signals.
Of course, in a specific implementation, if the original image is in some other color mode that is not RGBA, it is likewise format-converted to form the target image to be fused in the RGBA color mode.
RGBA is a color mode in which R, G, and B are the red, green, and blue components; superimposing the three primaries yields the desired color, and any color can be obtained by setting different ratios between the R, G, and B values. The RGBA color mode assigns each of the red, green, and blue components an integer value in the range 0-255. A (alpha) is the transparent channel and represents the transparency value, which also ranges over the integers 0-255.
The R, G, and B color values of the RGBA color mode are calculated from the Y, U, and V values of the YUV color mode by calling a conversion formula.
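The application does not give the conversion formula itself. As an illustrative sketch, a common full-range BT.601 approximation of the YUV-to-RGB step looks like this (the exact formula used by the patent may differ):

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range YUV (BT.601) pixel to RGB, clamping each
    component to 0-255. An assumed, commonly used approximation; the
    patent does not specify its conversion formula."""
    def clamp(c):
        return max(0, min(255, round(c)))
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return clamp(r), clamp(g), clamp(b)
```

Neutral chroma (U = V = 128) maps any Y straight to a gray RGB value, which is a quick sanity check for such conversions.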
After step S101, step S102 is performed: segmentation processing is carried out on the target image to be fused to obtain the original blank image region and the foreground image region.
Specifically, the target image to be fused is segmented by a segmentation algorithm into the original blank image region and the foreground image region, and the foreground image region is marked.
In a specific implementation, if the target image to be fused contains a portrait area, it is segmented by a portrait segmentation algorithm into a blank image region and a portrait area, and the portrait area is marked.
In a specific implementation, the segmentation processing of the target image to be fused may use a threshold-based, region-based, or edge-based segmentation method, or a segmentation method based on a particular theory.
In a specific implementation, the target image to be fused may also be segmented using artificial neural network recognition technology: a multi-layer perceptron is trained to obtain a linear decision function, and the pixels of the target image to be fused are then classified with the decision function to achieve the segmentation.
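Of the segmentation families mentioned, threshold-based segmentation is the simplest. The toy sketch below is a stand-in for the segmentation step only; the patent does not fix a particular algorithm, and the threshold value here is an assumption:

```python
def threshold_segment(gray, t=128):
    """Split a grayscale image (nested lists of 0-255 values) into a
    foreground mask: 1 for foreground pixels, 0 for blank/background.
    A minimal threshold-based method; not the patent's specific one."""
    return [[1 if p >= t else 0 for p in row] for row in gray]
```

Real implementations would typically use a portrait segmentation model or an adaptive threshold, but the output shape, a per-pixel foreground mask, is the same.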
After step S102, step S103 is performed: extending from the foreground image region into the original blank image region to obtain the extended area, surrounding the foreground image region, whose transparency value gradually decreases.
Specifically, an original extended area of a preset range is expanded from the foreground image region into the original blank image region, and the transparent channel of each pixel in this area is configured so as to form an extended area whose transparency value gradually decreases in the direction from the foreground image region toward the original blank image region.
Referring to Fig. 2, from each edge pixel of the foreground image region a1, one group of pixels is extended in the direction of the original blank image region a3, so that multiple groups of pixels are expanded from a1. If a1 has H edge pixels (H a positive integer), then H groups of pixels are extended toward a3; each group contains multiple pixels, and within each group the transparency value of each pixel's transparent channel gradually decreases in the direction from a1 toward a3.
Because the transparency value of each pixel's transparent channel in every group satisfies this gradual-decrease rule, the groups together constitute the extended area a2 with gradually decreasing transparency.
In a specific implementation, the transparency value of the extended area a2 may decrease linearly, or decrease gradually over multiple gradual-change segments, where adjacent gradual-change segments have different gradient slopes.
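One way to realize "extend a group of pixels from every edge pixel" is a breadth-first expansion from the foreground mask: every blank pixel within reach is labeled with its step count 1..N from the nearest foreground pixel, and that step count can then index into a transparency ramp. This is a sketch under that assumption, not the patent's prescribed procedure:

```python
from collections import deque

def extension_distances(fg_mask, n):
    """Label each blank pixel within n steps of the foreground with its
    4-neighborhood step count (1..n); foreground pixels get 0 and blank
    pixels beyond the extended area stay None."""
    h, w = len(fg_mask), len(fg_mask[0])
    dist = [[0 if fg_mask[y][x] else None for x in range(w)] for y in range(h)]
    q = deque((y, x) for y in range(h) for x in range(w) if fg_mask[y][x])
    while q:
        y, x = q.popleft()
        if dist[y][x] == n:        # reached the outer edge of the extension
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist
```

Mapping step count k to a decreasing transparency value then produces exactly the gradually fading ring around the foreground that the description calls the extended area a2.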
First, referring to Fig. 3, the implementation in which the transparency value of the extended area decreases linearly is described in detail:
From each edge pixel of the foreground image region, N pixels are extended toward the original blank image region, N being a positive integer; the N transparency channels of the N pixels are configured to decrease linearly in the direction from the foreground image region toward the original blank image region, yielding the extended area with gradually decreasing transparency values.
Note that configuring the transparency channel of each pixel in the extended area means setting the transparency value of each pixel's transparency channel to an integer in the range (0, 255). A smaller transparency value is more transparent; a larger one is more opaque. A transparency value of 0 represents full transparency, and a transparency value of 255 represents full opacity.
As shown in Fig. 3, the abscissa represents the 1st to N-th pixels extended from one edge pixel of the foreground image region, and the ordinate represents the transparency value alpha of the corresponding transparency channel A (Alpha) of the 1st to N-th pixels. From the curve in Fig. 3, the transparency value of the transparency channel of each of the N pixels in every extended group can be determined.
Specifically, the transparency values of the N transparency channels of the N pixels decrease linearly, with a preset slope, in the direction from the foreground image region toward the original blank image region. That is, the difference between the transparency values of adjacent pixels among the N pixels is identical, and of two adjacent pixels, the transparency value of the transparency channel of the pixel closer to the foreground image region is greater than that of the pixel farther from the foreground image region, so that the transparency values of the N transparency channels decrease linearly with the preset slope.
The preset slope is related to the number of pixels extended from each edge pixel toward the original blank image region: the more pixels each edge pixel is extended by, the smaller the preset slope; the fewer pixels each edge pixel is extended by, the larger the preset slope.
Take, as an example, extending 254 pixels from each edge pixel of the foreground image region toward the original blank region: the transparency values of the 254 transparency channels of these 254 pixels are configured to be 254 down to 1 in sequence. For some edge pixel x₁ of the foreground image region, among the 254 pixels extended toward the original blank image region, the transparency channel of the first pixel from edge pixel x₁ has a transparency value of 254, the transparency channel of the next pixel has a value of 253, the one after that 252, and so on, until the transparency channel of the 254th pixel from edge pixel x₁ is configured to 1. The 254 pixels expanded from every edge pixel are configured according to this rule.
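The 254-down-to-1 configuration just described can be sketched as a simple linear ramp over one group of extended pixels:

```python
# Minimal sketch: the N alpha values for one group of extended pixels,
# decreasing linearly from the foreground edge toward the blank region.
# For N = 254 this reproduces the 254-down-to-1 configuration in the text.

def linear_ramp(n):
    """Alpha of the k-th extended pixel (1-indexed): n, n-1, ..., 1."""
    return [n - k for k in range(n)]

ramp = linear_ramp(254)
# ramp[0] == 254 (pixel nearest the edge), ramp[-1] == 1 (farthest pixel);
# adjacent pixels always differ by the same amount, i.e. a constant slope.
```

The constant difference between adjacent entries is exactly the "preset slope" discussed above; spreading the same 254-to-1 span over more pixels would shrink that per-pixel difference.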
Next, the implementation in which the transparency value of the extended area decreases as a gradient of multiple gradient segments is described in detail:
From each edge pixel of the foreground image region, M pixels are extended toward the original blank image region, M being a positive integer; the M transparency channels of the M pixels are configured to decrease, in the direction from the foreground image region toward the original blank image region, as a gradient of K gradient segments, yielding the extended area with gradually decreasing transparency values, K being a positive integer smaller than M.
The K gradient segments have different gradient slopes. Specifically, the transparency value of the extended area may decrease as a gradient of 3, 5, or 7 gradient segments. The gradient slope of the middle segment(s) is greater than the gradient slope of the segments at either end, so that in the extended area the transparency values of the two sub-regions close to the foreground image region and close to the original blank image region change slowly, while the transparency values of the middle sub-region, away from the foreground image region, change quickly. This makes the color transition of the boundary region more natural during color blending.
In one embodiment, the K gradient segments comprise a first gradient segment, a second gradient segment after the first gradient segment, and a third gradient segment after the second gradient segment. As shown in Fig. 4, step S103 comprises the following steps:
Step S1031: configure the transparency channels of the 1st pixel through the first breakpoint pixel of the M pixels (the breakpoint expression is a formula omitted from this text) to decrease linearly within the first gradient segment, in the direction from the foreground image region toward the original blank image region.
Step S1032: configure the transparency channels of the pixels between the first and second breakpoints of the M pixels (breakpoint expressions omitted) to decrease linearly within the second gradient segment, in the direction from the foreground image region toward the original blank image region; the gradient slope of the second gradient segment is greater than the gradient slope of the first gradient segment.
Step S1033: configure the transparency channels of the pixels from the second breakpoint through the M-th pixel to decrease linearly within the third gradient segment, in the direction from the foreground image region toward the original blank image region; the gradient slope of the third gradient segment is smaller than the gradient slope of the second gradient segment.
Note that steps S1031 to S1033 have no fixed execution order; in practice, the transparency values of the transparency channels of the pixels in the extended area may be configured simultaneously or in a set order.
Specifically, in a program the transparency value alpha of the transparency channel (Alpha) of each pixel of the extended area can be normalized, so that the normalized transparency value lies in the range (0, 1). Correspondingly, the normalized transparency values of the first gradient segment lie in the range (0, 0.3): the normalized transparency values of the transparency channels of the pixels from the 1st pixel up to the first breakpoint decrease linearly within (0, 0.3). The normalized transparency values of the second gradient segment lie in [0.3, 0.5): the normalized transparency values of the transparency channels of the pixels between the two breakpoints decrease linearly within [0.3, 0.5). The normalized transparency values of the third gradient segment lie in [0.5, 1): the normalized transparency values of the transparency channels of the pixels from the second breakpoint through the M-th pixel decrease linearly within [0.5, 1). As shown in Fig. 4, the abscissa represents the 1st to M-th pixels extended from one edge pixel of the foreground image region, and the ordinate represents the transparency value alpha of the corresponding transparency channel A (Alpha); the transparency value of the transparency channel of each of the M pixels in every group is determined from the piecewise function of the three gradient segments in Fig. 4.
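A three-segment piecewise-linear ramp with a steeper middle segment can be sketched as follows. The breakpoints (at M/4 and 3M/4) and the segment endpoint values are assumptions made for illustration, since the source's breakpoint formulas are omitted from this text:

```python
# Sketch: three-segment piecewise-linear alpha, steeper in the middle.
# Alpha is normalized to [0, 1] and decreases from the foreground edge
# (index 0) toward the blank region (index m - 1).

def piecewise_ramp(m, hi=1.0, a=0.9, b=0.1, lo=0.0):
    """Alpha for m extended pixels: hi -> a on the first quarter (gentle),
    a -> b on the middle half (steepest), b -> lo on the last quarter."""
    k1, k2 = m // 4, 3 * m // 4

    def seg(i, i0, i1, v0, v1):
        t = (i - i0) / (i1 - i0)          # position within the segment
        return v0 + t * (v1 - v0)         # linear interpolation

    out = []
    for i in range(m):
        if i < k1:
            out.append(seg(i, 0, k1, hi, a))
        elif i < k2:
            out.append(seg(i, k1, k2, a, b))
        else:
            out.append(seg(i, k2, m - 1, b, lo))
    return out

ramp = piecewise_ramp(8)
# Alpha decreases monotonically, with larger per-pixel steps in the middle
# segment than near either end, so the transition is gentle at both borders.
```

The gentler end segments correspond to the sub-regions near the foreground and near the blank region, where the text says the transparency should change slowly.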
Then step S104 is performed: obtaining the background image to be fused.
Specifically, a bitmap image (bitmap) in RGBA color mode read from the hard disk of the terminal device is determined as the background image to be fused. Of course, in a specific implementation the background image to be fused may also be a bitmap image (bitmap) in RGBA color mode downloaded from a network, or a background image to be fused formed by preprocessing, such as mirroring or color adjustment, of the target image to be fused. Note that if the obtained background image to be fused is a background image in another format, it is converted by format conversion into a bitmap image (bitmap) in RGBA color mode.
Note that step S104 is executed independently of steps S101 to S103; in a specific implementation it may be performed at the same time as steps S101 to S103, or after step S103.
After steps S101 to S104 are completed, step S105 is performed: based on the transparency values of the extended area, color-blending the target image to be fused with the background image to be fused to form the fusion result image.
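Step S105 can be sketched end to end under simplified assumptions (images as nested lists of RGB tuples, and the extended-area transparency expressed as a per-pixel alpha map; the actual implementation described below uses a rendering pipeline and textures instead):

```python
# End-to-end sketch of step S105: blend target over background using a
# per-pixel alpha map (255 in the foreground, a decreasing ramp in the
# extended area, 0 in the uncovered blank region).

def compose(target, alpha, background):
    """Return the fusion result image for equally sized inputs."""
    out = []
    for trow, arow, brow in zip(target, alpha, background):
        row = []
        for t, a255, b in zip(trow, arow, brow):
            a = a255 / 255.0
            row.append(tuple(round(tc * a + bc * (1 - a))
                             for tc, bc in zip(t, b)))
        out.append(row)
    return out

target = [[(255, 0, 0), (255, 0, 0), (255, 0, 0)]]
alpha = [[255, 128, 0]]                  # foreground / extended / blank
background = [[(0, 0, 255)] * 3]
result = compose(target, alpha, background)
# result[0][0] is the pure target colour, result[0][2] the pure background,
# and result[0][1] an intermediate colour on the boundary.
```

The three alpha values in the tiny example correspond exactly to the three regions discussed in the blending description: opaque foreground, gradient extended area, and uncovered blank region.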
In a specific implementation, a target rendering pipeline and a target shader program are invoked, and the target image to be fused and the background image to be fused are color-blended based on the transparency values of the extended area.
To perform the color blending and complete the rendering based on the target rendering pipeline and the target shader program, the following steps are included:
Step S1051: converting the target image to be fused into a target image texture.
Step S1052: converting the extended area into an extended area image texture.
Step S1053: converting the background image to be fused into a background image texture.
Steps S1051 to S1053 are executed independently and their order is not limited. After steps S1051 to S1053 are completed, step S1054 is performed: color-blending the target image texture with the background image texture according to the transparency values of the extended area image texture.
The color blending of the target image texture and the background image texture specifically uses three different blending modes for three different regions, described respectively below:
For the foreground image region: the image texture corresponding to the foreground image region in the target image texture is displayed. Specifically, the transparency value of each pixel of the foreground image region is 255 (i.e., opaque). When the target image texture is color-blended with the background image texture, since the transparency value of the foreground image region is 255, the blending result is the color of the corresponding region in the target image texture; this sub-region therefore displays the image texture corresponding to the foreground image region in the target image texture.
For the extended area: the image texture corresponding to the extended area in the target image texture is color-blended with the image texture corresponding to the extended area in the background image, yielding the image texture for the extended area.
The transparency value in the extended area is an integer in the range (0, 255), i.e., a normalized transparency value in (0, 1); accordingly, when the target image texture is color-blended with the background image texture, the blending result is a mixture of the color of the corresponding region in the target image texture and the color of the corresponding region in the background image texture.
For the blank image region not covered by the extended area within the original blank image region: the image texture corresponding to that uncovered blank image region in the background image texture is displayed.
Specifically, the transparency value of the blank image region not covered by the extended area within the original blank image region is 0 (i.e., fully transparent). When the target image texture is color-blended with the background image texture, since the transparency value of that blank image region is 0 (fully transparent), the blending result is the color of the corresponding region in the background image texture; this sub-region therefore displays the image texture of the background image texture.
The target rendering pipeline may be the OpenGL (Open Graphics Library) rendering pipeline; OpenGL is a professional graphics program interface. If color blending of the target image to be fused and the background image is to be performed based on OpenGL and a target shader program, the GPU (Graphics Processing Unit) of the terminal device invokes the OpenGL rendering pipeline and a Shader program to complete it. The detailed process is as follows:
Step 1: call a texture mapping function of the OpenGL rendering pipeline, such as glTexSubImage2D(), to convert the target image to be fused into the target image texture. glTexSubImage2D() is the texture mapping function in OpenGL used to specify a two-dimensional texture sub-image.
Step 2: call a texture mapping function of the OpenGL rendering pipeline, such as glTexSubImage2D(), to convert the extended area into the extended area image texture.
Step 3: call a texture mapping function of the OpenGL rendering pipeline, such as glTexSubImage2D(), to convert the background image to be fused into the background image texture.
Steps 1 to 3 are executed independently; their execution order is not limited here.
After steps 1 to 3 are completed, step 4 is performed: call the glBindTexture() function of the OpenGL rendering pipeline, color-blend the target image texture with the background image texture according to the extended area image texture, and perform the shading based on the Shader program.
The image texture corresponding to the extended area in the target image texture and the image texture corresponding to the extended area in the background image are color-blended to obtain the image texture for the extended area, specifically by the following formula:

Color_output = RGB_src × alpha + RGB_dst × (1 − alpha)

where Color_output is the output color for the i-th pixel in the extended area, RGB_src is the RGB color value of the i-th pixel of the extended area in the target image texture, RGB_dst is the RGB color value of the i-th pixel of the extended area in the background image texture, and alpha is the (normalized) transparency channel value of the i-th pixel in the extended area; i takes each pixel in the extended area in turn, and i is a positive integer.
Specifically, the R-color fusion of each pixel of the extended area is performed with the following formula:

R_output = R_src × alpha + R_dst × (1 − alpha)

where R_output is the R color output for the i-th pixel in the extended area, R_src is the R color value of the i-th pixel of the extended area in the target image texture, R_dst is the R color value of the i-th pixel of the extended area in the background image texture, and alpha is the transparency channel value of the i-th pixel in the extended area; i takes each pixel in the extended area in turn, and i is a positive integer.
Specifically, the B-color fusion of each pixel of the extended area is performed with the following formula:

B_output = B_src × alpha + B_dst × (1 − alpha)

where B_output is the B color output for the i-th pixel in the extended area, B_src is the B color value of the i-th pixel of the extended area in the target image texture, B_dst is the B color value of the i-th pixel of the extended area in the background image texture, and alpha is the transparency channel value of the i-th pixel in the extended area; i takes each pixel in the extended area in turn, and i is a positive integer.
Specifically, the G-color fusion of each pixel of the extended area is performed with the following formula:

G_output = G_src × alpha + G_dst × (1 − alpha)

where G_output is the G color output for the i-th pixel in the extended area, G_src is the G color value of the i-th pixel of the extended area in the target image texture, G_dst is the G color value of the i-th pixel of the extended area in the background image texture, and alpha is the transparency channel value of the i-th pixel in the extended area; i takes each pixel in the extended area in turn, and i is a positive integer.
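The per-channel fusion described above is the same source-over combination applied to each channel; a minimal sketch (with alpha already normalized to [0, 1]):

```python
# Minimal sketch of the per-channel fusion: out = src*alpha + dst*(1 - alpha).
# The R, G and B formulas in the text are this one function applied to each
# channel in turn.

def blend_channel(src, dst, alpha):
    """Source-over blend of one channel, alpha normalized to [0, 1]."""
    return src * alpha + dst * (1 - alpha)

r = blend_channel(200.0, 40.0, 0.25)   # quarter-opaque extended-area pixel
opaque = blend_channel(9.0, 3.0, 1.0)  # alpha 1: pure source (foreground)
clear = blend_channel(9.0, 3.0, 0.0)   # alpha 0: pure destination (blank)
```

The two endpoint calls illustrate the degenerate cases: a fully opaque pixel outputs the source color, and a fully transparent pixel outputs the destination color.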
Note that for the foreground image region and for the blank image region not covered by the extended area within the original blank image region, the target image texture and the background image texture are color-blended based on the following formula:

Color'_output = RGB'_src × alpha' + RGB'_dst × (1 − alpha')

where Color'_output is the output color of the j-th pixel, RGB'_src is the RGB color value of the j-th pixel in the target image texture, RGB'_dst is the RGB color value of the j-th pixel in the background image texture, and alpha' is the (normalized) transparency channel value of the j-th pixel; j takes each pixel of these regions in turn, and j is a positive integer.
Since the transparency value alpha' of each pixel in the foreground image region is 255 (i.e., 1 after normalization), the output color for the foreground image region is Color'_output = RGB'_src. Since the transparency value alpha' of the blank image region not covered by the extended area within the original blank image region is 0, for that uncovered blank image region Color'_output = RGB'_dst.
Based on the same inventive concept, an embodiment of the present invention provides an image synthesis device. As shown in Fig. 5, it comprises:
a first acquisition unit 201, configured to obtain the target image to be fused;
a segmentation unit 202, configured to segment the target image to be fused to obtain the original blank image region and the foreground image region;
an expansion unit 203, configured to extend from the foreground image region into the original blank image region to obtain an extended area, surrounding the foreground image region, whose transparency values gradually decrease;
a second acquisition unit 204, configured to obtain the background image to be fused;
a fusion unit 205, configured to color-blend, based on the transparency values of the extended area, the target image to be fused with the background image to be fused to form the fusion result image.
In a specific embodiment, the first acquisition unit 201 comprises:
an original image acquisition subunit, configured to capture the current real scene and generate an original image, or to read an original image from local storage;
a format conversion subunit, configured to format-convert the original image to form the target image to be fused in RGBA color mode.
In a specific embodiment, the expansion unit 203 comprises:
a first pixel extension subunit, configured to extend N pixels from each edge pixel of the foreground image region toward the original blank image region, N being a positive integer;
a first transparency channel configuration subunit, configured to configure the N transparency channels of the N pixels to decrease linearly in the direction from the foreground image region toward the original blank image region, obtaining the extended area with gradually decreasing transparency values.
In a specific embodiment, the expansion unit 203 comprises:
a second pixel extension subunit, configured to extend M pixels from each edge pixel of the foreground image region toward the original blank image region, M being a positive integer;
a second transparency channel configuration subunit, configured to configure the M transparency channels of the M pixels to decrease, in the direction from the foreground image region toward the original blank image region, as a gradient of K gradient segments, obtaining the extended area with gradually decreasing transparency values, K being a positive integer smaller than M.
In a specific embodiment, if the K gradient segments comprise a first gradient segment, a second gradient segment after the first gradient segment, and a third gradient segment after the second gradient segment, the second transparency channel configuration subunit is specifically configured to:
configure the transparency channels of the 1st pixel through the first breakpoint pixel of the M pixels (the breakpoint expression is a formula omitted from this text) to decrease linearly within the first gradient segment, in the direction from the foreground image region toward the original blank image region;
configure the transparency channels of the pixels between the first and second breakpoints of the M pixels to decrease linearly within the second gradient segment, in the direction from the foreground image region toward the original blank image region, the gradient slope of the second gradient segment being greater than the gradient slope of the first gradient segment;
configure the transparency channels of the pixels from the second breakpoint through the M-th pixel to decrease linearly within the third gradient segment, in the direction from the foreground image region toward the original blank image region, the gradient slope of the third gradient segment being smaller than the gradient slope of the second gradient segment.
In a specific embodiment, color-blending the target image to be fused with the background image to be fused based on the transparency values of the extended area comprises:
converting the target image to be fused into a target image texture;
converting the extended area into an extended area image texture;
converting the background image to be fused into a background image texture;
color-blending the target image texture with the background image texture according to the transparency values of the extended area image texture.
In a specific embodiment, the fusion unit 205 comprises:
an invocation subunit, configured to invoke the target rendering pipeline and the target shader program;
a fusion processing subunit, configured to color-blend, through the invoked target rendering pipeline and target shader program, the target image texture with the background image texture according to the transparency values of the extended area image texture, wherein the target image texture, the extended area image texture, and the background image texture are in a state recognizable by the target rendering pipeline.
In a specific embodiment, the fusion processing subunit is specifically configured to:
in the foreground image region, display the image texture corresponding to the foreground image region in the target image texture;
color-blend the image texture corresponding to the extended area in the target image texture with the image texture corresponding to the extended area in the background image, obtaining the image texture for the extended area;
in the blank image region not covered by the extended area within the original blank image region, display the image texture corresponding to that uncovered blank image region in the background image texture.
In a specific embodiment, the fusion processing subunit is specifically configured to obtain the image texture for the extended area by the following formula:

Color_output = RGB_src × alpha + RGB_dst × (1 − alpha)

where Color_output is the output color for the i-th pixel in the extended area, RGB_src is the RGB color value of the i-th pixel of the extended area in the target image texture, RGB_dst is the RGB color value of the i-th pixel of the extended area in the background image texture, and alpha is the transparency channel value of the i-th pixel in the extended area; i takes each pixel in the extended area in turn, and i is a positive integer.
Based on the same inventive concept, an embodiment of the present invention provides an image synthesis device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the steps described in the foregoing image synthesis method embodiments. Fig. 6 is a block diagram of an image synthesis device according to an exemplary embodiment. For example, the image synthesis device 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like. With reference to Fig. 6, the device 300 may include one or more of the following components: a processing component 302, a memory 304, a power component 306, a multimedia component 308, an audio component 310, an input/output (I/O) interface 312, a sensor component 314, and a communication component 316.
The processing component 302 generally controls the overall operation of the device 300, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 302 may include one or more processors 320 to execute instructions, to complete all or part of the steps of the above method. In addition, the processing component 302 may include one or more modules to facilitate interaction between the processing component 302 and other components; for example, the processing component 302 may include a multimedia module to facilitate interaction between the multimedia component 308 and the processing component 302.
The memory 304 is configured to store various types of data to support operation on the device 300. Examples of such data include instructions for any application or method operated on the device 300, contact data, phone book data, messages, pictures, videos, and so on. The memory 304 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 306 provides power to the various components of the device 300. The power component 306 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 300.
The multimedia component 308 includes a screen providing an output interface between the device 300 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 308 includes a front camera and/or a rear camera. When the device 300 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or rear camera may receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 310 is configured to output and/or input audio signals. For example, the audio component 310 includes a microphone (MIC) which, when the device 300 is in an operating mode such as a call mode, a recording mode, or a speech recognition mode, is configured to receive external audio signals. The received audio signals may be further stored in the memory 304 or transmitted via the communication component 316. In some embodiments, the audio component 310 also includes a speaker for outputting audio signals.
The I/O interface 312 provides an interface between the processing component 302 and peripheral interface modules, which may be keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor component 314 includes one or more sensors for providing status assessments of various aspects of the device 300. For example, the sensor component 314 may detect the open/closed state of the device 300 and the relative positioning of components (e.g., the display and keypad of the device 300); the sensor component 314 may also detect a change in position of the device 300 or of a component of the device 300, the presence or absence of user contact with the device 300, the orientation or acceleration/deceleration of the device 300, and a change in temperature of the device 300. The sensor component 314 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 316 is configured to facilitate wired or wireless communication between the device 300 and other devices. The device 300 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 316 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 316 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 300 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 304 including instructions, executable by the processor 320 of the device 300 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Based on the same inventive concept, an embodiment of the present invention provides a computer-readable storage medium 401. With reference to Fig. 7, a computer program 402 is stored thereon, and the program, when executed by a processor, implements the steps of any of the foregoing image synthesis method embodiments.
Since the electronic device introduced in this embodiment is the electronic device used to implement the image synthesis method in the embodiments of the present invention, based on the image synthesis method described in the embodiments of the present invention, those skilled in the art can understand the specific implementation of the electronic device of this embodiment and its various variants; how the electronic device implements the method in the embodiments of the present invention is therefore not discussed in detail here. Any electronic device used by those skilled in the art to implement the image synthesis method in the embodiments of the present invention falls within the scope the present invention intends to protect.
The technical solutions in the embodiments of the present invention have at least the following technical effects or advantages:
Because the target image is segmented to obtain the original blank image region and the foreground image region, and an extended area with gradually decreasing transparency values is expanded from the foreground image region into the original blank image region, when the target image and the background image are color-blended, the display effect of the boundary region between the foreground image and the background image is the color superposition of the corresponding regions of the target image and the background image. A color transition from the foreground image to the background image is therefore formed in the boundary region, instead of a stiff jump directly from the color of the foreground image to the color of the background image; the rendering effect of the fused image is consequently more natural, improving the image rendering effect.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data-processing device produce an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data-processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or another programmable data-processing device, so that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make other changes and modifications to these embodiments. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to encompass them.

Claims (10)

  1. An image combining method, characterized by comprising:
    obtaining a target image to be fused;
    performing segmentation processing on the target image to be fused to obtain an original blank image region and a foreground image region;
    extending from the foreground image region into the original blank image region to obtain, around the foreground image region, an extended area whose transparency values gradually decrease;
    obtaining a background image to be fused;
    performing, based on the transparency values of the extended area, color blending of the target image to be fused and the background image to be fused to form a fusion result image.
  2. The image combining method according to claim 1, characterized in that obtaining the target image to be fused comprises:
    capturing a current real scene to generate an original image, or reading an original image from local storage;
    converting the format of the original image to form the target image to be fused in RGBA color mode.
  3. The image combining method according to claim 1 or 2, characterized in that extending from the foreground image region into the original blank image region to obtain, around the foreground image region, the extended area whose transparency values gradually decrease comprises:
    extending N pixels from each edge pixel of the foreground image region into the original blank image region, N being a positive integer;
    configuring the N transparency channels corresponding to the N pixels to decrease linearly in the direction from the foreground image region to the original blank image region, obtaining the extended area whose transparency values gradually decrease.
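The linear configuration of claim 3 can be sketched as follows. The claim does not fix the exact endpoints of the ramp; this sketch assumes the transparency would reach zero one pixel beyond the Nth extended pixel:

```python
def linear_alpha_ramp(n):
    """Transparency values for the N extended pixels, decreasing linearly
    with distance from the foreground edge (pixel 1 is nearest the edge).
    Endpoint handling is an assumption; the claim only requires linearity."""
    return [1.0 - i / (n + 1) for i in range(1, n + 1)]

ramp = linear_alpha_ramp(4)   # e.g. N = 4 extended pixels
```
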
  4. The image combining method according to claim 1 or 2, characterized in that extending from the foreground image region into the original blank image region to obtain, around the foreground image region, the extended area whose transparency values gradually decrease comprises:
    extending M pixels from each edge pixel of the foreground image region into the original blank image region, M being a positive integer;
    configuring the M transparency channels corresponding to the M pixels to decrease in K gradient segments in the direction from the foreground image region to the original blank image region, obtaining the extended area whose transparency values gradually decrease, K being a positive integer smaller than M.
  5. The image combining method according to claim 4, characterized in that the K gradient segments comprise: a first gradient segment, a second gradient segment after the first gradient segment, and a third gradient segment after the second gradient segment;
    configuring the M transparency channels corresponding to the M pixels to decrease in K gradient segments in the direction from the foreground image region to the original blank image region comprises:
    configuring the transparency channels corresponding to the 1st through the __-th pixels of the M pixels to decrease linearly within the first gradient segment in the direction from the foreground image region to the original blank image region;
    configuring the transparency channels corresponding to the __-th through the __-th pixels of the M pixels to decrease linearly within the second gradient segment in the direction from the foreground image region to the original blank image region, the gradient slope of the second gradient segment being greater than the gradient slope of the first gradient segment;
    configuring the transparency channels corresponding to the __-th through the M-th pixels to decrease linearly within the third gradient segment in the direction from the foreground image region to the original blank image region, the gradient slope of the third gradient segment being smaller than the gradient slope of the second gradient segment.
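The published text omits the segment boundary values of claim 5 (they appear as formula images in the original), so the breakpoints b1, b2 and the intermediate alpha targets a1, a2 below are purely hypothetical; the sketch only demonstrates the claimed slope relationship — the second segment falls fastest and the third falls more gently than the second:

```python
def piecewise_alpha_ramp(m, b1, b2, a1=0.85, a2=0.15):
    """Transparency values for M extended pixels in three linear segments:
    pixels 1..b1 fall gently from 1.0 to a1, pixels b1..b2 fall steeply
    from a1 to a2, and pixels b2..m fall gently from a2 to 0.0.
    b1, b2, a1, a2 are illustrative assumptions, not values from the patent."""
    alphas = []
    for i in range(1, m + 1):
        if i <= b1:                                    # first gradient segment
            a = 1.0 + (a1 - 1.0) * i / b1
        elif i <= b2:                                  # second (steepest) segment
            a = a1 + (a2 - a1) * (i - b1) / (b2 - b1)
        else:                                          # third gradient segment
            a = a2 + (0.0 - a2) * (i - b2) / (m - b2)
        alphas.append(a)
    return alphas

ramp = piecewise_alpha_ramp(9, 3, 6)
```

With these assumed values the per-pixel drop is 0.05 in the outer segments and about 0.23 in the middle segment, satisfying the slope ordering required by the claim.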
  6. The image combining method according to claim 1 or 2, characterized in that performing, based on the transparency values of the extended area, color blending of the target image to be fused and the background image to be fused comprises:
    converting the target image to be fused into a target image texture;
    converting the extended area into an extended-area image texture;
    converting the background image to be fused into a background image texture;
    performing color blending of the target image texture and the background image texture according to the transparency values of the extended-area image texture.
  7. The image combining method according to claim 6, characterized in that performing color blending of the target image texture and the background image texture according to the transparency values of the extended-area image texture comprises:
    invoking a target render pipeline and a target shader program to color-blend the target image texture and the background image texture according to the transparency values of the extended-area image texture, wherein the target image texture, the extended-area image texture, and the background image texture are in a state recognizable by the target render pipeline.
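Claims 6 and 7 hand the blend to a render pipeline and shader program; on a GPU this is typically the standard SRC_ALPHA / ONE_MINUS_SRC_ALPHA blend. Below is a CPU sketch of the per-pixel equation such a pipeline would apply — the pixel values are assumptions, and no actual pipeline API is shown:

```python
def blend_pixel(target_rgb, background_rgb, alpha):
    """Per-channel blend: out = target * a + background * (1 - a),
    with `alpha` read from the extended-area texture."""
    return tuple(t * alpha + b * (1.0 - alpha)
                 for t, b in zip(target_rgb, background_rgb))

# A half-transparent red target pixel over a blue background pixel.
out = blend_pixel((255.0, 0.0, 0.0), (0.0, 0.0, 255.0), 0.5)
```
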
  8. An image synthesizing device, characterized by comprising:
    a first acquisition unit configured to obtain a target image to be fused;
    a segmentation unit configured to perform segmentation processing on the target image to be fused to obtain an original blank image region and a foreground image region;
    an extension unit configured to extend from the foreground image region into the original blank image region to obtain an extended area whose transparency values gradually decrease around the foreground image region;
    a second acquisition unit configured to obtain a background image to be fused;
    a fusion unit configured to perform, based on the transparency values of the extended area, color blending of the target image to be fused and the background image to be fused to form a fusion result image.
  9. An image synthesizing apparatus comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the method according to any one of claims 1-7.
  10. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the method according to any one of claims 1-7.
CN201711331637.9A 2017-12-13 2017-12-13 Image combining method and device Pending CN107958449A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711331637.9A CN107958449A (en) 2017-12-13 2017-12-13 Image combining method and device

Publications (1)

Publication Number Publication Date
CN107958449A 2018-04-24

Family

ID=61958832



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130120616A1 (en) * 2011-11-14 2013-05-16 Casio Computer Co., Ltd. Image synthesizing apparatus, image recording method, and recording medium
CN104992412A (en) * 2015-06-10 2015-10-21 北京金山安全软件有限公司 Picture gradual change method and device
CN105430295A (en) * 2015-10-30 2016-03-23 努比亚技术有限公司 Device and method for image processing
CN105608716A (en) * 2015-12-21 2016-05-25 联想(北京)有限公司 Information processing method and electronic equipment
CN107092684A (en) * 2017-04-21 2017-08-25 腾讯科技(深圳)有限公司 Image processing method and device, storage medium
CN107369188A (en) * 2017-07-12 2017-11-21 北京奇虎科技有限公司 The synthetic method and device of image


Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805849A (en) * 2018-05-22 2018-11-13 北京京东金融科技控股有限公司 Image interfusion method, device, medium and electronic equipment
CN108805849B (en) * 2018-05-22 2020-07-31 京东数字科技控股有限公司 Image fusion method, device, medium and electronic equipment
CN108777783A (en) * 2018-07-09 2018-11-09 广东交通职业技术学院 A kind of image processing method and device
CN108848389A (en) * 2018-07-27 2018-11-20 恒信东方文化股份有限公司 A kind of panoramic video processing method, apparatus and system
CN109961453A (en) * 2018-10-15 2019-07-02 华为技术有限公司 A kind of image processing method, device and equipment
CN109961453B (en) * 2018-10-15 2021-03-12 华为技术有限公司 Image processing method, device and equipment
CN109886010B (en) * 2019-01-28 2023-10-17 平安科技(深圳)有限公司 Verification picture sending method, verification picture synthesizing method and device, storage medium and terminal
CN109886010A (en) * 2019-01-28 2019-06-14 平安科技(深圳)有限公司 Verify picture sending method, synthetic method and device, storage medium and terminal
CN110060197A (en) * 2019-03-26 2019-07-26 浙江达峰科技有限公司 A kind of 3D rendering interactive module and processing method
CN110062176A (en) * 2019-04-12 2019-07-26 北京字节跳动网络技术有限公司 Generate method, apparatus, electronic equipment and the computer readable storage medium of video
CN110971839A (en) * 2019-11-18 2020-04-07 咪咕动漫有限公司 Video fusion method, electronic device and storage medium
CN111191580A (en) * 2019-12-27 2020-05-22 支付宝(杭州)信息技术有限公司 Synthetic rendering method, apparatus, electronic device and medium
CN112037291A (en) * 2020-08-31 2020-12-04 维沃移动通信有限公司 Data processing method and device and electronic equipment
WO2022042572A1 (en) * 2020-08-31 2022-03-03 维沃移动通信有限公司 Data processing method and apparatus, and electronic device
CN112037291B (en) * 2020-08-31 2024-03-22 维沃移动通信有限公司 Data processing method and device and electronic equipment
CN112598694A (en) * 2020-12-31 2021-04-02 深圳市即构科技有限公司 Video image processing method, electronic device and storage medium
CN112929561A (en) * 2021-01-19 2021-06-08 北京达佳互联信息技术有限公司 Multimedia data processing method and device, electronic equipment and storage medium
CN112929561B (en) * 2021-01-19 2023-04-28 北京达佳互联信息技术有限公司 Multimedia data processing method and device, electronic equipment and storage medium
CN113012188A (en) * 2021-03-23 2021-06-22 影石创新科技股份有限公司 Image fusion method and device, computer equipment and storage medium
CN113407289A (en) * 2021-07-13 2021-09-17 上海米哈游璃月科技有限公司 Wallpaper switching method, wallpaper generation method, device and storage medium

Similar Documents

Publication Publication Date Title
CN107958449A (en) A kind of image combining method and device
US9547427B2 (en) User interface with color themes based on input image data
CN104221358B (en) Unified slider control for modifying multiple image properties
CN110706310B (en) Image-text fusion method and device and electronic equipment
JP6355746B2 (en) Image editing techniques for devices
US9832378B2 (en) Exposure mapping and dynamic thresholding for blending of multiple images using floating exposure
CN110675310A (en) Video processing method and device, electronic equipment and storage medium
CN106713696B (en) Image processing method and device
CN106341574B (en) Method of color gamut mapping of color and device
CN110609649B (en) Interface display method, device and storage medium
CN107948733A (en) Method of video image processing and device, electronic equipment
CN111338743B (en) Interface processing method and device and storage medium
CN106331427B (en) Saturation degree Enhancement Method and device
EP3285474B1 (en) Colour gamut mapping method and apparatus, computer program and recording medium
US9053568B2 (en) Applying a realistic artistic texture to images
US10204403B2 (en) Method, device and medium for enhancing saturation
CN105574834A (en) Image processing method and apparatus
CN107230428A (en) Display methods, device and the terminal of curve screens
CN110880164B (en) Image processing method, device, equipment and computer storage medium
CN106775548B (en) page processing method and device
CN109426522A (en) Interface processing method, device, equipment, medium and the operating system of mobile device
WO2023045946A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN114327166A (en) Image processing method and device, electronic equipment and readable storage medium
CN115484345A (en) Display mode switching method and device, electronic equipment and medium
CN114845157A (en) Video processing method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180424