CN105574834B - Image processing method and device - Google Patents
- Publication number: CN105574834B
- Application number: CN201510982411.XA
- Authority
- CN
- China
- Prior art keywords
- image
- edge
- processing
- input
- diffusion
- Prior art date: 2015-12-23
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The disclosure relates to an image processing method and device. The image processing method includes: acquiring an edge image of an input image; performing diffusion processing on the edge image to obtain an edge diffusion image; performing feature stream filtering on the edge diffusion image to obtain a first image; performing stretch processing on the input image or a grayscale image of the input image to obtain a second image; and performing fusion processing according to an input texture image, the first image, and the second image to obtain a target image. In the embodiments of the disclosure, diffusion processing is performed on the acquired edge image to obtain an edge diffusion image; feature stream filtering is performed on the edge diffusion image to obtain a first image; stretch processing is performed on the input image or its grayscale image to obtain a second image; and fusion processing is then performed according to the input texture image, the first image, and the second image, so that the generated target image has a good non-photorealistic effect.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
In recent years, with the rapid development of image processing technology, non-photorealistic processing of videos, images, and three-dimensional models has become increasingly popular. Non-photorealistic processing refers to the technique of using a computer to generate graphics that have a hand-drawn style rather than photographic realism. Non-photorealistic rendering does not aim at the realism of the graphic; it primarily aims to render the artistic quality of the graphic, to simulate an artwork, or to serve as an effective complement to photorealistic graphics.
How to process an image with non-photorealistic techniques so that the processed image has a good special effect is a technical problem that urgently needs to be solved.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image processing method and apparatus.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including:
acquiring an edge image of an input image;
performing diffusion processing on the edge image to obtain an edge diffusion image;
performing feature stream filtering processing on the edge diffusion image to obtain a first image;
stretching the input image or the gray level image of the input image to obtain a second image;
and carrying out fusion processing according to the input texture image, the first image and the second image to obtain a target image.
In one embodiment, the acquiring an edge image of the input image includes:
if the input image is a color image, converting the input color image into a gray image, and extracting an edge image of the gray image;
and if the input image is a gray image, directly extracting an edge image of the gray image.
In an embodiment, the performing diffusion processing on the edge image to obtain an edge diffusion image includes:
performing diffusion processing on the edge image to obtain an edge response image;
and performing diffusion processing on the edge response image to obtain the edge diffusion image.
In an embodiment, the performing feature stream filtering processing on the edge diffusion image includes:
and calculating the edge tangent direction of the input image, and performing linear convolution on the edge diffusion image according to the edge tangent direction.
In an embodiment, the stretching the input image or the grayscale image of the input image to obtain the second image includes:
obtaining a corresponding relation between the image and the histogram curve;
inquiring the corresponding relation according to the input image or the gray level image of the input image to obtain a target histogram curve;
and obtaining the histogram distribution of the input image, and adjusting the histogram distribution of the input image according to the target histogram curve to generate the second image.
In an embodiment, the obtaining a target image by performing fusion processing according to the input texture image, the first image, and the second image includes:
performing texture fusion according to the texture image and the second image to obtain a third image;
and performing edge contour fusion processing according to the first image and the third image to obtain the target image.
In an embodiment, the method further comprises:
converting a red, green, blue, RGB, color space to a luminance color, LAB, color space, and representing the target image as a luminance of the LAB color space;
converting the LAB color space to the RGB color space to obtain a color image of the target image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
an acquisition module configured to acquire an edge image of an input image;
the diffusion processing module is configured to perform diffusion processing on the edge image obtained by the obtaining module to obtain an edge diffusion image;
the filtering processing module is configured to perform feature stream filtering processing on the edge diffusion image obtained by the diffusion processing module to obtain a first image;
the stretching processing module is configured to stretch the input image or the gray level image of the input image to obtain a second image;
and the fusion processing module is configured to perform fusion processing according to the input texture image, the first image obtained by the filtering processing module and the second image obtained by the stretching processing module to obtain a target image.
In one embodiment, the obtaining module includes:
a conversion extraction sub-module configured to convert the input color image into a grayscale image and extract an edge image of the grayscale image if the input image is the color image;
an extraction sub-module configured to directly extract an edge image of the grayscale image if the input image is a grayscale image.
In one embodiment, the diffusion processing module comprises:
the first diffusion processing submodule is configured to perform diffusion processing on the edge image to obtain an edge response image;
and the second diffusion processing submodule is configured to perform diffusion processing on the edge response image obtained by the first diffusion processing submodule to obtain the edge diffusion image.
In an embodiment, the filtering processing module is configured to:
and calculating the edge tangent direction of the input image, and performing linear convolution on the edge diffusion image according to the edge tangent direction.
In one embodiment, the stretch processing module includes:
a first obtaining sub-module configured to obtain a correspondence of the image and the histogram curve;
a second obtaining sub-module configured to query the corresponding relation obtained by the first obtaining sub-module according to the input image or a gray level image of the input image to obtain a target histogram curve;
an adjusting sub-module configured to obtain a histogram distribution of the input image, and adjust the histogram distribution of the input image according to the target histogram curve obtained by the second obtaining sub-module to generate the second image.
In one embodiment, the fusion processing module comprises:
the texture fusion submodule is configured to perform texture fusion according to the texture image and the second image to obtain a third image;
and the edge contour fusion submodule is configured to perform edge contour fusion processing according to the first image and the third image obtained by the texture fusion submodule to obtain the target image.
In one embodiment, the apparatus further comprises:
a conversion representation module configured to convert a red, green, blue, RGB, color space to a luminance color, LAB, color space and to represent the target image as a luminance of the LAB color space;
a conversion module configured to convert the LAB color space to the RGB color space to obtain a color image of the target image.
According to a third aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring an edge image of an input image;
performing diffusion processing on the edge image to obtain an edge diffusion image;
performing feature stream filtering processing on the edge diffusion image to obtain a first image;
stretching the input image or the gray level image of the input image to obtain a second image;
and carrying out fusion processing according to the input texture image, the first image and the second image to obtain a target image.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects: diffusion processing is performed on the obtained edge image to obtain an edge diffusion image; feature stream filtering is performed on the edge diffusion image to obtain a first image; the input image or its grayscale image is stretched to obtain a second image; and fusion processing is then performed according to the input texture image, the first image, and the second image to obtain a target image, so that the generated target image has a good non-photorealistic effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment.
FIG. 2 is a flow chart illustrating a method of obtaining an edge diffusion image according to an exemplary embodiment.
FIG. 3 is a flow chart illustrating obtaining a second image according to an exemplary embodiment.
FIG. 4 is a flow chart illustrating a method of obtaining a target image according to an exemplary embodiment.
FIG. 5 is a flow diagram illustrating another method of image processing according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating another image processing apparatus according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating another image processing apparatus according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating another image processing apparatus according to an exemplary embodiment.
Fig. 10 is a block diagram illustrating another image processing apparatus according to an exemplary embodiment.
Fig. 11 is a block diagram illustrating another image processing apparatus according to an exemplary embodiment.
Fig. 12 is a block diagram illustrating an apparatus for image processing according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment. The method is applicable to an image processing apparatus and, as illustrated in fig. 1, includes the following steps S101-S105:
In step S101, an edge image of an input image is acquired.
In this embodiment, if the input image is a color image, the input color image may be converted into a grayscale image, and then an edge image of the grayscale image may be extracted. If the input image is a grayscale image, the edge image of the grayscale image can be directly extracted.
Assuming that the converted grayscale image or the input grayscale image is G, an edge image E of the grayscale image can be extracted by the following formula:

$$G_x = h_x \otimes G, \qquad G_y = h_y \otimes G, \qquad E = \sqrt{G_x^2 + G_y^2}$$

where $G_x$ represents the gradient in the horizontal direction of the grayscale image, $G_y$ represents the gradient in the vertical direction of the grayscale image, $h_x$ represents a horizontal template, $h_y$ represents a vertical template, and $\otimes$ represents a convolution operation.
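By way of illustration, a minimal NumPy/SciPy sketch of this extraction step is given below; the 3×3 Sobel kernels stand in for the horizontal and vertical templates $h_x$ and $h_y$, which the formula above leaves open:

```python
import numpy as np
from scipy.ndimage import convolve

def edge_image(gray: np.ndarray) -> np.ndarray:
    """Extract the edge image E from a grayscale image G (float array)."""
    # Assumed templates: 3x3 Sobel kernels for h_x and h_y.
    hx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    hy = hx.T
    gx = convolve(gray, hx)  # horizontal gradient G_x = h_x (*) G
    gy = convolve(gray, hy)  # vertical gradient   G_y = h_y (*) G
    return np.sqrt(gx ** 2 + gy ** 2)
```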
In step S102, the edge image is subjected to diffusion processing to obtain an edge diffusion image.
As shown in fig. 2, the step S102 may include the steps of:
In step S1021, diffusion processing is performed on the edge image to obtain an edge response image.
In this embodiment, edge diffusion processing in 8 directions may be performed on the edge image, so that each pixel point obtains 8 response values, and the maximum of these response values is then taken to form the edge response image.
In step S1022, the edge response image is subjected to diffusion processing to obtain an edge diffusion image.
After the edge response image is obtained, secondary diffusion processing is performed on it; that is, 8-direction diffusion is performed on the edge response image, and the 8 directional response values are accumulated to obtain the final edge diffusion image S:

$$S = \sum_{i=1}^{8} S_i$$

where $S_i$ denotes the diffusion response in the i-th direction.
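The patent text does not spell out the directional diffusion operator itself. One plausible reading, sketched below under that assumption, convolves the image with line-shaped kernels in 8 orientations, taking the per-pixel maximum in the first pass (step S1021) and the accumulated sum in the second pass (step S1022); the kernel shape and length are illustrative choices:

```python
import numpy as np
from scipy.ndimage import convolve

def line_kernel(angle_deg: float, length: int = 9) -> np.ndarray:
    """Normalized line-shaped kernel at the given orientation (assumed form)."""
    k = np.zeros((length, length))
    c = length // 2
    t = np.radians(angle_deg)
    for i in range(-c, c + 1):
        k[c + int(round(i * np.sin(t))), c + int(round(i * np.cos(t)))] = 1.0
    return k / k.sum()

def edge_diffusion(edge: np.ndarray) -> np.ndarray:
    """Two-pass 8-direction diffusion: max response, then summed responses."""
    kernels = [line_kernel(a) for a in np.linspace(0.0, 180.0, 8, endpoint=False)]
    # Pass 1 (S1021): maximum of the 8 directional responses per pixel.
    response = np.max([convolve(edge, k) for k in kernels], axis=0)
    # Pass 2 (S1022): diffuse again and accumulate S = sum of S_i.
    return np.sum([convolve(response, k) for k in kernels], axis=0)
```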
In step S103, feature stream filtering processing is performed on the edge diffusion image to obtain a first image.
The feature stream filtering processing performed on the edge diffusion image may be as follows: the edge tangent direction of the input image is calculated, and linear convolution is performed on the edge diffusion image according to the edge tangent direction.
In this embodiment, the process of calculating the edge tangent direction of the input image may be:
The edge tangent direction of the input image is calculated at each pixel position; that is, the eigenvector of the structure tensor at the current pixel position is obtained, where the structure tensor T can be obtained by the following formula:

$$T = G * \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}$$

where G is a Gaussian function, $I_x$ is the gradient in the horizontal direction, and $I_y$ is the gradient in the vertical direction. The eigenvector U of the structure tensor then represents the smoothed edge gradient vector and can be obtained from the following equation:

$$T\,U = \lambda_2 U$$

where $\lambda_2$ represents the corresponding eigenvalue.
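A hedged sketch of this tangent-field computation follows; it builds the Gaussian-smoothed structure tensor and extracts, in closed form, the eigenvector belonging to the minor eigenvalue $\lambda_2$, with the smoothing scale `sigma` as an assumed parameter:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def edge_tangent_field(gray: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Per-pixel unit tangent vectors (h, w, 2) from the smoothed structure tensor."""
    ix = sobel(gray, axis=1)  # horizontal gradient I_x
    iy = sobel(gray, axis=0)  # vertical gradient   I_y
    # Gaussian-smoothed structure tensor entries T = G * [Ix^2 IxIy; IxIy Iy^2].
    exx = gaussian_filter(ix * ix, sigma)
    exy = gaussian_filter(ix * iy, sigma)
    eyy = gaussian_filter(iy * iy, sigma)
    # Minor eigenvalue lambda_2 of a symmetric 2x2 matrix, in closed form.
    lam2 = 0.5 * (exx + eyy - np.sqrt((exx - eyy) ** 2 + 4.0 * exy ** 2))
    # (lambda_2 - E_yy, E_xy) solves (T - lambda_2 I) v = 0, giving the tangent.
    tx, ty = lam2 - eyy, exy
    norm = np.hypot(tx, ty) + 1e-12
    return np.stack([tx / norm, ty / norm], axis=-1)
```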
In this embodiment, the process of performing linear convolution on the edge diffusion image according to the edge tangent direction may be as follows:
At any point on the edge diffusion image, a curve of a certain length extending forward and backward along the tangent direction of the gradient is taken as a direction line, and the accumulated integral value along it is taken as the target value of the current pixel, thereby obtaining the first image. This process can be expressed by the following formula:

$$S_{lic}(x) = \sum_{i=-b}^{f} w(i)\,\mathrm{Texture}\big(c_x(i)\big)$$

where w represents the weight along the direction line, the length of the direction line during convolution is set to $L = 2 \times 3\sigma + 1$, σ is the Gaussian weighting parameter, f represents the number of pixels accumulated forward, b represents the number of pixels accumulated backward, $c_x(i)$ denotes the i-th sample point of the direction line through pixel x, and Texture represents the edge diffusion image.
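A compact sketch of the line integral convolution under these definitions is given below; the straight-line stepping along the local tangent and the nearest-pixel sampling are simplifying assumptions rather than the patent's prescribed discretization:

```python
import numpy as np

def lic_filter(texture: np.ndarray, tangent: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Accumulate Gaussian-weighted texture values along each pixel's direction line."""
    h, w = texture.shape
    half = 3 * int(sigma)  # direction line holds L = 2 * 3*sigma + 1 samples
    offsets = np.arange(-half, half + 1)
    weights = np.exp(-offsets ** 2 / (2.0 * sigma ** 2))
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    acc = np.zeros_like(texture)
    for i, wt in zip(offsets, weights):
        # Step i pixels along the local tangent (straight-line approximation).
        sx = np.clip(xs + i * tangent[..., 0], 0, w - 1).astype(int)
        sy = np.clip(ys + i * tangent[..., 1], 0, h - 1).astype(int)
        acc += wt * texture[sy, sx]
    return acc / weights.sum()
```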
In this embodiment, performing linear convolution along the feature stream of the input image enhances edge smoothing and greatly improves the effect of subsequent image processing.
In step S104, the input image or the grayscale image of the input image is subjected to stretching processing to obtain a second image.
As shown in fig. 3, step S104 may include the steps of:
In step S1041, a correspondence relationship between the image and the histogram curve is obtained.
In this embodiment, a set of photo-like images may be collected in advance, and the correspondence between images and histogram curves is obtained from their statistics.
In step S1042, a correspondence between the image and the histogram curve is queried according to the input image or the grayscale image of the input image, so as to obtain a target histogram curve.
In this embodiment, if the input image is a grayscale image, the input image may be used to query the correspondence between the image and the histogram curve to obtain the target histogram curve. If the input image is a color image, a gray scale image of the color image is obtained, and the gray scale image of the color image can be used to query the corresponding relation between the image and the histogram curve to obtain the target histogram curve.
In step S1043, a histogram distribution of the input image is obtained, and the histogram distribution of the input image is adjusted according to the target histogram curve to generate a second image.
In this embodiment, after the histogram distribution of the input image is obtained, it may be adjusted according to the target histogram curve, thereby generating the second image. Various histogram matching methods may be employed when adjusting the histogram distribution of the input image according to the target histogram curve, such as the single mapping law (SML) and group mapping law (GML) matching methods.
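As a sketch of an SML-style adjustment, assuming 256 gray levels and a target histogram curve supplied as a 256-bin array, the input's cumulative distribution can be matched to that of the target:

```python
import numpy as np

def match_histogram(gray: np.ndarray, target_hist: np.ndarray) -> np.ndarray:
    """Remap a uint8 grayscale image so its histogram follows target_hist."""
    src_hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    tgt_cdf = np.cumsum(target_hist.astype(np.float64)) / target_hist.sum()
    # Single-mapping: each source level maps to the target level whose
    # cumulative distribution value is closest from above.
    mapping = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return mapping[gray]
```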
In step S105, a fusion process is performed on the input texture image, the first image, and the second image, and a target image is obtained.
As shown in fig. 4, step S105 may include the steps of:
In step S1051, texture fusion is performed according to the texture image and the second image, so as to obtain a third image.
In this embodiment, the parameter β* may be obtained by minimizing the following formula:

$$\beta^* = \arg\min_{\beta} \big\| \beta(x)\ln P(x) - \ln T(x) \big\|_2^2 + \lambda \big\| \nabla \beta(x) \big\|_2^2$$

where λ takes the value 0.2 and P(x) is the input texture image; the formula is obtained by taking the logarithm of the relation $P(x)^{\beta(x)} \approx T(x)$, where T(x) is the second image. The third image T is then:

$$T = P(x)^{\beta^*(x)}$$
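One hedged way to realize this minimization: for λ = 0 the per-pixel solution is β(x) = ln T(x) / ln P(x), and the smoothness term can be approximated by Gaussian-filtering that solution. The sketch below uses this shortcut, which is not necessarily the solver intended by the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def texture_fusion(p: np.ndarray, t: np.ndarray, smooth_sigma: float = 5.0) -> np.ndarray:
    """Third image = P(x) ** beta*(x), fitting P**beta ~= T; inputs in (0, 1]."""
    eps = 1e-4
    p = np.clip(p, eps, 1.0)  # input texture image P(x)
    t = np.clip(t, eps, 1.0)  # second (tone-adjusted) image T(x)
    beta = np.log(t) / np.log(p)                # per-pixel fit for lambda = 0
    beta = gaussian_filter(beta, smooth_sigma)  # stand-in for the gradient penalty
    return np.power(p, beta)
```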
In step S1052, edge contour fusion processing is performed on the first image and the third image to obtain a target image.
In this embodiment, the target image S can be obtained by the following formula:

$$S = S_{lic} \cdot T$$

where T is the third image and $S_{lic}$ is the first image, the product being taken pixel by pixel.
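Putting the earlier sketches together, the final pixel-wise product and the whole pipeline of steps S101-S105 might be assembled as follows; every function name refers to the illustrative sketches above, not to an API defined by the patent, and value-range normalization between stages is glossed over:

```python
import numpy as np

def npr_pipeline(gray: np.ndarray, texture: np.ndarray,
                 target_hist: np.ndarray) -> np.ndarray:
    """End-to-end sketch of steps S101-S105 (gray: float image in [0, 1])."""
    e = edge_image(gray)                                 # S101: edge image
    s = edge_diffusion(e)                                # S102: edge diffusion image
    s_lic = lic_filter(s, edge_tangent_field(gray))      # S103: first image
    t2 = match_histogram((gray * 255).astype(np.uint8),  # S104: second image
                         target_hist) / 255.0
    t3 = texture_fusion(texture, t2)                     # S105: third image
    return s_lic * t3                                    # target image S = S_lic * T
```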
The image processing method provided by this embodiment can be used to process portraits, landscapes, and the like, but is not limited thereto.
In this embodiment of the image processing method, diffusion processing is performed on the obtained edge image to obtain an edge diffusion image; feature stream filtering is performed on the edge diffusion image to obtain a first image; the input image or its grayscale image is stretched to obtain a second image; and fusion processing is then performed according to the input texture image, the first image, and the second image to obtain the target image, so that the generated target image has a good non-photorealistic effect.
Fig. 5 is a flowchart illustrating another image processing method according to an exemplary embodiment, and as shown in fig. 5, after step S105, the method may further include the steps of:
In step S106, the red, green, and blue (RGB) color space is converted into the luminance color (LAB) color space, and the target image is represented as the luminance of the LAB color space.
The target image acquired in the embodiment shown in fig. 1 is a single-channel image; if a color image is to be acquired, color conversion processing needs to be performed on the target image.
In this embodiment, the input image is first converted from the RGB color space to the LAB color space, and the target image is taken as the luminance of the LAB color space.
In step S107, the LAB color space is converted to the RGB color space to obtain a color image of the target image.
After representing the target image as the luminance of the LAB color space, the LAB color space may be converted to the RGB color space to obtain a color image of the target image.
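With OpenCV, this round trip can be sketched as follows: the LAB representation of the input image supplies the color channels, and the single-channel target image is written into the luminance channel before converting back. Replacing the L channel in this way is an assumed but common reading of these two steps:

```python
import cv2
import numpy as np

def colorize_target(input_bgr: np.ndarray, target_gray: np.ndarray) -> np.ndarray:
    """Return a color image of the single-channel (uint8) target image."""
    lab = cv2.cvtColor(input_bgr, cv2.COLOR_BGR2LAB)  # step S106: RGB -> LAB
    lab[..., 0] = target_gray                         # target image as luminance L
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)       # step S107: LAB -> RGB
```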
In the embodiment of the image processing method, the color conversion processing is performed on the target image to obtain the color image of the target image, so that the image color desired by the user can be obtained.
Corresponding to the foregoing embodiments of the image processing method, the present disclosure further provides embodiments of an image processing apparatus.
Fig. 6 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment, as illustrated in fig. 6, including: an acquisition module 61, a diffusion processing module 62, a filtering processing module 63, a stretching processing module 64, and a fusion processing module 65.
The acquisition module 61 is configured to acquire an edge image of the input image.
In this embodiment, if the input image is a color image, the input color image may be converted into a grayscale image, and then an edge image of the grayscale image may be extracted. If the input image is a grayscale image, the edge image of the grayscale image can be directly extracted.
Assuming that the converted grayscale image or the input grayscale image is G, an edge image E of the grayscale image can be extracted by the following formula:

$$G_x = h_x \otimes G, \qquad G_y = h_y \otimes G, \qquad E = \sqrt{G_x^2 + G_y^2}$$

where $G_x$ represents the gradient in the horizontal direction of the grayscale image, $G_y$ represents the gradient in the vertical direction of the grayscale image, $h_x$ represents a horizontal template, $h_y$ represents a vertical template, and $\otimes$ represents a convolution operation.
The diffusion processing module 62 is configured to perform diffusion processing on the edge image obtained by the obtaining module 61, resulting in an edge diffusion image.
The filtering processing module 63 is configured to perform feature stream filtering processing on the edge diffusion image obtained by the diffusion processing module 62 to obtain a first image.
The feature stream filtering processing performed on the edge diffusion image may be as follows: the edge tangent direction of the input image is calculated, and linear convolution is performed on the edge diffusion image according to the edge tangent direction.
In this embodiment, the process of calculating the edge tangent direction of the input image may be:
The edge tangent direction of the input image is calculated at each pixel position; that is, the eigenvector of the structure tensor at the current pixel position is obtained, where the structure tensor T can be obtained by the following formula:

$$T = G * \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}$$

where G is a Gaussian function, $I_x$ is the gradient in the horizontal direction, and $I_y$ is the gradient in the vertical direction. The eigenvector U of the structure tensor then represents the smoothed edge gradient vector and can be obtained from the following equation:

$$T\,U = \lambda_2 U$$

where $\lambda_2$ represents the corresponding eigenvalue.
In this embodiment, the process of performing linear convolution on the edge diffusion image according to the edge tangent direction may be as follows:
At any point on the edge diffusion image, a curve of a certain length extending forward and backward along the tangent direction of the gradient is taken as a direction line, and the accumulated integral value along it is taken as the target value of the current pixel, thereby obtaining the first image. This process can be expressed by the following formula:

$$S_{lic}(x) = \sum_{i=-b}^{f} w(i)\,\mathrm{Texture}\big(c_x(i)\big)$$

where w represents the weight along the direction line, the length of the direction line during convolution is set to $L = 2 \times 3\sigma + 1$, σ is the Gaussian weighting parameter, f represents the number of pixels accumulated forward, b represents the number of pixels accumulated backward, $c_x(i)$ denotes the i-th sample point of the direction line through pixel x, and Texture represents the edge diffusion image.
In this embodiment, performing linear convolution along the feature stream of the input image enhances edge smoothing and greatly improves the effect of subsequent image processing.
The stretching processing module 64 is configured to stretch the input image or the grayscale image of the input image, resulting in a second image.
The fusion processing module 65 is configured to perform fusion processing on the input texture image, the first image obtained by the filtering processing module 63, and the second image obtained by the stretching processing module 64 to obtain a target image.
The apparatus shown in fig. 6 is used for implementing the method flow shown in fig. 1; the related content is the same as described above and is not repeated here.
In this embodiment of the image processing apparatus, diffusion processing is performed on the obtained edge image to obtain an edge diffusion image; feature stream filtering is performed on the edge diffusion image to obtain a first image; the input image or its grayscale image is stretched to obtain a second image; and fusion processing is then performed according to the input texture image, the first image, and the second image to obtain the target image, so that the generated target image has a good non-photorealistic effect.
Fig. 7 is a block diagram of another image processing apparatus according to an exemplary embodiment, and as shown in fig. 7, on the basis of the embodiment shown in fig. 6, the obtaining module 61 may include: a transform extraction sub-module 611 and an extraction sub-module 612.
The conversion extraction sub-module 611 is configured to convert the input color image into a grayscale image and extract an edge image of the grayscale image if the input image is a color image.
The extraction sub-module 612 is configured to directly extract an edge image of the grayscale image if the input image is a grayscale image.
The apparatus shown in fig. 7 is used for implementing the method flow shown in fig. 1; the related content is the same as described above and is not repeated here.
This image processing apparatus embodiment, by extracting the edge image of the grayscale image, lays the groundwork for subsequently obtaining a better non-photorealistic effect.
Fig. 8 is a block diagram illustrating another image processing apparatus according to an exemplary embodiment, and as shown in fig. 8, on the basis of the above-described embodiment shown in fig. 6, the diffusion processing module 62 may include: a first diffusion processing sub-module 621 and a second diffusion processing sub-module 622.
The first diffusion processing sub-module 621 is configured to perform diffusion processing on the edge image, resulting in an edge response image.
In this embodiment, edge diffusion processing in 8 directions may be performed on the edge image, so that each pixel point obtains 8 response values, and the maximum of these response values is then taken to form the edge response image.
The second diffusion processing sub-module 622 is configured to perform diffusion processing on the edge response image obtained by the first diffusion processing sub-module 621, so as to obtain an edge diffusion image.
After the edge response image is obtained, secondary diffusion processing is performed on it; that is, 8-direction diffusion is performed on the edge response image, and the 8 directional response values are accumulated to obtain the final edge diffusion image S:

$$S = \sum_{i=1}^{8} S_i$$

where $S_i$ denotes the diffusion response in the i-th direction.
The apparatus shown in fig. 8 is used to implement the method flow shown in fig. 2; the related content is the same as described above and is not repeated here.
This image processing apparatus embodiment, by performing diffusion processing on the image, lays the groundwork for subsequently obtaining a better non-photorealistic effect.
Fig. 9 is a block diagram illustrating another image processing apparatus according to an exemplary embodiment, and as shown in fig. 9, on the basis of the above-mentioned embodiment shown in fig. 6, the stretch processing module 64 may include: a first obtaining sub-module 641, a second obtaining sub-module 642, and a tuning sub-module 643.
The first obtaining submodule 641 is configured to obtain a correspondence of the image to the histogram curve.
In this embodiment, a set of photo-like images may be collected in advance, and the correspondence between images and histogram curves is obtained from their statistics.
The second obtaining sub-module 642 is configured to query the correspondence obtained by the first obtaining sub-module 641 according to the input image or the grayscale image of the input image, to obtain a target histogram curve.
In this embodiment, the input image or the grayscale image of the input image may be used to query the correspondence between images and histogram curves, obtaining the target histogram curve, i.e., the histogram curve corresponding to the input image.
The adjusting sub-module 643 is configured to obtain a histogram distribution of the input image, and adjust the histogram distribution of the input image according to the target histogram curve obtained by the second obtaining sub-module 642 to generate a second image.
In this embodiment, after the histogram distribution of the input image is obtained, it may be adjusted according to the target histogram curve, thereby generating the second image. Various histogram matching methods may be employed when adjusting the histogram distribution of the input image according to the target histogram curve, such as the single mapping law (SML) and group mapping law (GML) matching methods.
The apparatus shown in fig. 9 is used to implement the method flow shown in fig. 3; the related content is the same as described above and is not repeated here.
In this image processing apparatus embodiment, the image is stretched, laying the groundwork for subsequently obtaining a better non-photorealistic effect.
Fig. 10 is a block diagram illustrating another image processing apparatus according to an exemplary embodiment, and as shown in fig. 10, on the basis of the above-described embodiment shown in fig. 6, the fusion processing module 65 may include: a texture blend sub-module 651 and an edge contour blend sub-module 652.
The texture fusion sub-module 651 is configured to perform texture fusion on the texture image and the second image, resulting in a third image.
In this embodiment, the parameter β* may be obtained by minimizing the following formula:

$$\beta^* = \arg\min_{\beta} \big\| \beta(x)\ln P(x) - \ln T(x) \big\|_2^2 + \lambda \big\| \nabla \beta(x) \big\|_2^2$$

where λ takes the value 0.2 and P(x) is the input texture image; the formula is obtained by taking the logarithm of the relation $P(x)^{\beta(x)} \approx T(x)$, where T(x) is the second image. The third image T is then:

$$T = P(x)^{\beta^*(x)}$$
the edge contour fusion submodule 652 is configured to perform edge contour fusion processing according to the third image obtained by the first image and texture fusion submodule 651, so as to obtain a target image.
In this embodiment, the target image S can be obtained by the following formula:

$$S = S_{lic} \cdot T$$

where T is the third image and $S_{lic}$ is the first image, the product being taken pixel by pixel.
The apparatus shown in fig. 10 is used to implement the method flow shown in fig. 4; the related content is the same as described above and is not repeated here.
In this image processing apparatus embodiment, texture fusion and edge contour fusion are performed on the image, so that the generated target image has a good non-photorealistic effect.
Fig. 11 is a block diagram illustrating another image processing apparatus according to an exemplary embodiment. As shown in fig. 11, on the basis of the embodiment shown in fig. 6, the apparatus may further include: a conversion representation module 66 and a conversion module 67.
The conversion representation module 66 is configured to convert the red, green, blue, RGB color space to a luminance color LAB color space and to represent the target image as a luminance of the LAB color space.
The conversion module 67 is configured to convert the LAB color space to the RGB color space to obtain a color image of the target image.
The apparatus shown in fig. 11 is used to implement the method flow shown in fig. 5; the related content is the same as described above and is not repeated here.
The image processing device embodiment obtains the color image of the target image by performing the color conversion processing on the target image, so that the image color desired by the user can be obtained.
With regard to the apparatus in the above embodiment, the specific manner in which each module and sub-module performs operations has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 12 is a block diagram illustrating an apparatus for image processing according to an exemplary embodiment. For example, the apparatus 1200 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, an aircraft, and so forth.
Referring to fig. 12, the apparatus 1200 may include one or more of the following components: processing component 1202, memory 1204, power component 1206, multimedia component 1208, audio component 1210, input/output (I/O) interface 1212, sensor component 1214, and communications component 1216.
The processing component 1202 generally controls overall operation of the apparatus 1200, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 1202 may include one or more processors 1220 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1202 can include one or more modules that facilitate interaction between the processing component 1202 and other components. For example, the processing component 1202 can include a multimedia module to facilitate interaction between the multimedia component 1208 and the processing component 1202.
The memory 1204 is configured to store various types of data to support operation at the device 1200. Examples of such data include instructions for any application or method operating on the device 1200, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1204 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A power component 1206 provides power to the various components of the device 1200. Power components 1206 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for apparatus 1200.
The multimedia components 1208 include a screen that provides an output interface between the device 1200 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1208 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 1200 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
Audio component 1210 is configured to output and/or input audio signals. For example, audio component 1210 includes a Microphone (MIC) configured to receive external audio signals when apparatus 1200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1204 or transmitted via the communication component 1216. In some embodiments, audio assembly 1210 further includes a speaker for outputting audio signals.
The I/O interface 1212 provides an interface between the processing component 1202 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1214 includes one or more sensors for providing various aspects of state assessment for the apparatus 1200. For example, the sensor assembly 1214 may detect an open/closed state of the device 1200 and the relative positioning of components, such as the display and keypad of the apparatus 1200. The sensor assembly 1214 may also detect a change in the position of the apparatus 1200 or one of its components, the presence or absence of user contact with the apparatus 1200, the orientation or acceleration/deceleration of the apparatus 1200, and a change in the temperature of the apparatus 1200. The sensor assembly 1214 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 1214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communications component 1216 is configured to facilitate communications between the apparatus 1200 and other devices in a wired or wireless manner. The apparatus 1200 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1216 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1216 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as memory 1204 comprising instructions, executable by processor 1220 of apparatus 1200 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (14)
1. An image processing method, characterized in that the method comprises:
acquiring an edge image of an input image;
performing diffusion processing on the edge image to obtain an edge diffusion image;
performing feature stream filtering processing on the edge diffusion image to obtain a first image;
stretching the input image or the gray level image of the input image to obtain a second image;
and carrying out fusion processing according to the input texture image, the first image and the second image to obtain a target image.
2. The image processing method according to claim 1, wherein the acquiring the edge image of the input image comprises:
if the input image is a color image, converting the input color image into a gray image, and extracting an edge image of the gray image;
and if the input image is a gray image, directly extracting an edge image of the gray image.
3. The image processing method according to claim 1, wherein the performing diffusion processing on the edge image to obtain an edge diffusion image comprises:
performing diffusion processing on the edge image to obtain an edge response image;
and performing diffusion processing on the edge response image to obtain the edge diffusion image.
4. The image processing method according to claim 1, wherein the performing feature stream filtering processing on the edge diffusion image includes:
and calculating the edge tangent direction of the input image, and performing linear convolution on the edge diffusion image according to the edge tangent direction.
5. The image processing method according to claim 1, wherein the stretching the input image or the grayscale image of the input image to obtain the second image comprises:
obtaining a corresponding relation between the image and the histogram curve;
inquiring the corresponding relation according to the input image or the gray level image of the input image to obtain a target histogram curve;
and obtaining the histogram distribution of the input image, and adjusting the histogram distribution of the input image according to the target histogram curve to generate the second image.
6. The image processing method according to claim 1, wherein the obtaining of the target image by performing the fusion processing based on the input texture image, the first image, and the second image comprises:
performing texture fusion according to the texture image and the second image to obtain a third image;
and performing edge contour fusion processing according to the first image and the third image to obtain the target image.
7. The image processing method according to any one of claims 1 to 6, characterized in that the method further comprises:
converting a red, green, blue, RGB, color space to a luminance color, LAB, color space, and representing the target image as a luminance of the LAB color space;
converting the LAB color space to the RGB color space to obtain a color image of the target image.
8. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module configured to acquire an edge image of an input image;
the diffusion processing module is configured to perform diffusion processing on the edge image obtained by the obtaining module to obtain an edge diffusion image;
the filtering processing module is configured to perform feature stream filtering processing on the edge diffusion image obtained by the diffusion processing module to obtain a first image;
the stretching processing module is configured to stretch the input image or the gray level image of the input image to obtain a second image;
and the fusion processing module is configured to perform fusion processing according to the input texture image, the first image obtained by the filtering processing module and the second image obtained by the stretching processing module to obtain a target image.
9. The image processing apparatus according to claim 8, wherein the acquisition module includes:
a conversion extraction sub-module configured to convert the input color image into a grayscale image and extract an edge image of the grayscale image if the input image is the color image;
an extraction sub-module configured to directly extract an edge image of the grayscale image if the input image is a grayscale image.
10. The image processing apparatus according to claim 8, wherein the diffusion processing module includes:
the first diffusion processing submodule is configured to perform diffusion processing on the edge image to obtain an edge response image;
and the second diffusion processing submodule is configured to perform diffusion processing on the edge response image obtained by the first diffusion processing submodule to obtain the edge diffusion image.
11. The image processing apparatus according to claim 8, wherein the filter processing module is configured to:
and calculating the edge tangent direction of the input image, and performing linear convolution on the edge diffusion image according to the edge tangent direction.
12. The image processing apparatus according to claim 8, wherein the stretch processing module includes:
a first obtaining sub-module configured to obtain a correspondence of the image and the histogram curve;
a second obtaining sub-module configured to query the corresponding relation obtained by the first obtaining sub-module according to the input image or a gray level image of the input image to obtain a target histogram curve;
an adjusting sub-module configured to obtain a histogram distribution of the input image, and adjust the histogram distribution of the input image according to the target histogram curve obtained by the second obtaining sub-module to generate the second image.
13. The image processing apparatus according to claim 8, wherein the fusion processing module includes:
the texture fusion submodule is configured to perform texture fusion according to the texture image and the second image to obtain a third image;
and the edge contour fusion submodule is configured to perform edge contour fusion processing according to the first image and the third image obtained by the texture fusion submodule to obtain the target image.
14. The image processing apparatus according to any one of claims 8 to 13, characterized in that the apparatus further comprises:
a conversion representation module configured to convert a red, green, blue, RGB, color space to a luminance color, LAB, color space and to represent the target image as a luminance of the LAB color space;
a conversion module configured to convert the LAB color space to the RGB color space to obtain a color image of the target image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510982411.XA CN105574834B (en) | 2015-12-23 | 2015-12-23 | Image processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510982411.XA CN105574834B (en) | 2015-12-23 | 2015-12-23 | Image processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105574834A CN105574834A (en) | 2016-05-11 |
CN105574834B true CN105574834B (en) | 2018-09-04 |
Family
ID=55884929
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510982411.XA Active CN105574834B (en) | 2015-12-23 | 2015-12-23 | Image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105574834B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108205804B (en) * | 2016-12-16 | 2022-05-31 | 斑马智行网络(香港)有限公司 | Image processing method and device and electronic equipment |
CN107527359B (en) * | 2017-08-07 | 2020-04-10 | 东软医疗系统股份有限公司 | PET image reconstruction method and PET imaging equipment |
CN109544490B (en) * | 2018-10-17 | 2021-07-13 | 北京达佳互联信息技术有限公司 | Image enhancement method, device and computer readable storage medium |
CN109741248B (en) * | 2018-12-29 | 2023-06-13 | 深圳美图创新科技有限公司 | Watercolour style drawing method and device |
CN113781296B (en) * | 2021-09-22 | 2024-05-28 | 亿图软件(湖南)有限公司 | Image watercolor processing method, device, computer equipment and storage medium |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7663622B2 (en) * | 2005-12-08 | 2010-02-16 | Electronics And Telecommunications Research Institute | Unified framework based on extensible styles for 3D non-photorealistic rendering and method of configuring the same |
EP2034436A1 (en) * | 2007-09-06 | 2009-03-11 | Thomson Licensing | Method for non-photorealistic rendering |
WO2010134675A1 (en) * | 2009-05-18 | 2010-11-25 | 연세대학교 산학협력단 | Apparatus and method for generating non-photorealistic image using non-photorealistic rendering, and recording medium wherein program for implementing the method is recorded |
CN103729866A (en) * | 2013-12-13 | 2014-04-16 | 深圳市云宙多媒体技术有限公司 | Water color style non-photorealistic rendering method and device |
CN103778655A (en) * | 2014-01-28 | 2014-05-07 | 西安理工大学 | Color natural image calculation artifying method based on self-adaption ink painting spreading |
CN103914862A (en) * | 2014-03-10 | 2014-07-09 | 上海大学 | Pencil sketch simulating method based on edge tangent stream |
CN104346789A (en) * | 2014-08-19 | 2015-02-11 | 浙江工业大学 | Fast artistic style study method supporting diverse images |
Non-Patent Citations (3)
Title |
---|
Abstract Line Drawing Rendering Based on Feature Flow; Wang Shandong et al.; Chinese Journal of Computers (计算机学报); Mar. 2014; vol. 37, no. 3; pp. 611-620 *
Image Enhancement Algorithm Based on Histogram Stretching and Its Implementation; Zhu Zhongqiu; Information Technology (信息技术); Dec. 2009; no. 5; pp. 54-57 *
An Automatic Pencil Drawing Generation System Combining Contour and Texture Features; Xu Meiting et al.; Electronic Measurement Technology (电子测量技术); Jun. 2015; vol. 38, no. 6; pp. 48-53 *
Also Published As
Publication number | Publication date |
---|---|
CN105574834A (en) | 2016-05-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |