CN107564085B - Image warping processing method and device, computing equipment and computer storage medium
- Publication number: CN107564085B (application CN201711002711.2A)
- Authority: CN (China)
- Prior art keywords: image, distortion, data, offset, effect image
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses an image warping processing method and apparatus, a computing device, and a computer storage medium. The image warping processing method comprises the following steps: acquiring an image to be processed and first noise data; for each pixel point in the image to be processed, determining first warped texture data according to the first noise data and processing the color component values of the pixel point using the first warped texture data; obtaining image distortion data corresponding to the image to be processed; obtaining a distortion effect image according to the image distortion data; and saving the distortion effect image according to a shooting instruction triggered by a user. In the technical scheme provided by the invention, noise data are used to process the color component values of the pixel points in the image, so that an image with a distortion effect is conveniently obtained, the image distortion processing mode is optimized, and the image distortion effect is improved.
Description
Technical Field
The invention relates to the field of image processing, and in particular to an image warping processing method and apparatus, a computing device, and a computer storage medium.
Background
With the development of science and technology, image acquisition devices improve day by day: captured images are clearer, and their resolution and display quality are also much better. However, existing images may still not meet a user's requirements; the user may want to apply personalized processing to an image, for example, to make its content show the distortion effect of being viewed through steam. In the prior art, images are mostly processed with functions such as sine or cosine to obtain a distortion effect; however, images processed in this way have a poor distortion effect that looks stiff and unnatural.
Disclosure of Invention
In view of the above, the present invention has been made to provide an image warping processing method, apparatus, computing device and computer storage medium that overcome or at least partially solve the above-mentioned problems.
According to an aspect of the present invention, there is provided an image warping processing method, including:
acquiring an image to be processed and first noise data;
determining, for each pixel point in the image to be processed, first warped texture data according to the first noise data; processing color component values of the pixel points by using the first warped texture data;
obtaining image distortion data corresponding to the image to be processed;
and obtaining a distortion effect image according to the image distortion data.
Further, the first noise data includes a plurality of first color data;
determining the first warped texture data from the first noise data further comprises:
extracting first color data from the first noise data;
determining first warped texture data according to the extracted first color data.
Further, extracting the first color data from the first noise data further includes: extracting the first color data from the first noise data according to a time parameter.
Further, processing color component values of the pixel points using the first warped texture data further comprises:
determining a first distortion offset corresponding to the pixel point by using the first warped texture data;
determining a pixel point corresponding to the first distortion offset according to the first distortion offset and the pixel point;
and assigning the color component values of the pixel points to the pixel points corresponding to the first distortion offset.
Further, determining a first distortion offset corresponding to the pixel point using the first warped texture data further includes: determining the first distortion offset corresponding to the pixel point by using the first warped texture data and a preset distortion coefficient.
Further, obtaining a distortion effect image according to the image distortion data further includes: determining a basic effect image according to the image distortion data, and determining the basic effect image as the distortion effect image.
Further, after acquiring the image to be processed and the first noise data, the method further includes:
acquiring second noise data;
processing the second noise data by using the first noise data to generate a surface layer smoke effect map;
obtaining the distortion effect image according to the image distortion data specifically comprises: determining a basic effect image according to the image distortion data; and adding the surface layer smoke effect map to the basic effect image to obtain the distortion effect image.
Further, the second noise data includes a plurality of second color data;
processing the second noise data using the first noise data to generate the surface layer smoke effect map further comprises:
determining second warped texture data from the first noise data for each of the second color data in the second noise data; determining a second distortion offset corresponding to the second color data using the second warped texture data; determining an offset object corresponding to the second distortion offset according to the second distortion offset and the second color data; assigning the second color data to an offset object corresponding to the second distortion offset;
obtaining noise distortion data corresponding to the second noise data;
and generating a surface layer smoke effect map according to the noise distortion data.
Further, generating the surface layer smoke effect map according to the noise distortion data further comprises: performing semi-transparent processing according to a preset function and/or a preset additive color value and the noise distortion data to generate the surface layer smoke effect map.
Further, determining an offset object corresponding to the second warping offset amount according to the second warping offset amount and the second color data further includes:
obtaining an offset object to be determined according to the second distortion offset and the second color data;
judging whether the offset object to be determined exceeds a preset object range or not; if so, calculating to obtain an offset object corresponding to the second distortion offset according to a preset algorithm and the offset object to be determined; if not, determining the offset object to be determined as the offset object corresponding to the second distortion offset.
Further, the first noise data is discrete color noise data.
Further, the second noise data is continuous black-and-white noise data.
Further, the method further comprises:
carrying out scene segmentation processing on the image to be processed to obtain a scene segmentation result corresponding to the image to be processed; wherein, the image to be processed contains a specific object;
determining the contour information of a specific object according to a scene segmentation result corresponding to the image to be processed;
and obtaining a local distortion effect image according to the contour information of the specific object, the image to be processed and the distortion effect image.
Further, obtaining the local distortion effect image according to the contour information of the specific object, the image to be processed, and the distortion effect image further includes:
extracting a local image from the distortion effect image according to the contour information of the specific object;
and carrying out fusion processing on the image to be processed and the local image to obtain a local distortion effect image.
Further, acquiring the image to be processed further comprises: acquiring, in real time, the image to be processed captured by an image acquisition device.
Further, after obtaining the distortion effect image, the method further comprises: displaying the distortion effect image.
Further, displaying the distortion effect image further comprises: displaying the distortion effect image in real time.
Further, after obtaining the distortion effect image, the method further comprises: saving the distortion effect image according to a shooting instruction triggered by a user.
Further, after obtaining the distortion effect image, the method further comprises: saving, according to a recording instruction triggered by a user, a video in which the distortion effect images serve as frame images.
Further, after obtaining the local distortion effect image, the method further comprises: displaying the local distortion effect image.
Further, displaying the local distortion effect image further comprises: displaying the local distortion effect image in real time.
Further, after obtaining the local distortion effect image, the method further comprises: saving the local distortion effect image according to a shooting instruction triggered by a user.
Further, after obtaining the local distortion effect image, the method further comprises: saving, according to a recording instruction triggered by a user, a video in which the local distortion effect images serve as frame images.
According to another aspect of the present invention, there is provided an image warping processing apparatus including:
an acquisition module adapted to acquire an image to be processed and first noise data;
a first processing module adapted to determine, for each pixel point in the image to be processed, first warped texture data according to the first noise data, and to process color component values of the pixel points by using the first warped texture data;
a first generation module adapted to obtain image distortion data corresponding to the image to be processed;
and a second generation module adapted to obtain a distortion effect image according to the image distortion data.
Further, the first noise data includes a plurality of first color data;
the first processing module is further adapted to:
extracting first color data from the first noise data;
determining first warped texture data according to the extracted first color data.
Further, the first processing module is further adapted to: extracting first color data from the first noise data according to a time parameter.
Further, the first processing module is further adapted to:
determining a first distortion offset corresponding to the pixel point by using the first warped texture data;
determining a pixel point corresponding to the first distortion offset according to the first distortion offset and the pixel point;
and assigning the color component values of the pixel points to the pixel points corresponding to the first distortion offset.
Further, the first processing module is further adapted to: determining a first distortion offset corresponding to the pixel point by using the first warped texture data and a preset distortion coefficient.
Further, the second generation module is further adapted to: determining a basic effect image according to the image distortion data, and determining the basic effect image as the distortion effect image.
Further, the acquisition module is further adapted to: acquiring second noise data;
the device also includes: the second processing module is suitable for processing the second noise data by using the first noise data to generate a surface layer smoke effect map;
the second generation module is further adapted to: determining a basic effect image according to the image distortion data; and adding a surface layer smoke effect map to the basic effect image to obtain a distortion effect image.
Further, the second noise data includes a plurality of second color data;
the second processing module is further adapted to:
determining second warped texture data from the first noise data for each of the second color data in the second noise data; determining a second distortion offset corresponding to the second color data using the second warped texture data; determining an offset object corresponding to the second distortion offset according to the second distortion offset and the second color data; assigning the second color data to an offset object corresponding to the second distortion offset;
obtaining noise distortion data corresponding to the second noise data;
and generating a surface layer smoke effect map according to the noise distortion data.
Further, the second processing module is further adapted to: performing semi-transparent processing according to a preset function and/or a preset additive color value and the noise distortion data to generate the surface layer smoke effect map.
Further, the second processing module is further adapted to:
obtaining an offset object to be determined according to the second distortion offset and the second color data;
judging whether the offset object to be determined exceeds a preset object range or not; if so, calculating to obtain an offset object corresponding to the second distortion offset according to a preset algorithm and the offset object to be determined; if not, determining the offset object to be determined as the offset object corresponding to the second distortion offset.
Further, the first noise data is discrete color noise data.
Further, the second noise data is continuous black-and-white noise data.
Further, the apparatus further comprises:
the segmentation module is suitable for carrying out scene segmentation processing on the image to be processed to obtain a scene segmentation result corresponding to the image to be processed; wherein, the image to be processed contains a specific object;
the determining module is suitable for determining the outline information of the specific object according to a scene segmentation result corresponding to the image to be processed;
and the third generation module is suitable for obtaining a local distortion effect image according to the contour information of the specific object, the image to be processed and the distortion effect image.
Further, the third generation module is further adapted to: extracting a local image from the distortion effect image according to the contour information of the specific object; and carrying out fusion processing on the image to be processed and the local image to obtain a local distortion effect image.
Further, the acquisition module is further adapted to: acquiring, in real time, the image to be processed captured by an image acquisition device.
Further, the apparatus further comprises: a display module adapted to display the distortion effect image.
Further, the display module is further adapted to: displaying the distortion effect image in real time.
Further, the apparatus further comprises: a first saving module adapted to save the distortion effect image according to a shooting instruction triggered by a user.
Further, the apparatus further comprises: a second saving module adapted to save a video in which the distortion effect images serve as frame images, according to a recording instruction triggered by a user.
Further, the apparatus further comprises: a display module adapted to display the local distortion effect image.
Further, the display module is further adapted to: displaying the local distortion effect image in real time.
Further, the apparatus further comprises: a first saving module adapted to save the local distortion effect image according to a shooting instruction triggered by a user.
Further, the apparatus further comprises: a second saving module adapted to save a video in which the local distortion effect images serve as frame images, according to a recording instruction triggered by a user.
According to yet another aspect of the present invention, there is provided a computing device comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the image distortion processing method.
According to still another aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the image warping processing method.
In the technical scheme provided by the invention, an image to be processed and first noise data are acquired; for each pixel point in the image to be processed, first warped texture data is determined according to the first noise data, and the color component values of the pixel point are processed using the first warped texture data to obtain image distortion data corresponding to the image to be processed; a distortion effect image is then obtained according to the image distortion data. By processing the color component values of the pixel points in the image with noise data, an image with a distortion effect is obtained conveniently, the image distortion processing mode is optimized, and the image distortion effect is improved.
The foregoing is only an overview of the technical solutions of the present invention. In order that the technical means of the present invention may be understood more clearly, and that the above and other objects, features, and advantages of the present invention may become more readily apparent, embodiments of the present invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows a flow diagram of an image warping processing method according to an embodiment of the present invention;
FIG. 2 shows a flow diagram of an image warping processing method according to another embodiment of the present invention;
FIG. 3 shows a block diagram of an image warping processing apparatus according to an embodiment of the present invention;
FIG. 4 shows a block diagram of an image warping processing apparatus according to another embodiment of the present invention;
FIG. 5 shows a schematic structural diagram of a computing device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
FIG. 1 shows a flow diagram of an image warping processing method according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
step S100, an image to be processed and first noise data are acquired.
Specifically, the image to be processed may be an image taken by the user, an image from a website, or an image shared by another user, which is not limited herein. When the user wants to process the image to be processed into an image having a distortion effect, for example, to make its content show the distortion effect of being viewed through steam, the image to be processed and the first noise data may be acquired in step S100. The first noise data includes a plurality of first color data. In particular, the first noise data may be discrete color noise data; using discrete first noise data helps improve the image warping effect, so that the processed image has a natural, better distortion effect.
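As an illustration only (the patent does not fix how the noise is produced), the following minimal Python/NumPy sketch generates discrete color noise data of the kind described above; the function name and the image size are assumptions:

```python
import numpy as np

def make_discrete_color_noise(height, width, seed=0):
    """Generate a discrete color noise map: each pixel point holds an
    independent random RGB triple in [0, 1], so neighboring values are
    uncorrelated ("discrete" noise)."""
    rng = np.random.default_rng(seed)
    return rng.random((height, width, 3), dtype=np.float32)

# First noise data sized to match, e.g., a 480x640 image to be processed.
first_noise = make_discrete_color_noise(480, 640)
```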
Step S101, aiming at each pixel point in the image to be processed, first distorted texture data is determined according to first noise data.
In step S101, for each pixel point in the image to be processed, the first warped texture data corresponding to that pixel point is determined according to the first noise data. Specifically, the pixel points in the image to be processed correspond to first color data in the first noise data, so for each pixel point the first warped texture data is determined according to the first color data corresponding to that pixel point. Different first warped texture data are thus determined for different pixel points; compared with using the same warped texture data for all pixel points, this gives the processed image a natural, better distortion effect.
In step S102, the color component values of the pixel points are processed by using the first warped texture data.
For each pixel point in the image to be processed, the color component values of the pixel point are processed, for example by assignment, using the first warped texture data corresponding to that pixel point. Taking a color image in the RGB color mode as an example, the color component values of a pixel point include the color component values of the red, green, and blue color channels. In a specific application, a person skilled in the art may select suitable color channels from the three and process only their color component values, or process the color component values of all the color channels, which is not limited herein. For example, only the color component values of the red and green color channels of a pixel point may be processed.
And step S103, obtaining image distortion data corresponding to the image to be processed.
The data obtained by processing the color component values of each pixel point in the image to be processed is the image distortion data corresponding to the image to be processed.
And step S104, obtaining a distortion effect image according to the image distortion data.
After the image distortion data is obtained in step S103, a distortion effect image is obtained from it in step S104. For example, if a user wants all the content of the image to be processed to show the distortion effect of being viewed through steam, a basic effect image, i.e., an image in which all the content of the image to be processed has the distortion effect, may be determined based on the image distortion data, and the basic effect image may then be determined as the distortion effect image.
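To make the flow of steps S100 to S104 concrete, here is a minimal sketch that strings the steps together; `warp_pixels` is sketched with the second embodiment below, and all names are illustrative rather than the patented implementation:

```python
def warping_pipeline(image, t):
    """Steps S100-S104 end to end (image: float HxWx3 array, t: time)."""
    h, w = image.shape[:2]
    noise = make_discrete_color_noise(h, w)   # S100: first noise data
    distorted = warp_pixels(image, noise, t)  # S101-S103: per-pixel processing
    return distorted                          # S104: basic effect image as distortion effect image
```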
According to the image warping processing method provided by this embodiment, an image to be processed and first noise data are acquired; for each pixel point in the image to be processed, first warped texture data is determined according to the first noise data, and the color component values of the pixel point are processed using the first warped texture data to obtain image distortion data corresponding to the image to be processed; a distortion effect image is then obtained according to the image distortion data. By processing the color component values of the pixel points in the image with noise data, an image with a distortion effect is obtained conveniently, the image distortion processing mode is optimized, and the image distortion effect is improved.
FIG. 2 shows a flow diagram of an image warping processing method according to another embodiment of the present invention. As shown in FIG. 2, the method includes the following steps:
step S200, acquiring an image to be processed, first noise data, and second noise data.
The first noise data includes a plurality of first color data, and the second noise data includes a plurality of second color data. Specifically, the first noise data is discrete color noise data, and the second noise data is continuous black-and-white noise data; the second noise data can be processed using the first noise data to generate a surface layer smoke effect map. Discrete first noise data improves the image distortion effect, so that the processed image has a natural, better distortion effect; continuous second noise data yields a surface layer smoke effect map with a continuous smoke effect, so that the processed image has a natural, swirling smoke effect.
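For illustration, continuous black-and-white noise can be approximated by blurring uncorrelated noise; the sketch below assumes a Gaussian blur as a stand-in for whatever continuous (e.g., Perlin-style) noise an implementation actually uses:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_continuous_gray_noise(height, width, sigma=8.0, seed=1):
    """Generate continuous black-and-white noise: blurring uncorrelated
    noise yields smoothly varying gray values in [0, 1]."""
    rng = np.random.default_rng(seed)
    raw = rng.random((height, width)).astype(np.float32)
    smooth = gaussian_filter(raw, sigma=sigma)
    # Renormalize to [0, 1], since blurring compresses the value range.
    return (smooth - smooth.min()) / (np.ptp(smooth) + 1e-8)
```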
Optionally, in step S200, the image to be processed captured by an image acquisition device may be acquired in real time. The image acquisition device may be, for example, a mobile terminal, in which case the image to be processed captured by the camera of the mobile terminal is acquired in real time. The image to be processed may be any image, such as an image containing a landscape or an image containing a human body, which is not limited herein.
Step S201, extracting first color data from first noise data aiming at each pixel point in an image to be processed; determining first warped texture data according to the extracted first color data.
For each pixel point in the image to be processed, the first color data corresponding to that pixel point is extracted from the first noise data. Different first warped texture data are thus determined for different pixel points in the image to be processed; compared with using the same warped texture data for all pixel points, this helps obtain a natural, better distortion effect.
To further improve the image warping effect, the first color data may be extracted from the first noise data according to a time parameter. Specifically, for the same pixel point in the image to be processed, different first color data are extracted from the first noise data as the time parameter changes and are used as the first color data corresponding to that pixel point.
In practical applications, the first noise data may be a color noise map, in which the color component value corresponding to each pixel point is one piece of first color data. For convenience of explanation, assume the pixel points in the image to be processed are A1, A2, A3, etc., and the pixel points in the color noise map are B1, B2, B3, etc. For the pixel point A1 in the image to be processed, when the time parameter is time 1, the color component value corresponding to the pixel point B1 is extracted from the color noise map as the first color data corresponding to A1; when the time parameter is time 2, the color component value corresponding to the pixel point B3 is extracted from the color noise map as the first color data corresponding to A1.
Specifically, for each pixel point in the image to be processed, the first warped texture data may be calculated from the extracted first color data corresponding to that pixel point using a preset first calculation function. A person skilled in the art may set the preset first calculation function according to actual needs, which is not limited herein.
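A minimal sketch of the two operations just described, time-dependent extraction and the preset first calculation function; the stride constants and the mapping to a signed 2-D vector are assumptions, since the patent leaves both open:

```python
import numpy as np

def sample_first_color(noise_map, y, x, t):
    """Extract first color data for pixel point (y, x): shifting the
    lookup by the time parameter t makes the same pixel point pick up
    different noise values in different frames, animating the warp."""
    h, w, _ = noise_map.shape
    ny = (y + int(t * 37)) % h    # stride constants are arbitrary assumptions
    nx = (x + int(t * 101)) % w
    return noise_map[ny, nx]

def first_warped_texture(color):
    """Stand-in for the unspecified 'preset first calculation function':
    map RGB noise in [0, 1] to a signed 2-D texture vector."""
    return np.array([color[0] - 0.5, color[1] - 0.5], dtype=np.float32)
```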
Step S202, determining a first warping offset corresponding to the pixel point by using the first warped texture data.
Specifically, the first distortion offset corresponding to the pixel point may be determined by using the first warped texture data and a preset distortion coefficient. A person skilled in the art can adjust the degree of distortion of the image by adjusting the distortion coefficient.
Step S203, determining a pixel point corresponding to the first distortion offset according to the first distortion offset and the pixel point.
After the first distortion offset corresponding to the pixel point is determined, the pixel point corresponding to the first distortion offset can be determined according to the first distortion offset and the pixel point.
Step S204, assigning the color component value of the pixel point to the pixel point corresponding to the first distortion offset.
After the pixel point corresponding to the first distortion offset is determined, the color component values of the pixel point are assigned to it. For example, if, for the pixel point A1 in the image to be processed, the pixel point corresponding to the first distortion offset is A2, then the color component values of A1 are assigned to A2, so that A2 takes on the color component values of A1, thereby achieving the image distortion effect.
Assuming the color component values of the pixel points in the image to be processed include the color component values of the red, green, and blue color channels, in a specific application a person skilled in the art may select suitable color channels from the three and assign only their color component values to the pixel point corresponding to the first distortion offset, or assign the color component values of all the color channels, which is not limited herein. For example, only the color component values of the red and green color channels of the pixel point may be assigned to the pixel point corresponding to the first distortion offset.
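Putting steps S202 to S204 together, reusing the helpers sketched above; the wrap-around at the borders and the choice to move only the red and green components are illustrative assumptions:

```python
def warp_pixels(image, noise_map, t, warp_coeff=12.0):
    """For every pixel point: scale the warped texture by a preset
    distortion coefficient (S202), locate the pixel point the offset
    maps to (S203), and assign selected color components to it (S204)."""
    h, w, _ = image.shape
    out = image.copy()
    for y in range(h):
        for x in range(w):
            tex = first_warped_texture(sample_first_color(noise_map, y, x, t))
            dy, dx = (warp_coeff * tex).astype(int)  # first distortion offset
            ty, tx = (y + dy) % h, (x + dx) % w      # target pixel point (wrapped)
            out[ty, tx, 0] = image[y, x, 0]          # red component
            out[ty, tx, 1] = image[y, x, 1]          # green component
    return out  # the assigned result is the "image distortion data"
```

Raising or lowering `warp_coeff` strengthens or weakens the warp, matching the role of the preset distortion coefficient described in step S202.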
In step S205, image distortion data corresponding to the image to be processed is obtained.
The data obtained by assigning the color component values of each pixel point in the image to be processed is the image distortion data corresponding to the image to be processed.
In step S206, a base effect image is determined according to the image warping data.
The basic effect image is an image in which all the content of the image to be processed has the distortion effect.
Step S207 determines second warped texture data from the first noise data for each of the second color data in the second noise data.
The second color data in the second noise data correspond to the first color data in the first noise data. For each second color data in the second noise data, the first color data corresponding to that second color data is extracted from the first noise data, and second warped texture data is then calculated from the extracted first color data using a preset second calculation function. Different second warped texture data are thus determined for different second color data in the second noise data; compared with using the same warped texture data for all second color data, this helps obtain a natural, better smoke effect.
The preset second calculation function may be set by a person skilled in the art according to actual needs, and the preset second calculation function may be the same as the preset first calculation function or different from the preset first calculation function, which is not limited herein.
In step S208, a second warping offset corresponding to the second color data is determined using the second warped texture data.
Specifically, a second distortion offset corresponding to the second color data may be determined using the second distorted texture data and a preset smoke distortion coefficient. The person skilled in the art can adjust the smoke distortion degree by adjusting the smoke distortion coefficient.
In step S209, an offset object corresponding to the second distortion offset amount is determined according to the second distortion offset amount and the second color data.
Specifically, an offset object to be determined is obtained according to the second distortion offset and the second color data, and it is then judged whether the offset object to be determined exceeds a preset object range. If so, the offset object corresponding to the second distortion offset is calculated according to a preset algorithm and the offset object to be determined; if not, the offset object to be determined is determined as the offset object corresponding to the second distortion offset.
Since the offset object to be determined, obtained from the second distortion offset and the second color data, may exceed the preset object range, it is necessary to judge whether it does. If the offset object to be determined exceeds the preset object range, the offset object corresponding to the second distortion offset is calculated according to the preset algorithm and the offset object to be determined, so that the offset object is adjusted, which facilitates the subsequent generation of a surface layer smoke effect map with a continuous smoke effect. If the offset object to be determined does not exceed the preset object range, it may be directly determined as the offset object corresponding to the second distortion offset.
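The patent leaves the "preset algorithm" open; a modulo wrap is one simple assumed choice that folds an out-of-range offset object back into the preset object range and keeps the smoke continuous:

```python
def resolve_offset_object(ny, nx, height, width):
    """Judge whether the offset object to be determined exceeds the
    preset object range; if so, fold it back with a modulo wrap,
    otherwise keep it unchanged."""
    if 0 <= ny < height and 0 <= nx < width:
        return ny, nx                  # within range: use as is
    return ny % height, nx % width     # out of range: wrap back in
```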
In step S210, the second color data is assigned to the offset object corresponding to the second distortion offset amount.
After determining the offset object corresponding to the second warping offset amount, the second color data is assigned to the offset object corresponding to the second warping offset amount.
In practical applications, the first noise data may be a color noise map and the second noise data a black-and-white noise map, so that the color component value corresponding to each pixel point in the color noise map is one piece of first color data, and the color component value corresponding to each pixel point in the black-and-white noise map is one piece of second color data. For convenience of explanation, assume the pixel points in the color noise map are B1, B2, B3, etc., and the pixel points in the black-and-white noise map are C1, C2, C3, etc. Suppose a certain second color data is the color component value corresponding to the pixel point C1 in the black-and-white noise map, and that for this second color data the offset object corresponding to the second distortion offset is the pixel point C3; then the color component value of C1 is assigned to C3, so that C3 takes on the color component value of C1, thereby achieving the swirling smoke effect.
In step S211, noise distortion data corresponding to the second noise data is obtained.
The data obtained by assigning each second color data in the second noise data is the noise distortion data corresponding to the second noise data.
Step S212, generating a surface layer smoke effect map according to the noise distortion data.
After the noise distortion data is obtained, the surface layer smoke effect map is generated from it. Specifically, the surface layer smoke effect map can be generated by performing semi-transparent processing according to a preset function and/or a preset additive color value and the noise distortion data. A person skilled in the art may set the preset function and the preset additive color value according to actual needs, which is not limited herein. For example, the preset function may be a sine function, a cosine function, or the like, and the preset additive color value may be the color value corresponding to golden yellow, the color value corresponding to red, or the like. Since the second noise data is black-and-white noise data, a surface layer smoke effect map with a colored smoke effect can be generated according to the preset additive color value and the noise distortion data. For example, when the preset additive color value is the color value corresponding to golden yellow, a surface layer smoke effect map having a golden-yellow smoke effect is generated.
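As an illustration, using the golden-yellow example above; the specific tint values and the linear alpha rule are assumptions standing in for the preset additive color value and the preset function:

```python
import numpy as np

def surface_smoke_map(noise_warp, tint=(1.0, 0.84, 0.0), alpha_scale=0.5):
    """Tint the warped black-and-white noise (values in [0, 1]) and
    derive a semi-transparent alpha channel, yielding an RGBA surface
    layer smoke effect map."""
    h, w = noise_warp.shape
    rgba = np.empty((h, w, 4), dtype=np.float32)
    rgba[..., 0] = noise_warp * tint[0]       # red
    rgba[..., 1] = noise_warp * tint[1]       # green
    rgba[..., 2] = noise_warp * tint[2]       # blue
    rgba[..., 3] = noise_warp * alpha_scale   # semi-transparency
    return rgba
```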
Step S213, adding a surface layer smoke effect map to the basic effect image to obtain a distortion effect image.
After the basic effect image and the surface layer smoke effect map are obtained, the surface layer smoke effect map is added to the basic effect image to obtain the distortion effect image. The resulting image not only has an improved distortion effect but also a smoke effect, which greatly enriches the image effect.
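The patent does not fix how the layer is "added"; straight alpha blending, assuming float RGB images in [0, 1], is one common choice:

```python
def add_smoke_layer(base_rgb, smoke_rgba):
    """Alpha-blend the surface layer smoke effect map over the basic
    effect image to obtain the distortion effect image."""
    alpha = smoke_rgba[..., 3:4]
    return base_rgb * (1.0 - alpha) + smoke_rgba[..., :3] * alpha
```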
Step S214, displaying the distortion effect image in real time.
The obtained distortion effect image is displayed in real time, so that the user directly sees the distortion effect image obtained after the image to be processed is processed. Immediately after the distortion effect image is obtained, it replaces the captured image to be processed on the display. The replacement is generally completed within 1/24 of a second; since this interval is so short, human eyes do not perceive the replacement, which is equivalent to displaying the distortion effect image in real time.
Step S215, saving the distortion effect image according to the shooting instruction triggered by the user.
After the distortion effect image is displayed, it can be saved according to a shooting instruction triggered by the user. For example, when the user clicks the shooting button of the camera, a shooting instruction is triggered and the displayed distortion effect image is saved.
And step S216, storing the video formed by the distortion effect image as the frame image according to the recording instruction triggered by the user.
While the distortion effect image is displayed, a video in which distortion effect images serve as frame images can be saved according to a recording instruction triggered by the user. For example, when the user clicks the recording button of the camera, a recording instruction is triggered and each displayed distortion effect image is saved as a frame image of the video, so that multiple distortion effect images are saved as a video composed of frame images.
Step S215 and step S216 are optional steps of this embodiment with no fixed execution order; the corresponding step is selected and executed according to the instruction triggered by the user.
In addition, in some application scenarios, the image to be processed contains a specific object, such as a human body, and the user only wants to warp a specific object region or a non-specific object region in the image to be processed, in which case the method may further include: carrying out scene segmentation processing on the image to be processed to obtain a scene segmentation result corresponding to the image to be processed; determining the contour information of a specific object according to a scene segmentation result corresponding to the image to be processed; and obtaining a local distortion effect image according to the contour information of the specific object, the image to be processed and the distortion effect image.
When performing scene segmentation processing on the image to be processed, a deep learning method can be used. Deep learning is a machine learning method based on representation learning of data. An observation (e.g., an image) can be represented in many ways, such as a vector of intensity values per pixel, or more abstractly as a series of edges, regions of particular shapes, and so on. Tasks such as face recognition or facial expression recognition are easier to learn from examples using certain specific representations. The image to be processed is scene-segmented using a deep learning segmentation method to obtain the scene segmentation result corresponding to the image to be processed. Specifically, a scene segmentation network obtained by deep learning can be used to perform scene segmentation on the image to be processed, and the contour information of the specific object is then determined from the scene segmentation result. If the specific object is a human body, the contour information of the human body can be determined from the scene segmentation result, so that human-body regions in the image to be processed are distinguished from non-human-body regions.
After the contour information of the specific object is determined, a local image can be extracted from the distortion effect image according to that contour information, and the image to be processed and the local image are then fused to obtain the local distortion effect image. Specifically, the contour information of the specific object determines which regions of the distortion effect image are specific-object regions and which are non-specific-object regions (the latter may be called background regions); an image of either the specific-object regions or the background regions is then extracted from the distortion effect image as the local image. For example, when the specific object is a human body and the user wants to warp the human-body region of the image to be processed, an image of the human-body region is extracted from the distortion effect image according to the contour information of the human body and used as the local image; the image to be processed and the local image are then fused into a local distortion effect image in which only the human-body region has the distortion effect and the background region does not. Conversely, when the user wants to warp the background region other than the human-body region, an image of the background region is extracted from the distortion effect image as the local image, and the fused local distortion effect image has the distortion effect only in the background region and not in the human-body region.
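A minimal sketch of the extraction and fusion just described, assuming the scene segmentation step already produced a boolean human-body mask (how that mask is computed is left to the segmentation network):

```python
def fuse_local_warp(original, warped, body_mask, warp_body=True):
    """Cut the local image out of the distortion effect image with the
    segmentation mask and paste it over the image to be processed.
    body_mask: boolean HxW NumPy array from the scene segmentation result;
    warp_body: whether the human-body region (True) or the background
    (False) keeps the distortion effect."""
    region = body_mask if warp_body else ~body_mask
    out = original.copy()
    out[region] = warped[region]   # fusion: local image over the original
    return out
```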
In the case where a local distortion effect image is obtained, it is the local distortion effect image, rather than the distortion effect image, that is displayed in real time in step S214, saved according to a shooting instruction triggered by the user in step S215, and saved as the frame images of a video according to a recording instruction triggered by the user in step S216.
According to the image warping processing method provided by this embodiment, the color component values of the pixel points in the image are processed with noise data, so that the basic effect image is obtained conveniently; another set of noise data is processed with the first noise data to obtain the surface layer smoke effect map, so that an image with both a distortion effect and a smoke effect is obtained. The image warping processing mode is thereby optimized, the image distortion effect is effectively improved, and the image effect is greatly enriched. In addition, images with a local distortion effect can be obtained according to the scene segmentation result corresponding to the image to be processed.
FIG. 3 shows a block diagram of an image warping processing apparatus according to an embodiment of the present invention. As shown in FIG. 3, the apparatus includes: an acquisition module 301, a first processing module 302, a first generation module 303, and a second generation module 304.
The acquisition module 301 is adapted to: acquiring an image to be processed and first noise data.
The first noise data includes a plurality of first color data. In particular, the first noise data may be discrete color noise data; using discrete first noise data helps improve the image warping effect, so that the processed image has a natural, better distortion effect.
The first processing module 302 is adapted to: determining, for each pixel point in the image to be processed, first warped texture data according to the first noise data; and processing color component values of the pixel points by using the first warped texture data.
The first generation module 303 is adapted to: obtaining image distortion data corresponding to the image to be processed.
The second generation module 304 is adapted to: obtaining a distortion effect image according to the image distortion data.
Optionally, the second generation module 304 is further adapted to: determining a basic effect image according to the image distortion data, and determining the basic effect image as the distortion effect image.
According to the image warping processing apparatus provided by this embodiment, the acquisition module acquires an image to be processed and first noise data; the first processing module determines, for each pixel point in the image to be processed, first warped texture data according to the first noise data and processes the color component values of the pixel points using the first warped texture data; the first generation module obtains image distortion data corresponding to the image to be processed; and the second generation module obtains a distortion effect image according to the image distortion data. By processing the color component values of the pixel points in the image with noise data, an image with a distortion effect is obtained conveniently, the image distortion processing mode is optimized, and the image distortion effect is improved.
FIG. 4 shows a block diagram of an image warping processing apparatus according to another embodiment of the present invention. As shown in FIG. 4, the apparatus includes: an acquisition module 401, a first processing module 402, a first generation module 403, a second processing module 404, a second generation module 405, a display module 406, a first saving module 407, and a second saving module 408.
The acquisition module 401 is adapted to: acquiring an image to be processed, first noise data, and second noise data.
The first noise data includes a plurality of first color data, and the second noise data includes a plurality of second color data. Specifically, the first noise data is discrete color noise data, and the second noise data is continuous black-and-white noise data; the second noise data can be processed by using the first noise data to generate a surface layer smoke effect map.
Optionally, the acquisition module 401 is further adapted to: acquiring, in real time, the image to be processed captured by an image acquisition device.
The first processing module 402 is adapted to: extracting, for each pixel point in the image to be processed, first color data from the first noise data; determining first warped texture data according to the extracted first color data; and processing color component values of the pixel points by using the first warped texture data.
The first processing module 402 is further adapted to: extracting first color data from the first noise data according to a time parameter.
The first processing module 402 is further adapted to: determining a first distortion offset corresponding to the pixel point by using the first warped texture data; determining a pixel point corresponding to the first distortion offset according to the first distortion offset and the pixel point; and assigning the color component values of the pixel points to the pixel points corresponding to the first distortion offset.
The first processing module 402 is further adapted to: determining the first distortion offset corresponding to the pixel point by using the first warped texture data and a preset distortion coefficient.
The first generation module 403 is adapted to: obtaining image distortion data corresponding to the image to be processed.
The second processing module 404 is adapted to: processing the second noise data by using the first noise data to generate a surface layer smoke effect map.
In particular, the second processing module 404 is further adapted to: determining second warped texture data from the first noise data for each of the second color data in the second noise data; determining a second distortion offset corresponding to the second color data using the second warped texture data; determining an offset object corresponding to the second distortion offset according to the second distortion offset and the second color data; assigning the second color data to an offset object corresponding to the second distortion offset; obtaining noise distortion data corresponding to the second noise data; and generating a surface layer smoke effect map according to the noise distortion data.
Optionally, the second processing module 404 is further adapted to: performing semi-transparent processing according to a preset function and/or a preset additive color value and the noise distortion data to generate the surface layer smoke effect map.
Optionally, the second processing module 404 is further adapted to: obtaining an offset object to be determined according to the second distortion offset and the second color data; judging whether the offset object to be determined exceeds a preset object range or not; if so, calculating to obtain an offset object corresponding to the second distortion offset according to a preset algorithm and the offset object to be determined; if not, determining the offset object to be determined as the offset object corresponding to the second distortion offset.
The second generation module 405 is adapted to: determining a basic effect image according to the image distortion data; and adding a surface layer smoke effect map to the basic effect image to obtain a distortion effect image.
The display module 406 is adapted to: displaying the distortion effect image.
Optionally, the display module 406 is further adapted to: displaying the distortion effect image in real time. The display module 406 displays the obtained distortion effect image in real time, so that the user directly sees the distortion effect image obtained after the image to be processed is processed. Immediately after the second generation module 405 obtains the distortion effect image, the display module 406 replaces the captured image to be processed with the distortion effect image for display. The replacement is generally completed within 1/24 of a second; since this interval is so short, human eyes do not perceive it, which is equivalent to the display module 406 displaying the distortion effect image in real time.
The first saving module 407 is adapted to: saving the distortion effect image according to a shooting instruction triggered by a user.
After the distortion effect image is displayed, the first saving module 407 can save it according to a shooting instruction triggered by the user. For example, when the user clicks the shooting button of the camera and thereby triggers a shooting instruction, the first saving module 407 saves the displayed distortion effect image.
The second saving module 408 is adapted to: saving a video in which the distortion effect images serve as frame images, according to a recording instruction triggered by a user.
While the distortion effect image is displayed, the second saving module 408 can save a video in which distortion effect images serve as frame images according to a recording instruction triggered by the user. For example, when the user clicks the recording button of the camera and thereby triggers a recording instruction, the second saving module 408 saves each displayed distortion effect image as a frame image of the video, so that multiple distortion effect images are saved as a video composed of frame images.
The first saving module 407 or the second saving module 408 is invoked according to the instruction triggered by the user.
In addition, in some application scenarios, the image to be processed contains a specific object, such as a human body, and the user only wants to warp a specific object region or a non-specific object region in the image to be processed, in which case the apparatus further includes: a segmentation module 409, a determination module 410 and a third generation module 411.
The segmentation module 409 is adapted to: carrying out scene segmentation processing on the image to be processed to obtain a scene segmentation result corresponding to the image to be processed.
The determination module 410 is adapted to: determining the contour information of the specific object according to the scene segmentation result corresponding to the image to be processed.
The third generating module 411 is adapted to: obtaining a local distortion effect image according to the contour information of the specific object, the image to be processed and the distortion effect image.
Wherein the third generating module 411 is further adapted to: extracting a local image from the distortion effect image according to the contour information of the specific object; and carrying out fusion processing on the image to be processed and the local image to obtain a local distortion effect image.
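A minimal sketch of this extraction-and-fusion step, assuming the scene segmentation result is available as a binary mask that is 1 inside the contour of the specific object; the mask representation is an assumption.

```python
import numpy as np

def fuse_local_effect(image, warped, object_mask):
    # The local image is the warped pixels inside the object contour;
    # fusing keeps the original pixels everywhere else.
    m = object_mask.astype(np.float32)[..., None]
    return warped * m + image * (1.0 - m)
```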
In this case, the display module 406 displays not the distortion effect image but the local distortion effect image obtained by the third generation module 411, for example by displaying it in real time. Similarly, the first saving module 407 is adapted to save the local distortion effect image according to a shooting instruction triggered by a user, and the second saving module 408 is adapted to save, according to a recording instruction triggered by the user, a video composed of the local distortion effect images as its frame images.
According to the image distortion processing device provided by this embodiment, the color component values of the pixel points in the image are processed using noise data, so that a basic effect image can be conveniently obtained; the second noise data is further processed using the first noise data to obtain a surface layer smoke effect map, which yields an image with both a distortion effect and a smoke effect. The image distortion processing mode is thereby optimized, the image distortion effect is effectively improved, and the image effects are greatly enriched. In addition, an image with a local distortion effect can be obtained according to the scene segmentation result corresponding to the image to be processed. A compact end-to-end sketch of the whole pipeline follows.
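The sketch below rests on several assumptions: all inputs are float32 HxWx3 arrays in [0, 1], the warp offset is the noise value scaled by a preset distortion coefficient, and a gather formulation is used in place of the per-pixel scatter assignment described above, purely for brevity.

```python
import numpy as np

def warp_pipeline(image, noise1, noise2, coeff=0.05):
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # First noise data -> a warp offset for every pixel point.
    dy = (noise1[..., 0] - 0.5) * coeff * h
    dx = (noise1[..., 1] - 0.5) * coeff * w
    sy = np.clip(np.rint(ys + dy).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs + dx).astype(int), 0, w - 1)
    base = image[sy, sx]                       # basic effect image
    smoke = noise2[sy, sx]                     # second noise warped by the first
    alpha = smoke[..., :1] * 0.5               # semi-transparent surface smoke layer
    return smoke * alpha + base * (1.0 - alpha)  # distortion effect image
```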
The invention also provides a non-volatile computer storage medium storing at least one executable instruction that can execute the image distortion processing method in any of the above method embodiments.
Fig. 5 is a schematic structural diagram of a computing device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computing device.
As shown in fig. 5, the computing device may include: a processor 502, a communication interface 504, a memory 506, and a communication bus 508.
Wherein:
the processor 502, communication interface 504, and memory 506 communicate with one another via a communication bus 508.
A communication interface 504 for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is configured to execute the program 510, and may specifically perform relevant steps in the above-described image warping method embodiment.
In particular, program 510 may include program code that includes computer operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The computing device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
A memory 506 is used for storing a program 510. The memory 506 may comprise high-speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
The program 510 may specifically be configured to cause the processor 502 to execute the image warping processing method in any of the above-described method embodiments. For the specific implementation of each step in the program 510, reference may be made to the corresponding steps and the corresponding descriptions of the units in the foregoing embodiments, which are not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and modules described above may refer to the corresponding process descriptions in the foregoing method embodiments.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in accordance with embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names.
Claims (44)
1. A method of image warping processing, the method comprising:
acquiring an image to be processed and first noise data;
determining, for each pixel point in the image to be processed, first distorted texture data according to the first noise data; processing color component values of the pixel points by using the first warped texture data;
obtaining image distortion data corresponding to the image to be processed;
obtaining a distortion effect image according to the image distortion data;
wherein after the acquiring the image to be processed and the first noise data, the method further comprises:
acquiring second noise data;
processing the second noise data by using the first noise data to generate a surface layer smoke effect map;
the obtaining of the distortion effect image according to the image distortion data specifically comprises: determining a basic effect image according to the image distortion data; adding the surface layer smoke effect map to the basic effect image to obtain a distortion effect image;
wherein the second noise data includes a plurality of second color data;
the processing the second noise data using the first noise data to generate a surface smoke effect map further comprises:
determining, for each second color data in the second noise data, second warped texture data from the first noise data; determining a second warping offset corresponding to the second color data using the second warped texture data; determining an offset object corresponding to the second distortion offset according to the second distortion offset and the second color data; assigning the second color data to an offset object corresponding to the second warping offset;
obtaining noise distortion data corresponding to the second noise data;
and generating a surface layer smoke effect map according to the noise distortion data.
2. The method of claim 1, wherein the first noise data comprises a plurality of first color data;
said determining first warped texture data from said first noise data further comprises:
extracting first color data from the first noise data;
determining first warped texture data according to the extracted first color data.
3. The method of claim 2, wherein said extracting first color data from said first noise data further comprises: extracting first color data from the first noise data according to a time parameter.
4. The method of any of claims 1-3, wherein said processing color component values of said pixel points using said first warped texture data further comprises:
determining a first warped offset corresponding to the pixel point using the first warped texture data;
determining a pixel point corresponding to the first distortion offset according to the first distortion offset and the pixel point;
and assigning the color component values of the pixel points to the pixel points corresponding to the first distortion offset.
5. The method of claim 4, wherein said determining, using the first warped texture data, a first warped offset corresponding to the pixel point further comprises: determining a first distortion offset corresponding to the pixel point by using the first distortion texture data and a preset distortion coefficient.
6. The method of claim 1, wherein said deriving a warp effect image from said image warping data further comprises: determining a basic effect image according to the image warping data, and determining the basic effect image as a warping effect image.
7. The method of claim 1, wherein said generating a surface smoke effect map from said noise distortion data further comprises: performing semi-transparent processing according to a preset function and/or a preset additive color value and the noise distortion data to generate a surface layer smoke effect map.
8. The method of claim 1, wherein the determining, from the second warping offset and the second color data, an offset object corresponding to the second warping offset further comprises:
obtaining an offset object to be determined according to the second distortion offset and the second color data;
judging whether the offset object to be determined exceeds a preset object range or not; if so, calculating to obtain an offset object corresponding to the second distortion offset according to a preset algorithm and the offset object to be determined; if not, determining the offset object to be determined as the offset object corresponding to the second distortion offset.
9. The method of claim 1, wherein said first noise data is discrete color noise data.
10. The method of claim 1, wherein the second noise data is continuous black and white noise data.
11. The method of claim 1, wherein the method further comprises:
performing scene segmentation processing on the image to be processed to obtain a scene segmentation result corresponding to the image to be processed; wherein, the image to be processed contains a specific object;
determining the contour information of the specific object according to a scene segmentation result corresponding to the image to be processed;
and obtaining a local distortion effect image according to the contour information of the specific object, the image to be processed and the distortion effect image.
12. The method of claim 11, wherein the deriving a local warping effect image from the contour information of the specific object, the image to be processed, and the warping effect image further comprises:
extracting a local image from the distortion effect image according to the contour information of the specific object;
and carrying out fusion processing on the image to be processed and the local image to obtain a local distortion effect image.
13. The method of claim 1, wherein the acquiring a to-be-processed image further comprises: acquiring the to-be-processed image captured by the image acquisition equipment in real time.
14. The method of claim 1, wherein after the deriving the warp-effect image, the method further comprises: displaying the distortion effect image.
15. The method of claim 14, wherein the displaying the warp effect image further comprises: displaying the distortion effect image in real time.
16. The method of claim 11, wherein after the deriving the warp effect image, the method further comprises: saving the distortion effect image according to a shooting instruction triggered by a user.
17. The method of claim 11, wherein after the deriving the warp effect image, the method further comprises: saving the video formed by the distortion effect image as a frame image according to a recording instruction triggered by a user.
18. The method of claim 11, wherein after the obtaining the local warping effect image, the method further comprises: displaying the local distortion effect image.
19. The method of claim 18, wherein the displaying the local warping effect image further comprises: displaying the local distortion effect image in real time.
20. The method of claim 11, wherein after said deriving the local warping effect image, the method further comprises: saving the local distortion effect image according to a shooting instruction triggered by a user.
21. The method of claim 11, wherein after said deriving the local warping effect image, the method further comprises: storing the video formed by the local distortion effect image as a frame image according to a recording instruction triggered by a user.
22. An image warping processing apparatus, the apparatus comprising:
an acquisition module adapted to acquire an image to be processed and first noise data;
the first processing module is suitable for determining, for each pixel point in the image to be processed, first distorted texture data according to the first noise data; processing color component values of the pixel points by using the first warped texture data;
the first generation module is suitable for obtaining image distortion data corresponding to the image to be processed;
the second generation module is suitable for obtaining a distortion effect image according to the image distortion data;
wherein the obtaining module is further adapted to: acquiring second noise data;
the device further comprises: the second processing module is suitable for processing the second noise data by using the first noise data to generate a surface layer smoke effect map;
the second generation module is further adapted to: determining a basic effect image according to the image distortion data; adding the surface layer smoke effect map to the basic effect image to obtain a distortion effect image;
wherein the second noise data includes a plurality of second color data;
the second processing module is further adapted to:
determining, for each second color data in the second noise data, second warped texture data from the first noise data; determining a second warping offset corresponding to the second color data using the second warped texture data; determining an offset object corresponding to the second distortion offset according to the second distortion offset and the second color data; assigning the second color data to an offset object corresponding to the second warping offset;
obtaining noise distortion data corresponding to the second noise data;
and generating a surface layer smoke effect map according to the noise distortion data.
23. The apparatus of claim 22, wherein the first noise data comprises a plurality of first color data;
the first processing module is further adapted to:
extracting first color data from the first noise data;
determining first warped texture data according to the extracted first color data.
24. The apparatus of claim 23, wherein the first processing module is further adapted to: extracting first color data from the first noise data according to a time parameter.
25. The apparatus of any one of claims 22-24, wherein the first processing module is further adapted to:
determining a first warped offset corresponding to the pixel point using the first warped texture data;
determining a pixel point corresponding to the first distortion offset according to the first distortion offset and the pixel point;
and assigning the color component values of the pixel points to the pixel points corresponding to the first distortion offset.
26. The apparatus of claim 25, wherein the first processing module is further adapted to: determining a first distortion offset corresponding to the pixel point by using the first distortion texture data and a preset distortion coefficient.
27. The apparatus of claim 22, wherein the second generation module is further adapted to: determining a basic effect image according to the image warping data, and determining the basic effect image as a warping effect image.
28. The apparatus of claim 22, wherein the second processing module is further adapted to: performing semi-transparent processing according to a preset function and/or a preset additive color value and the noise distortion data to generate a surface layer smoke effect map.
29. The apparatus of claim 22, wherein the second processing module is further adapted to:
obtaining an offset object to be determined according to the second distortion offset and the second color data;
judging whether the offset object to be determined exceeds a preset object range or not; if so, calculating to obtain an offset object corresponding to the second distortion offset according to a preset algorithm and the offset object to be determined; if not, determining the offset object to be determined as the offset object corresponding to the second distortion offset.
30. The apparatus of claim 22, wherein said first noise data is discrete color noise data.
31. The apparatus of claim 22, wherein the second noise data is continuous black and white noise data.
32. The apparatus of claim 22, wherein the apparatus further comprises:
the segmentation module is suitable for carrying out scene segmentation processing on the image to be processed to obtain a scene segmentation result corresponding to the image to be processed; wherein, the image to be processed contains a specific object;
the determining module is suitable for determining the contour information of the specific object according to a scene segmentation result corresponding to the image to be processed;
and the third generation module is suitable for obtaining a local distortion effect image according to the contour information of the specific object, the image to be processed and the distortion effect image.
33. The apparatus of claim 32, wherein the third generating means is further adapted to: extracting a local image from the distortion effect image according to the contour information of the specific object; and carrying out fusion processing on the image to be processed and the local image to obtain a local distortion effect image.
34. The apparatus of claim 22, wherein the acquisition module is further adapted to: acquiring the to-be-processed image captured by the image acquisition equipment in real time.
35. The apparatus of claim 22, wherein the apparatus further comprises: a display module suitable for displaying the distortion effect image.
36. The apparatus of claim 35, wherein the display module is further adapted to: displaying the distortion effect image in real time.
37. The apparatus of claim 32, wherein the apparatus further comprises: a first storage module suitable for storing the distortion effect image according to a shooting instruction triggered by a user.
38. The apparatus of claim 32, wherein the apparatus further comprises: a second storage module suitable for storing the video formed by the distortion effect image as a frame image according to a recording instruction triggered by a user.
39. The apparatus of claim 32, wherein the apparatus further comprises: a display module suitable for displaying the local distortion effect image.
40. The apparatus of claim 39, wherein the display module is further adapted to: displaying the local distortion effect image in real time.
41. The apparatus of claim 32, wherein the apparatus further comprises: a first storage module suitable for storing the local distortion effect image according to a shooting instruction triggered by a user.
42. The apparatus of claim 32, wherein the apparatus further comprises: a second storage module suitable for storing the video formed by the local distortion effect image as a frame image according to a recording instruction triggered by a user.
43. A computing device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the image distortion processing method according to any one of claims 1-21.
44. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform operations corresponding to the image warping method according to any one of claims 1-21.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711002711.2A CN107564085B (en) | 2017-10-24 | 2017-10-24 | Image warping processing method and device, computing equipment and computer storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107564085A CN107564085A (en) | 2018-01-09 |
CN107564085B true CN107564085B (en) | 2021-05-07 |
Family
ID=60987379
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711002711.2A Expired - Fee Related CN107564085B (en) | 2017-10-24 | 2017-10-24 | Image warping processing method and device, computing equipment and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107564085B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108900903B (en) * | 2018-07-27 | 2021-04-27 | 北京市商汤科技开发有限公司 | Video processing method and device, electronic equipment and storage medium |
CN111097169B (en) * | 2019-12-25 | 2023-08-29 | 上海米哈游天命科技有限公司 | Game image processing method, device, equipment and storage medium |
CN112669429A (en) * | 2021-01-07 | 2021-04-16 | 稿定(厦门)科技有限公司 | Image distortion rendering method and device |
CN113181639B (en) * | 2021-04-28 | 2024-06-04 | 网易(杭州)网络有限公司 | Graphic processing method and device in game |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101197678A (en) * | 2007-12-27 | 2008-06-11 | 腾讯科技(深圳)有限公司 | Picture identifying code generation method and generation device |
US9342858B2 (en) * | 2012-05-31 | 2016-05-17 | Apple Inc. | Systems and methods for statistics collection using clipped pixel tracking |
US20170046833A1 (en) * | 2015-08-10 | 2017-02-16 | The Board Of Trustees Of The Leland Stanford Junior University | 3D Reconstruction and Registration of Endoscopic Data |
CN105631924A (en) * | 2015-12-28 | 2016-06-01 | 北京像素软件科技股份有限公司 | Method for implementing distortion effect in scene |
Non-Patent Citations (1)
Title |
---|
"Design and Implementation of Image Special Effects Based on Android" (《基于Android的图像特效的设计与实现》); Guan Sheng (管胜); China Master's Theses Full-text Database, Information Science and Technology Series; 2012-06-15 (No. 6); main text pp. 27-31 *
Also Published As
Publication number | Publication date |
---|---|
CN107564085A (en) | 2018-01-09 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20210507 |