CN107564085B - Image warping processing method, device, computing device and computer storage medium - Google Patents

Image warping processing method, device, computing device and computer storage medium

Info

Publication number
CN107564085B
CN107564085B (application number CN201711002711.2A)
Authority
CN
China
Prior art keywords
image
distortion
data
offset
effect image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201711002711.2A
Other languages
Chinese (zh)
Other versions
CN107564085A (en)
Inventor
眭一帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201711002711.2A
Publication of CN107564085A
Application granted
Publication of CN107564085B

Landscapes

  • Image Processing (AREA)

Abstract



The invention discloses an image warping processing method, apparatus, computing device and computer storage medium. The image warping processing method includes: acquiring an image to be processed and first noise data; for each pixel point in the image to be processed, determining first warped texture data according to the first noise data; processing the color component values of the pixel point by using the first warped texture data; obtaining image distortion data corresponding to the image to be processed; obtaining a distortion effect image according to the image distortion data; and saving the distortion effect image according to a shooting instruction triggered by the user. The invention adopts a deep learning method to accomplish scene segmentation processing with high efficiency and high accuracy. According to the technical scheme provided by the invention, the color component values of the pixel points in the image are processed using the noise data, so that a distortion effect image can be conveniently obtained; this optimizes the image warping processing approach and improves the image distortion effect.


Description

Image warping processing method and device, computing equipment and computer storage medium
Technical Field
The invention relates to the field of image processing, in particular to an image warping processing method, an image warping processing device, computing equipment and a computer storage medium.
Background
With the development of science and technology, image acquisition equipment improves day by day: acquired images are clearer, and their resolution and display effect have also greatly improved. However, an existing image may still not meet a user's requirements, and the user may want to apply personalized processing to it, for example, to make its content show the distortion seen when looking through rising steam. In the prior art, images are mostly processed with functions such as sine or cosine to obtain a distortion effect; however, images processed in this way have a poor distortion effect, looking stiff and unnatural.
Disclosure of Invention
In view of the above, the present invention has been made to provide an image warping processing method, apparatus, computing device and computer storage medium that overcome or at least partially solve the above-mentioned problems.
According to an aspect of the present invention, there is provided an image warping processing method, including:
acquiring an image to be processed and first noise data;
determining first distorted texture data according to the first noise data for each pixel point in the image to be processed; processing color component values of the pixel points by using the first warped texture data;
obtaining image distortion data corresponding to the image to be processed;
and obtaining a distortion effect image according to the image distortion data.
Further, the first noise data includes a plurality of first color data;
determining the first warped texture data from the first noise data further comprises:
extracting first color data from the first noise data;
determining first warped texture data according to the extracted first color data.
Further, extracting the first color data from the first noise data further includes: extracting the first color data from the first noise data according to a time parameter.
Further, processing color component values of the pixel points using the first warped texture data further comprises:
determining a first distortion offset corresponding to the pixel point by using the first distortion texture data;
determining a pixel point corresponding to the first distortion offset according to the first distortion offset and the pixel point;
and assigning the color component values of the pixel points to the pixel points corresponding to the first distortion offset.
Further, determining a first warping offset corresponding to the pixel point using the first warped texture data further includes: and determining a first distortion offset corresponding to the pixel point by using the first distortion texture data and the preset distortion coefficient.
Further, obtaining a warping effect image according to the image warping data further includes: and determining a basic effect image according to the image distortion data, and determining the basic effect image as a distortion effect image.
Further, after acquiring the image to be processed and the first noise data, the method further includes:
acquiring second noise data;
processing the second noise data by using the first noise data to generate a surface layer smoke effect map;
according to the image distortion data, obtaining the distortion effect image specifically comprises the following steps: determining a basic effect image according to the image distortion data; and adding a surface layer smoke effect map to the basic effect image to obtain a distortion effect image.
Further, the second noise data includes a plurality of second color data;
processing the second noise data using the first noise data, the generating the surface smoke effect map further comprising:
determining second warped texture data from the first noise data for each of the second color data in the second noise data; determining a second distortion offset corresponding to the second color data using the second warped texture data; determining an offset object corresponding to the second distortion offset according to the second distortion offset and the second color data; assigning the second color data to an offset object corresponding to the second distortion offset;
obtaining noise distortion data corresponding to the second noise data;
and generating a surface layer smoke effect map according to the noise distortion data.
Further, generating a surface smoke effect map based on the noise distortion data further comprises: and performing semi-transparent processing according to a preset function and/or a preset adding color value and noise distortion data to generate a surface layer smoke effect map.
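The preset function and preset added color value are left open here. A minimal sketch of the semi-transparent processing, assuming the noise distortion data is a grayscale map in [0, 1], a hypothetical light-gray tint, and alpha capped at half opacity:

```python
import numpy as np

def smoke_layer(noise_warp, tint=(200, 200, 210)):
    """Turn warped grayscale noise into a semi-transparent RGBA smoke map:
    the noise value drives the alpha channel, a preset color tints the RGB."""
    h, w = noise_warp.shape
    layer = np.empty((h, w, 4), dtype=np.uint8)
    layer[..., :3] = tint                                # preset added color value
    layer[..., 3] = (noise_warp * 128).astype(np.uint8)  # at most half opacity
    return layer
```

The resulting layer can then be alpha-blended over the base effect image, as described for step S206 below.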
Further, determining an offset object corresponding to the second warping offset amount according to the second warping offset amount and the second color data further includes:
obtaining an offset object to be determined according to the second distortion offset and the second color data;
judging whether the offset object to be determined exceeds a preset object range or not; if so, calculating to obtain an offset object corresponding to the second distortion offset according to a preset algorithm and the offset object to be determined; if not, determining the offset object to be determined as the offset object corresponding to the second distortion offset.
Further, the first noise data is discrete color noise data.
Further, the second noise data is continuous black-and-white noise data.
Further, the method further comprises:
carrying out scene segmentation processing on the image to be processed to obtain a scene segmentation result corresponding to the image to be processed; wherein, the image to be processed contains a specific object;
determining the contour information of a specific object according to a scene segmentation result corresponding to the image to be processed;
and obtaining a local distortion effect image according to the contour information of the specific object, the image to be processed and the distortion effect image.
Further, obtaining the local distortion effect image according to the contour information of the specific object, the image to be processed, and the distortion effect image further includes:
extracting a local image from the distortion effect image according to the contour information of the specific object;
and carrying out fusion processing on the image to be processed and the local image to obtain a local distortion effect image.
Further, acquiring the image to be processed further comprises: and acquiring the to-be-processed image captured by the image acquisition equipment in real time.
Further, after obtaining the distortion effect image, the method further comprises: displaying the distortion effect image.
Further, displaying the warp effect image further comprises: and displaying the distortion effect image in real time.
Further, after obtaining the distortion effect image, the method further comprises: and saving the distortion effect image according to a shooting instruction triggered by a user.
Further, after obtaining the distortion effect image, the method further comprises: and saving the video formed by the distortion effect image as a frame image according to a recording instruction triggered by a user.
Further, after obtaining the local warping effect image, the method further includes: displaying the local distortion effect image.
Further, displaying the local distortion effect image further comprises: and displaying the local distortion effect image in real time.
Further, after obtaining the local warping effect image, the method further includes: and saving the local distortion effect image according to a shooting instruction triggered by a user.
Further, after obtaining the local warping effect image, the method further includes: and storing the video formed by the local distortion effect image as a frame image according to a recording instruction triggered by a user.
According to another aspect of the present invention, there is provided an image warping processing apparatus including:
an acquisition module adapted to acquire an image to be processed and first noise data;
the first processing module is suitable for determining first distorted texture data according to the first noise data aiming at each pixel point in the image to be processed; processing color component values of the pixel points by using the first warped texture data;
the first generation module is suitable for obtaining image distortion data corresponding to the image to be processed;
and the second generation module is suitable for obtaining a distortion effect image according to the image distortion data.
Further, the first noise data includes a plurality of first color data;
the first processing module is further adapted to:
extracting first color data from the first noise data;
determining first warped texture data according to the extracted first color data.
Further, the first processing module is further adapted to: first color data is extracted from the first noise data according to a time parameter.
Further, the first processing module is further adapted to:
determining a first distortion offset corresponding to the pixel point by using the first distortion texture data;
determining a pixel point corresponding to the first distortion offset according to the first distortion offset and the pixel point;
and assigning the color component values of the pixel points to the pixel points corresponding to the first distortion offset.
Further, the first processing module is further adapted to: and determining a first distortion offset corresponding to the pixel point by using the first distortion texture data and the preset distortion coefficient.
Further, the second generating module is further adapted to: and determining a basic effect image according to the image distortion data, and determining the basic effect image as a distortion effect image.
Further, the obtaining module is further adapted to: acquiring second noise data;
the device also includes: the second processing module is suitable for processing the second noise data by using the first noise data to generate a surface layer smoke effect map;
the second generation module is further adapted to: determining a basic effect image according to the image distortion data; and adding a surface layer smoke effect map to the basic effect image to obtain a distortion effect image.
Further, the second noise data includes a plurality of second color data;
the second processing module is further adapted to:
determining second warped texture data from the first noise data for each of the second color data in the second noise data; determining a second distortion offset corresponding to the second color data using the second warped texture data; determining an offset object corresponding to the second distortion offset according to the second distortion offset and the second color data; assigning the second color data to an offset object corresponding to the second distortion offset;
obtaining noise distortion data corresponding to the second noise data;
and generating a surface layer smoke effect map according to the noise distortion data.
Further, the second processing module is further adapted to: and performing semi-transparent processing according to a preset function and/or a preset adding color value and noise distortion data to generate a surface layer smoke effect map.
Further, the second processing module is further adapted to:
obtaining an offset object to be determined according to the second distortion offset and the second color data;
judging whether the offset object to be determined exceeds a preset object range or not; if so, calculating to obtain an offset object corresponding to the second distortion offset according to a preset algorithm and the offset object to be determined; if not, determining the offset object to be determined as the offset object corresponding to the second distortion offset.
Further, the first noise data is discrete color noise data.
Further, the second noise data is continuous black-and-white noise data.
Further, the apparatus further comprises:
the segmentation module is suitable for carrying out scene segmentation processing on the image to be processed to obtain a scene segmentation result corresponding to the image to be processed; wherein, the image to be processed contains a specific object;
the determining module is suitable for determining the outline information of the specific object according to a scene segmentation result corresponding to the image to be processed;
and the third generation module is suitable for obtaining a local distortion effect image according to the contour information of the specific object, the image to be processed and the distortion effect image.
Further, the third generating module is further adapted to: extracting a local image from the distortion effect image according to the contour information of the specific object; and carrying out fusion processing on the image to be processed and the local image to obtain a local distortion effect image.
Further, the obtaining module is further adapted to: and acquiring the to-be-processed image captured by the image acquisition equipment in real time.
Further, the apparatus further comprises: and the display module is suitable for displaying the distortion effect image.
Further, the display module is further adapted to: and displaying the distortion effect image in real time.
Further, the apparatus further comprises: and the first storage module is suitable for storing the distortion effect image according to a shooting instruction triggered by a user.
Further, the apparatus further comprises: and the second storage module is suitable for storing the video formed by the distorted effect image as the frame image according to the recording instruction triggered by the user.
Further, the apparatus further comprises: and the display module is suitable for displaying the local distortion effect image.
Further, the display module is further adapted to: and displaying the local distortion effect image in real time.
Further, the apparatus further comprises: and the first storage module is suitable for storing the local distortion effect image according to a shooting instruction triggered by a user.
Further, the apparatus further comprises: and the second storage module is suitable for storing the video formed by the local distortion effect image as the frame image according to the recording instruction triggered by the user.
According to yet another aspect of the present invention, there is provided a computing device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the image distortion processing method.
According to still another aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the image warping processing method.
According to the technical scheme provided by the invention, the image to be processed and the first noise data are obtained, then, aiming at each pixel point in the image to be processed, the first distorted texture data is determined according to the first noise data, the color component values of the pixel points are processed by utilizing the first distorted texture data to obtain the image distorted data corresponding to the image to be processed, and then, the image with the distorted effect is obtained according to the image distorted data. According to the technical scheme provided by the invention, the noise data is utilized to process the color component values of the pixel points in the image, so that the image with the distortion effect can be conveniently obtained, the image distortion processing mode is optimized, and the image distortion effect is improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 shows a flow diagram of an image warping processing method according to an embodiment of the invention;
FIG. 2 shows a flow diagram of an image warping processing method according to another embodiment of the present invention;
fig. 3 is a block diagram showing a configuration of an image warping processing apparatus according to an embodiment of the present invention;
fig. 4 is a block diagram showing a configuration of an image warping processing apparatus according to another embodiment of the present invention;
FIG. 5 illustrates a schematic structural diagram of a computing device according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a flow diagram of an image warping processing method according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step S100, an image to be processed and first noise data are acquired.
Specifically, the image to be processed may be an image taken by the user, an image from a website, or an image shared by other users, which is not limited herein. When the user wants to process the image to be processed into an image with a distortion effect, for example the distortion seen when looking through steam, the image to be processed and the first noise data may be acquired in step S100. The first noise data includes a plurality of first color data; specifically, the first noise data may be discrete color noise data, which helps improve the image warping effect so that the processed image has a natural, better distortion effect.
Step S101, aiming at each pixel point in the image to be processed, first distorted texture data is determined according to first noise data.
In step S101, it is required to determine, for each pixel point in the image to be processed, first distorted texture data corresponding to the pixel point according to the first noise data. Specifically, the pixel points in the image to be processed correspond to the first color data in the first noise data, and then for each pixel point in the image to be processed, the first distorted texture data corresponding to the pixel point is determined according to the first color data corresponding to the pixel point in the first noise data, so that different first distorted texture data are determined for the pixel points in the image to be processed, and compared with the case that all the pixel points use the same distorted texture data, the processed image has a natural and better distorted effect.
In step S102, the color component values of the pixel points are processed by using the first warped texture data.
For each pixel point in the image to be processed, the color component values of the pixel point are processed, for example by assignment, using the first warped texture data corresponding to that pixel point. Taking a color image in RGB mode as an example, the color component values of a pixel point comprise the values of the three color channels red, green and blue. In a specific application, a person skilled in the art may select suitable color channels from the three and process only their component values, or process the component values of all channels, which is not limited herein. For example, only the red channel and green channel component values of a pixel point may be processed.
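To illustrate the channel-selective case, a small sketch (the helper name and default channels are hypothetical) that copies only the red and green components of a source pixel to a target pixel:

```python
import numpy as np

def warp_channels(image, src, dst, channels=(0, 1)):
    """Copy only the selected color channels (default: red and green)
    from the source pixel to the target pixel; the target's other
    channels keep their original values."""
    sx, sy = src
    dx, dy = dst
    out = image.copy()
    for c in channels:
        out[dy, dx, c] = image[sy, sx, c]
    return out
```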
And step S103, obtaining image distortion data corresponding to the image to be processed.
The data obtained by processing the color component values of every pixel point in the image to be processed constitute the image distortion data corresponding to the image to be processed.
And step S104, obtaining a distortion effect image according to the image distortion data.
After the image warping data is obtained in step S103, a warping effect image is obtained from the image warping data in step S104. For example, if a user wants to process all the contents of the image to be processed to have a distortion effect viewed through steam, a base effect image, i.e., an image in which all the contents of the image to be processed have a distortion effect, may be determined based on the image distortion data, and then the base effect image may be determined as a distortion effect image.
According to the image warping method provided by this embodiment, an image to be processed and first noise data are acquired, then, for each pixel point in the image to be processed, first warped texture data is determined according to the first noise data, color component values of the pixel points are processed by using the first warped texture data, image warping data corresponding to the image to be processed is obtained, and then, a warped effect image is obtained according to the image warping data. According to the technical scheme provided by the invention, the noise data is utilized to process the color component values of the pixel points in the image, so that the image with the distortion effect can be conveniently obtained, the image distortion processing mode is optimized, and the image distortion effect is improved.
Fig. 2 shows a flow chart of an image warping processing method according to another embodiment of the present invention, as shown in fig. 2, the method includes the steps of:
step S200, acquiring an image to be processed, first noise data, and second noise data.
The first noise data includes a plurality of first color data, and the second noise data includes a plurality of second color data. Specifically, the first noise data is discrete color noise data and the second noise data is continuous black-and-white noise data; the second noise data can be processed using the first noise data to generate a surface-layer smoke effect map. Using discrete first noise data improves the image distortion effect, so that the processed image has a natural, better distortion effect; using continuous second noise data, a surface-layer smoke effect map with a continuous smoke effect can be generated, giving the processed image a natural swirling-smoke effect.
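The embodiment does not say how the two noise inputs are produced; a sketch that merely satisfies the stated properties (first noise: discrete color values; second noise: continuous black-and-white values, here obtained by box-blurring random grayscale, which is an assumption):

```python
import numpy as np

def make_first_noise(h, w, seed=0):
    """Discrete color noise: independent random RGB values per texel."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 256, size=(h, w, 3), dtype=np.uint8)

def make_second_noise(h, w, seed=0, blur=5):
    """Continuous black-and-white noise: random grayscale values smoothed
    with a separable box filter so neighbouring texels vary gradually."""
    rng = np.random.default_rng(seed)
    gray = rng.random((h, w))
    kernel = np.ones(blur) / blur
    # blur along rows, then columns, for spatial continuity
    gray = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, gray)
    gray = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, gray)
    return gray
```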
Alternatively, in step S200, the image to be processed captured by the image capturing device may be acquired in real time. Taking a mobile terminal as the image capturing device for example, the image to be processed captured by the camera of the mobile terminal is acquired in real time; the image to be processed may be any image, such as an image containing a landscape or a human body, which is not limited herein.
Step S201, extracting first color data from first noise data aiming at each pixel point in an image to be processed; determining first warped texture data according to the extracted first color data.
For each pixel point in the image to be processed, the first color data corresponding to that pixel point is extracted from the first noise data, and the first warped texture data is determined from the extracted first color data. In this way, different first warped texture data are determined for different pixel points; compared with using the same warped texture data for all pixel points, this helps obtain a natural, better distortion effect.
To further improve the image warping effect, the first color data may be extracted from the first noise data according to a temporal parameter. Specifically, for the same pixel point in the image to be processed, when the time parameter changes, different first color data are extracted from the first noise data and used as first color data corresponding to the pixel point.
In practical applications, the first noise data may be a color noise map, in which the color component values corresponding to each pixel constitute one item of first color data. For convenience of explanation, assume that the pixel points in the image to be processed are A1, A2, A3, etc., and the pixel points in the color noise map are B1, B2, B3, etc. For pixel point A1 in the image to be processed, when the time parameter is time 1, the color component value corresponding to pixel point B1 is extracted from the color noise map as the first color data corresponding to A1; when the time parameter is time 2, the color component value corresponding to pixel point B3 is extracted as the first color data corresponding to A1.
Specifically, for each pixel point in the image to be processed, the first warped texture data may be obtained by calculation according to the extracted first color data corresponding to the pixel point and a preset first calculation function, where a person skilled in the art may set the preset first calculation function according to actual needs, and the method is not limited herein.
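Putting step S201 together, a hypothetical sketch in which the sampling position drifts with the time parameter and a simple trigonometric mapping stands in for the preset first calculation function (both the drift speeds and the function form are assumptions, not the patent's definitions):

```python
import math
import numpy as np

def sample_first_color(noise, x, y, t, speed=(7, 13)):
    """Extract first color data for pixel (x, y) at time t: the sampling
    position slides across the color noise map as t changes, so the same
    pixel gets different first color data at different times."""
    h, w = noise.shape[:2]
    u = (x + int(t * speed[0])) % w   # wrap the drifted position
    v = (y + int(t * speed[1])) % h
    return noise[v, u]

def first_warped_texture(color):
    """Hypothetical preset first calculation function: map first color
    data (r, g, b in 0-255) to a (dx, dy) texture pair in [-1, 1]."""
    r, g, b = (int(c) for c in color)
    return (math.sin((r + b) / 255.0 * math.pi),
            math.cos((g + b) / 255.0 * math.pi))
```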
Step S202, determining a first warping offset corresponding to the pixel point by using the first warped texture data.
Specifically, a first distortion offset corresponding to the pixel point may be determined by using the first distortion texture data and the preset distortion coefficient. The person skilled in the art can adjust the degree of distortion of the image by adjusting the coefficient of distortion.
Step S203, determining a pixel point corresponding to the first distortion offset according to the first distortion offset and the pixel point.
After the first distortion offset corresponding to the pixel point is determined, the pixel point corresponding to the first distortion offset can be determined according to the first distortion offset and the pixel point.
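Steps S202 and S203 can be sketched as follows, assuming the warped texture pair lies in [-1, 1], the offset is expressed in pixels, and out-of-range targets are clamped to the image bounds (the coefficient value used in the test is illustrative):

```python
def warp_target(x, y, texture, coefficient, width, height):
    """Map pixel (x, y) to the pixel indicated by the first distortion
    offset: the warped texture pair scaled by the preset distortion
    coefficient, converted to pixels and clamped to the image bounds."""
    dx = texture[0] * coefficient * width
    dy = texture[1] * coefficient * height
    tx = min(max(int(round(x + dx)), 0), width - 1)
    ty = min(max(int(round(y + dy)), 0), height - 1)
    return tx, ty
```

A larger coefficient moves the target pixel further from the source, which is how the distortion degree is adjusted.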
Step S204, assigning the color component value of the pixel point to the pixel point corresponding to the first distortion offset.
After the pixel point corresponding to the first distortion offset is determined, the color component values of the pixel point are assigned to it. For example, if for pixel point A1 in the image to be processed the pixel point corresponding to the first distortion offset is A2, the color component values of A1 are assigned to A2, so that A2 takes on the color component values of A1, thereby achieving the image distortion effect.
Assuming that the color component values of the pixel points in the image to be processed include color component values of three color channels, namely red, green and blue, in a specific application, a person skilled in the art may select a color component value of a suitable color channel from the color component values of the three color channels corresponding to the pixel points and assign the color component value to the pixel point corresponding to the first distortion offset, or assign the color component values of all the color channels to the pixel point corresponding to the first distortion offset, which is not limited herein. For example, only the color component value of the red color channel and the color component value of the green color channel of the pixel point may be correspondingly assigned to the pixel point corresponding to the first distortion offset.
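Applying steps S202 to S204 over the whole image can be sketched as a single naive per-pixel pass (per-pixel offsets are assumed precomputed in pixel units; when two sources target the same pixel, the later one overwrites the earlier):

```python
import numpy as np

def warp_image(image, offsets):
    """Assign each pixel's color component values to the pixel indicated
    by its distortion offset; the result is the image distortion data."""
    h, w = image.shape[:2]
    out = image.copy()
    for y in range(h):
        for x in range(w):
            dx, dy = offsets[y, x]
            tx = min(max(x + dx, 0), w - 1)   # clamp target inside image
            ty = min(max(y + dy, 0), h - 1)
            out[ty, tx] = image[y, x]
    return out
```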
In step S205, image distortion data corresponding to the image to be processed is obtained.
The data obtained by assigning the color component value of each pixel point in the image to be processed in this way is the image distortion data corresponding to the image to be processed.
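Steps S201 to S205 can be summarized as a per-pixel loop: sample the noise, derive warped texture data, scale it by the distortion coefficient into an offset, and assign the source color to the offset target. The following Python sketch illustrates that flow; the sine-based calculation function, the modulo wrap at image borders, and the time-shifted noise sampling are illustrative assumptions, since the patent leaves the concrete functions open:

```python
import numpy as np

def warp_image(image, noise, distortion_coeff=8.0, t=0.0):
    """Sketch of steps S201-S205: derive a warp offset for every pixel
    from color noise data and reassign color values accordingly."""
    h, w = image.shape[:2]
    nh, nw = noise.shape[:2]
    warped = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            # Extract first color data from the noise map; the time shift
            # keeps the distortion animated across frames (an assumption).
            n = noise[(y + int(t)) % nh, (x + int(t)) % nw]
            # First warped texture data via a preset calculation function
            # (a sine mapping is assumed here; the patent leaves it open).
            tex = np.sin(n[:2].astype(np.float32) / 255.0 * 2 * np.pi)
            # First distortion offset = texture data * distortion coefficient.
            dx, dy = (tex * distortion_coeff).astype(int)
            ty, tx = (y + dy) % h, (x + dx) % w
            # Assign the source pixel's color to the offset target pixel.
            warped[ty, tx] = image[y, x]
    return warped
```

With `distortion_coeff` set to 0 the output equals the input, and larger coefficients give a stronger warp, matching the adjustable distortion coefficient described above.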
In step S206, a base effect image is determined according to the image warping data.
The basic effect image is an image in which all of the content of the image to be processed has the distortion effect.
Step S207, determining second warped texture data according to the first noise data for each second color data in the second noise data.
Each second color data in the second noise data corresponds to a first color data in the first noise data. Therefore, for each second color data in the second noise data, the first color data corresponding to it is extracted from the first noise data, and second warped texture data is then calculated from the extracted first color data using a preset second calculation function. In this way different second warped texture data is determined for each second color data, which helps to obtain a more natural smoke effect than when all second color data use the same warped texture data.
The preset second calculation function may be set by a person skilled in the art according to actual needs, and the preset second calculation function may be the same as the preset first calculation function or different from the preset first calculation function, which is not limited herein.
In step S208, a second warping offset corresponding to the second color data is determined using the second warped texture data.
Specifically, a second distortion offset corresponding to the second color data may be determined using the second distorted texture data and a preset smoke distortion coefficient. The person skilled in the art can adjust the smoke distortion degree by adjusting the smoke distortion coefficient.
In step S209, an offset object corresponding to the second distortion offset amount is determined according to the second distortion offset amount and the second color data.
Specifically, an offset object to be determined is obtained according to the second distortion offset and the second color data, and it is then judged whether this offset object exceeds a preset object range. If so, the offset object corresponding to the second distortion offset is calculated from the offset object to be determined according to a preset algorithm; if not, the offset object to be determined is taken directly as the offset object corresponding to the second distortion offset.
Since the to-be-determined offset object obtained according to the second distortion offset and the second color data may exceed the preset object range, it is also necessary to determine whether the to-be-determined offset object exceeds the preset object range. And if the to-be-determined offset object exceeds the preset object range, calculating to obtain an offset object corresponding to the second distortion offset according to the preset algorithm and the to-be-determined offset object, so that the offset object is adjusted, and the subsequent generation of the surface layer smoke effect map with continuous smoke effect is facilitated. If the offset object to be determined does not exceed the preset object range, the offset object to be determined may be directly determined as the offset object corresponding to the second distortion offset.
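The range check described above can be sketched as follows. Treating each second color data's position as a one-dimensional index and using a modulo wrap as the "preset algorithm" are assumptions made for illustration, chosen because wrapping keeps the smoke effect continuous at the edges:

```python
def resolve_offset_object(index, second_distortion_offset, object_range):
    """Determine the offset object for a second distortion offset.

    index: position of the second color data in the second noise data.
    object_range: size of the preset object range [0, object_range).
    """
    candidate = index + second_distortion_offset  # offset object to be determined
    if 0 <= candidate < object_range:
        # Within the preset object range: use it directly.
        return candidate
    # Exceeds the range: adjust with the (assumed) preset algorithm,
    # a modulo wrap that keeps the smoke continuous.
    return candidate % object_range
```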
In step S210, the second color data is assigned to the offset object corresponding to the second distortion offset amount.
After determining the offset object corresponding to the second warping offset amount, the second color data is assigned to the offset object corresponding to the second warping offset amount.
In practical applications, the first noise data may be a color noise map and the second noise data a black-and-white noise map, so that the color component value corresponding to each pixel in the color noise map is a first color data and the color component value corresponding to each pixel in the black-and-white noise map is a second color data. For convenience of explanation, assume that the pixel points in the color noise map are B1, B2, B3, etc. and those in the black-and-white noise map are C1, C2, C3, etc. Suppose a certain second color data is the color component value corresponding to pixel point C1 in the black-and-white noise map, and that for this second color data the offset object corresponding to the second distortion offset is pixel point C3. The color component value of pixel point C1 is then assigned to pixel point C3, so that pixel point C3 has the color component value of pixel point C1, thereby achieving the smoke effect.
In step S211, noise distortion data corresponding to the second noise data is obtained.
The data obtained by performing this assignment processing on each second color data in the second noise data is the noise distortion data corresponding to the second noise data.
Step S212, generating a surface layer smoke effect map according to the noise distortion data.
After the noise distortion data is obtained, a surface layer smoke effect map is generated from it. Specifically, the surface layer smoke effect map can be generated by performing semi-transparent processing on the noise distortion data according to a preset function and/or a preset additive color value. A person skilled in the art can set the preset function and the preset additive color value according to actual needs; this is not limited here. For example, the preset function may be a sine function or a cosine function, and the preset additive color value may be the color value corresponding to golden yellow, red, or the like. Since the second noise data is black-and-white noise data, a surface smoke effect map with a colored smoke effect can be generated from the preset additive color value and the noise distortion data. For example, when the preset additive color value is the color value corresponding to golden yellow, a surface layer smoke effect map with a golden-yellow smoke effect is generated.
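As an illustration of the semi-transparent processing, the sketch below tints grayscale noise distortion data with a preset additive color value (golden yellow here) and derives the transparency from a sine of time; both choices are assumptions standing in for the unspecified preset function and preset additive color value:

```python
import numpy as np

def make_smoke_layer(noise_distortion, add_color=(255, 215, 0), t=0.0):
    """Generate an RGBA surface smoke map from grayscale noise distortion data."""
    gray = noise_distortion.astype(np.float32) / 255.0
    # Colorize the black-and-white noise with the preset additive color value.
    rgb = gray[..., None] * np.array(add_color, dtype=np.float32)
    # Semi-transparent processing: alpha pulses with a sine of time and
    # scales with noise intensity so dark areas stay see-through.
    alpha = gray * (0.5 + 0.5 * np.sin(t)) * 255.0
    return np.concatenate([rgb, alpha[..., None]], axis=-1).astype(np.uint8)
```

Passing a different `add_color`, such as a red value, yields a smoke map with a red smoke effect, matching the color choice described above.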
Step S213, adding a surface layer smoke effect map to the basic effect image to obtain a distortion effect image.
After the basic effect image and the surface layer smoke effect map are obtained, the surface layer smoke effect map is added to the basic effect image to obtain a distortion effect image. The distortion effect image has both a distortion effect and a smoke effect, which improves the image distortion effect and greatly enriches the image effect.
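"Adding" the surface layer smoke effect map to the basic effect image can be interpreted as standard per-pixel alpha compositing, as in this sketch; the patent does not fix the blending rule, so alpha blending is an assumption:

```python
import numpy as np

def add_smoke_layer(base_effect, smoke_rgba):
    """Composite an RGBA smoke map over an RGB basic effect image."""
    alpha = smoke_rgba[..., 3:4].astype(np.float32) / 255.0
    # Source-over blend: smoke where alpha is high, base where it is low.
    blended = (smoke_rgba[..., :3].astype(np.float32) * alpha
               + base_effect.astype(np.float32) * (1.0 - alpha))
    return blended.astype(np.uint8)
```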
Step S214, displaying the distortion effect image in real time.
The obtained distortion effect image is displayed in real time, so that the user can directly see the distortion effect image obtained after the image to be processed is processed. Immediately after the distortion effect image is obtained, the captured image to be processed is replaced on screen by the distortion effect image; the replacement is generally completed within 1/24 second, which is too short for the human eye to notice, so this is equivalent to displaying the distortion effect image in real time.
Step S215, saving the distortion effect image according to the shooting instruction triggered by the user.
After the distortion effect image is displayed, the distortion effect image can be stored according to a shooting instruction triggered by a user. If the user clicks a shooting button of the camera, a shooting instruction is triggered, and the displayed distortion effect image is stored.
Step S216, saving the video formed by the distortion effect images as frame images according to a recording instruction triggered by the user.
When the distortion effect image is displayed, the video formed by the distortion effect image as the frame image can be stored according to a recording instruction triggered by a user. If the user clicks a recording button of the camera, a recording instruction is triggered, and the displayed distorted effect image is stored as a frame image in the video, so that a plurality of distorted effect images are stored as the video formed by the frame images.
Step S215 and step S216 are optional steps of this embodiment and have no fixed execution order; the corresponding step is selected and executed according to the instruction triggered by the user.
In addition, in some application scenarios, the image to be processed contains a specific object, such as a human body, and the user only wants to warp a specific object region or a non-specific object region in the image to be processed, in which case the method may further include: carrying out scene segmentation processing on the image to be processed to obtain a scene segmentation result corresponding to the image to be processed; determining the contour information of a specific object according to a scene segmentation result corresponding to the image to be processed; and obtaining a local distortion effect image according to the contour information of the specific object, the image to be processed and the distortion effect image.
When performing scene segmentation on the image to be processed, a deep learning method can be used. Deep learning is a machine learning method based on representation learning of data. An observation (e.g., an image) can be represented in many ways, such as a vector of intensity values for each pixel, or more abstractly as a series of edges, specially shaped regions, and so on. Some of these representations make tasks (e.g., face recognition or facial expression recognition) easier to learn from examples. The image to be processed is segmented using a deep learning segmentation method to obtain the corresponding scene segmentation result. Specifically, a scene segmentation network obtained by a deep learning method can be used to perform scene segmentation on the image to be processed, and the contour information of the specific object is then determined according to the resulting scene segmentation result. If the specific object is a human body, the contour information of the human body can be determined from the scene segmentation result, so that the human body regions in the image to be processed are distinguished from the regions which are not human bodies.
After the contour information of the specific object is determined, a local image can be extracted from the distortion effect image according to that contour information, and the image to be processed is then fused with the local image to obtain a local distortion effect image. Specifically, the contour information determines which regions of the distortion effect image are specific object regions and which are non-specific object regions (the latter may be called background regions); the image of either the specific object regions or the background regions is then extracted from the distortion effect image as the local image. For example, when the specific object is a human body and the user wants to warp the human body region of the image to be processed, the image of the human body region is extracted from the distortion effect image as the local image according to the human body contour information and fused with the image to be processed; the resulting local distortion effect image has the distortion effect only in the human body region, not in the background region. Conversely, when the user wants to warp the background outside the human body region, the image of the background region is extracted as the local image and fused with the image to be processed, yielding a local distortion effect image in which only the background region has the distortion effect and the human body region does not.
In the case where a local distortion effect image is obtained, it is the local distortion effect image, not the distortion effect image, that is displayed in real time in step S214, saved according to the user-triggered shooting instruction in step S215, and saved as the frame images of a video according to the user-triggered recording instruction in step S216.
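Given a binary segmentation mask for the specific object, the fusion of the image to be processed with the local image reduces to a masked copy. The sketch below assumes such a mask is already available from the scene segmentation network (obtaining the mask itself is not shown):

```python
import numpy as np

def local_warp(original, warped, object_mask, warp_object=True):
    """Fuse the image to be processed with the distortion effect image.

    object_mask: H x W array, nonzero where the specific object (e.g. a
    human body) is located. warp_object selects whether the object region
    or the background region receives the distortion effect.
    """
    region = object_mask.astype(bool)
    if not warp_object:
        region = ~region  # warp the background instead of the object
    out = original.copy()
    out[region] = warped[region]  # paste the local image over the original
    return out
```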
According to the image distortion processing method provided by this embodiment, the color component values of the pixel points in the image are processed using noise data, so that a basic effect image can be conveniently obtained, and one set of noise data is used to process the other to obtain a surface layer smoke effect map, yielding an image with both a distortion effect and a smoke effect. This optimizes the image distortion processing method, effectively improves the image distortion effect, and greatly enriches the image effect. In addition, an image with a local distortion effect can be obtained according to the scene segmentation result corresponding to the image to be processed.
Fig. 3 is a block diagram showing a configuration of an image warping processing apparatus according to an embodiment of the present invention, as shown in fig. 3, the apparatus including: an acquisition module 301, a first processing module 302, a first generation module 303, and a second generation module 304.
The acquisition module 301 is adapted to: an image to be processed and first noise data are acquired.
The first noise data includes a plurality of first color data. Specifically, the first noise data may be discrete color noise data; using discrete first noise data helps improve the image warping effect, so that the processed image has a natural warping effect.
The first processing module 302 is adapted to: determining first distorted texture data according to the first noise data for each pixel point in the image to be processed; color component values of the pixel points are processed using the first warped texture data.
The first generation module 303 is adapted to: and obtaining image distortion data corresponding to the image to be processed.
The second generation module 304 is adapted to: and obtaining a distortion effect image according to the image distortion data.
Optionally, the second generating module 304 is further adapted to: and determining a basic effect image according to the image distortion data, and determining the basic effect image as a distortion effect image.
According to the image warping processing apparatus provided in this embodiment, the obtaining module obtains an image to be processed and first noise data, the first processing module determines first warped texture data according to the first noise data for each pixel point in the image to be processed, color component values of the pixel points are processed by using the first warped texture data, the first generating module obtains image warping data corresponding to the image to be processed, and the second generating module obtains a warped effect image according to the image warping data. According to the technical scheme provided by the invention, the noise data is utilized to process the color component values of the pixel points in the image, so that the image with the distortion effect can be conveniently obtained, the image distortion processing mode is optimized, and the image distortion effect is improved.
Fig. 4 is a block diagram showing a configuration of an image warping processing apparatus according to another embodiment of the present invention, as shown in fig. 4, the apparatus including: the device comprises an acquisition module 401, a first processing module 402, a first generation module 403, a second processing module 404, a second generation module 405, a display module 406, a first saving module 407 and a second saving module 408.
The acquisition module 401 is adapted to: an image to be processed, first noise data, and second noise data are acquired.
Wherein the first noise data includes a plurality of first color data, and the second noise data includes a plurality of second color data. Specifically, the first noise data is discrete color noise data, and the second noise data is continuous black and white noise data, and the second noise data can be processed by using the first noise data to generate a surface layer smoke effect map.
Optionally, the obtaining module 401 is further adapted to: and acquiring the to-be-processed image captured by the image acquisition equipment in real time.
The first processing module 402 is adapted to: extracting first color data from the first noise data aiming at each pixel point in the image to be processed; determining first warped texture data according to the extracted first color data; color component values of the pixel points are processed using the first warped texture data.
Wherein the first processing module 402 is further adapted to: first color data is extracted from the first noise data according to a time parameter.
The first processing module 402 is further adapted to: determining a first distortion offset corresponding to the pixel point by using the first distortion texture data; determining a pixel point corresponding to the first distortion offset according to the first distortion offset and the pixel point; and assigning the color component values of the pixel points to the pixel points corresponding to the first distortion offset.
The first processing module 402 is further adapted to: and determining a first distortion offset corresponding to the pixel point by using the first distortion texture data and the preset distortion coefficient.
The first generation module 403 is adapted to: and obtaining image distortion data corresponding to the image to be processed.
The second processing module 404 is adapted to: and processing the second noise data by using the first noise data to generate a surface layer smoke effect map.
In particular, the second processing module 404 is further adapted to: determining second warped texture data from the first noise data for each of the second color data in the second noise data; determining a second distortion offset corresponding to the second color data using the second warped texture data; determining an offset object corresponding to the second distortion offset according to the second distortion offset and the second color data; assigning the second color data to an offset object corresponding to the second distortion offset; obtaining noise distortion data corresponding to the second noise data; and generating a surface layer smoke effect map according to the noise distortion data.
Optionally, the second processing module 404 is further adapted to: and performing semi-transparent processing according to a preset function and/or a preset adding color value and noise distortion data to generate a surface layer smoke effect map.
Optionally, the second processing module 404 is further adapted to: obtaining an offset object to be determined according to the second distortion offset and the second color data; judging whether the offset object to be determined exceeds a preset object range or not; if so, calculating to obtain an offset object corresponding to the second distortion offset according to a preset algorithm and the offset object to be determined; if not, determining the offset object to be determined as the offset object corresponding to the second distortion offset.
The second generation module 405 is adapted to: determining a basic effect image according to the image distortion data; and adding a surface layer smoke effect map to the basic effect image to obtain a distortion effect image.
The display module 406 is adapted to: displaying the distortion effect image.
Optionally, the display module 406 is further adapted to: display the distortion effect image in real time. The display module 406 displays the obtained distortion effect image in real time, so that the user can directly see the distortion effect image obtained after the image to be processed is processed. Immediately after the second generation module 405 obtains the distortion effect image, the display module 406 replaces the captured image to be processed with the distortion effect image for display, typically within 1/24 second; since the replacement time is so short, the human eye does not notice the switch, which is equivalent to the display module 406 displaying the distortion effect image in real time.
The first saving module 407 is adapted to: and saving the distortion effect image according to a shooting instruction triggered by a user.
After displaying the distortion effect image, the first saving module 407 may save the distortion effect image according to a shooting instruction triggered by a user. If the user clicks a shooting button of the camera to trigger a shooting instruction, the first saving module 407 saves the displayed distortion effect image.
The second saving module 408 is adapted to: and saving the video formed by the distortion effect image as a frame image according to a recording instruction triggered by a user.
When the distortion effect image is displayed, the second saving module 408 may save the video composed of the distortion effect image as the frame image according to a recording instruction triggered by the user. If the user clicks the recording button of the camera to trigger a recording command, the second saving module 408 saves the displayed distortion effect image as a frame image in the video, so as to save a plurality of distortion effect images as a video composed of frame images.
The corresponding one of the first saving module 407 and the second saving module 408 is executed according to the different instructions triggered by the user.
In addition, in some application scenarios, the image to be processed contains a specific object, such as a human body, and the user only wants to warp a specific object region or a non-specific object region in the image to be processed, in which case the apparatus further includes: a segmentation module 409, a determination module 410 and a third generation module 411.
The segmentation module 409 is adapted to: and carrying out scene segmentation processing on the image to be processed to obtain a scene segmentation result corresponding to the image to be processed.
The determination module 410 is adapted to: and determining the contour information of the specific object according to the scene segmentation result corresponding to the image to be processed.
The third generating module 411 is adapted to: and obtaining a local distortion effect image according to the contour information of the specific object, the image to be processed and the distortion effect image.
Wherein the third generating module 411 is further adapted to: extracting a local image from the distortion effect image according to the contour information of the specific object; and carrying out fusion processing on the image to be processed and the local image to obtain a local distortion effect image.
In this case, the display module 406 displays not the distortion effect image but the partial distortion effect image obtained by the third generation module 411, for example, displays the partial distortion effect image in real time. Similarly, the first saving module 407 is adapted to save the local distortion effect image according to a shooting instruction triggered by a user. The second saving module 408 is adapted to save the video composed of the local distortion effect image as the frame image according to a recording instruction triggered by the user.
According to the image distortion processing device provided by this embodiment, the color component values of the pixels in the image are processed using noise data, so that a basic effect image can be conveniently obtained, and one set of noise data is used to process the other to obtain a surface layer smoke effect map, yielding an image with both a distortion effect and a smoke effect. This optimizes the image distortion processing method, effectively improves the image distortion effect, and greatly enriches the image effect. In addition, an image with a local distortion effect can be obtained according to the scene segmentation result corresponding to the image to be processed.
The invention also provides a nonvolatile computer storage medium, and the computer storage medium stores at least one executable instruction which can execute the image distortion processing method in any method embodiment.
Fig. 5 is a schematic structural diagram of a computing device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computing device.
As shown in fig. 5, the computing device may include: a processor (processor)502, a Communications Interface 504, a memory 506, and a communication bus 508.
Wherein:
the processor 502, communication interface 504, and memory 506 communicate with one another via a communication bus 508.
A communication interface 504 for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is configured to execute the program 510, and may specifically perform relevant steps in the above-described image warping method embodiment.
In particular, program 510 may include program code that includes computer operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement an embodiment of the present invention. The computing device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
And a memory 506 for storing a program 510. The memory 506 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 510 may specifically be configured to cause the processor 502 to execute the image warping processing method in any of the above-described method embodiments. For specific implementation of each step in the program 510, reference may be made to corresponding steps and corresponding descriptions in units in the foregoing image warping processing embodiments, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in accordance with embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.

Claims (44)

1. An image distortion processing method, the method comprising:
acquiring an image to be processed and first noise data;
for each pixel in the image to be processed: determining first distortion texture data according to the first noise data, and processing the color component value of the pixel using the first distortion texture data;
obtaining image distortion data corresponding to the image to be processed; and
obtaining a distortion effect image according to the image distortion data;
wherein, after acquiring the image to be processed and the first noise data, the method further comprises:
acquiring second noise data; and
processing the second noise data using the first noise data to generate a surface smoke effect map;
wherein obtaining the distortion effect image according to the image distortion data specifically comprises: determining a base effect image according to the image distortion data, and adding the surface smoke effect map to the base effect image to obtain the distortion effect image;
wherein the second noise data comprises a plurality of second color data, and processing the second noise data using the first noise data to generate the surface smoke effect map further comprises:
for each second color data in the second noise data: determining second distortion texture data according to the first noise data; determining, using the second distortion texture data, a second distortion offset corresponding to the second color data; determining, according to the second distortion offset and the second color data, an offset object corresponding to the second distortion offset; and assigning the second color data to the offset object corresponding to the second distortion offset;
obtaining noise distortion data corresponding to the second noise data; and
generating the surface smoke effect map according to the noise distortion data.

2. The method according to claim 1, wherein the first noise data comprises a plurality of first color data, and determining the first distortion texture data according to the first noise data further comprises:
extracting first color data from the first noise data; and
determining the first distortion texture data according to the extracted first color data.

3. The method according to claim 2, wherein extracting the first color data from the first noise data further comprises: extracting the first color data from the first noise data according to a time parameter.

4. The method according to any one of claims 1-3, wherein processing the color component value of the pixel using the first distortion texture data further comprises:
determining, using the first distortion texture data, a first distortion offset corresponding to the pixel;
determining, according to the first distortion offset and the pixel, a pixel corresponding to the first distortion offset; and
assigning the color component value of the pixel to the pixel corresponding to the first distortion offset.

5. The method according to claim 4, wherein determining the first distortion offset corresponding to the pixel using the first distortion texture data further comprises: determining the first distortion offset corresponding to the pixel using the first distortion texture data and a preset distortion coefficient.

6. The method according to claim 1, wherein obtaining the distortion effect image according to the image distortion data further comprises: determining a base effect image according to the image distortion data, and taking the base effect image as the distortion effect image.

7. The method according to claim 1, wherein generating the surface smoke effect map according to the noise distortion data further comprises: performing semi-transparency processing according to a preset function and/or a preset added color value together with the noise distortion data, to generate the surface smoke effect map.

8. The method according to claim 1, wherein determining the offset object corresponding to the second distortion offset according to the second distortion offset and the second color data further comprises:
obtaining a candidate offset object according to the second distortion offset and the second color data; and
judging whether the candidate offset object exceeds a preset object range; if so, computing the offset object corresponding to the second distortion offset according to a preset algorithm and the candidate offset object; otherwise, taking the candidate offset object as the offset object corresponding to the second distortion offset.

9. The method according to claim 1, wherein the first noise data is discrete color noise data.

10. The method according to claim 1, wherein the second noise data is continuous black-and-white noise data.

11. The method according to claim 1, wherein the method further comprises:
performing scene segmentation processing on the image to be processed to obtain a scene segmentation result corresponding to the image to be processed, wherein the image to be processed contains a specific object;
determining contour information of the specific object according to the scene segmentation result corresponding to the image to be processed; and
obtaining a local distortion effect image according to the contour information of the specific object, the image to be processed and the distortion effect image.

12. The method according to claim 11, wherein obtaining the local distortion effect image according to the contour information of the specific object, the image to be processed and the distortion effect image further comprises:
extracting a partial image from the distortion effect image according to the contour information of the specific object; and
performing fusion processing on the image to be processed and the partial image to obtain the local distortion effect image.

13. The method according to claim 1, wherein acquiring the image to be processed further comprises: acquiring, in real time, an image to be processed captured by an image capture device.

14. The method according to claim 1, wherein, after obtaining the distortion effect image, the method further comprises: displaying the distortion effect image.

15. The method according to claim 14, wherein displaying the distortion effect image further comprises: displaying the distortion effect image in real time.

16. The method according to claim 11, wherein, after obtaining the distortion effect image, the method further comprises: saving the distortion effect image according to a shooting instruction triggered by a user.

17. The method according to claim 11, wherein, after obtaining the distortion effect image, the method further comprises: saving, according to a recording instruction triggered by a user, a video composed of distortion effect images as frame images.

18. The method according to claim 11, wherein, after obtaining the local distortion effect image, the method further comprises: displaying the local distortion effect image.

19. The method according to claim 18, wherein displaying the local distortion effect image further comprises: displaying the local distortion effect image in real time.

20. The method according to claim 11, wherein, after obtaining the local distortion effect image, the method further comprises: saving the local distortion effect image according to a shooting instruction triggered by a user.

21. The method according to claim 11, wherein, after obtaining the local distortion effect image, the method further comprises: saving, according to a recording instruction triggered by a user, a video composed of local distortion effect images as frame images.

22. An image distortion processing apparatus, the apparatus comprising:
an acquisition module, adapted to acquire an image to be processed and first noise data;
a first processing module, adapted to, for each pixel in the image to be processed, determine first distortion texture data according to the first noise data and process the color component value of the pixel using the first distortion texture data;
a first generation module, adapted to obtain image distortion data corresponding to the image to be processed; and
a second generation module, adapted to obtain a distortion effect image according to the image distortion data;
wherein the acquisition module is further adapted to acquire second noise data;
the apparatus further comprises a second processing module, adapted to process the second noise data using the first noise data to generate a surface smoke effect map;
the second generation module is further adapted to: determine a base effect image according to the image distortion data, and add the surface smoke effect map to the base effect image to obtain the distortion effect image;
wherein the second noise data comprises a plurality of second color data, and the second processing module is further adapted to:
for each second color data in the second noise data: determine second distortion texture data according to the first noise data; determine, using the second distortion texture data, a second distortion offset corresponding to the second color data; determine, according to the second distortion offset and the second color data, an offset object corresponding to the second distortion offset; and assign the second color data to the offset object corresponding to the second distortion offset;
obtain noise distortion data corresponding to the second noise data; and
generate the surface smoke effect map according to the noise distortion data.

23. The apparatus according to claim 22, wherein the first noise data comprises a plurality of first color data, and the first processing module is further adapted to:
extract first color data from the first noise data; and
determine the first distortion texture data according to the extracted first color data.

24. The apparatus according to claim 23, wherein the first processing module is further adapted to: extract the first color data from the first noise data according to a time parameter.

25. The apparatus according to any one of claims 22-24, wherein the first processing module is further adapted to:
determine, using the first distortion texture data, a first distortion offset corresponding to the pixel;
determine, according to the first distortion offset and the pixel, a pixel corresponding to the first distortion offset; and
assign the color component value of the pixel to the pixel corresponding to the first distortion offset.

26. The apparatus according to claim 25, wherein the first processing module is further adapted to: determine the first distortion offset corresponding to the pixel using the first distortion texture data and a preset distortion coefficient.

27. The apparatus according to claim 22, wherein the second generation module is further adapted to: determine a base effect image according to the image distortion data, and take the base effect image as the distortion effect image.

28. The apparatus according to claim 22, wherein the second processing module is further adapted to: perform semi-transparency processing according to a preset function and/or a preset added color value together with the noise distortion data, to generate the surface smoke effect map.

29. The apparatus according to claim 22, wherein the second processing module is further adapted to:
obtain a candidate offset object according to the second distortion offset and the second color data; and
judge whether the candidate offset object exceeds a preset object range; if so, compute the offset object corresponding to the second distortion offset according to a preset algorithm and the candidate offset object; otherwise, take the candidate offset object as the offset object corresponding to the second distortion offset.

30. The apparatus according to claim 22, wherein the first noise data is discrete color noise data.

31. The apparatus according to claim 22, wherein the second noise data is continuous black-and-white noise data.

32. The apparatus according to claim 22, wherein the apparatus further comprises:
a segmentation module, adapted to perform scene segmentation processing on the image to be processed to obtain a scene segmentation result corresponding to the image to be processed, wherein the image to be processed contains a specific object;
a determination module, adapted to determine contour information of the specific object according to the scene segmentation result corresponding to the image to be processed; and
a third generation module, adapted to obtain a local distortion effect image according to the contour information of the specific object, the image to be processed and the distortion effect image.

33. The apparatus according to claim 32, wherein the third generation module is further adapted to: extract a partial image from the distortion effect image according to the contour information of the specific object, and perform fusion processing on the image to be processed and the partial image to obtain the local distortion effect image.

34. The apparatus according to claim 22, wherein the acquisition module is further adapted to: acquire, in real time, an image to be processed captured by an image capture device.

35. The apparatus according to claim 22, wherein the apparatus further comprises: a display module, adapted to display the distortion effect image.

36. The apparatus according to claim 35, wherein the display module is further adapted to: display the distortion effect image in real time.

37. The apparatus according to claim 32, wherein the apparatus further comprises: a first saving module, adapted to save the distortion effect image according to a shooting instruction triggered by a user.

38. The apparatus according to claim 32, wherein the apparatus further comprises: a second saving module, adapted to save, according to a recording instruction triggered by a user, a video composed of distortion effect images as frame images.

39. The apparatus according to claim 32, wherein the apparatus further comprises: a display module, adapted to display the local distortion effect image.

40. The apparatus according to claim 39, wherein the display module is further adapted to: display the local distortion effect image in real time.

41. The apparatus according to claim 32, wherein the apparatus further comprises: a first saving module, adapted to save the local distortion effect image according to a shooting instruction triggered by a user.

42. The apparatus according to claim 32, wherein the apparatus further comprises: a second saving module, adapted to save, according to a recording instruction triggered by a user, a video composed of local distortion effect images as frame images.

43. A computing device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another via the communication bus; and the memory is configured to store at least one executable instruction that causes the processor to perform operations corresponding to the image distortion processing method according to any one of claims 1-21.

44. A computer storage medium having stored therein at least one executable instruction that causes a processor to perform operations corresponding to the image distortion processing method according to any one of claims 1-21.
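For illustration only (not part of the claims), the per-pixel noise-driven warp of claims 1, 4 and 5 can be sketched as follows. The function and parameter names (`warp_image`, `distortion`, `t`), the use of NumPy, and the choice of modulo wrapping as the "preset algorithm" for out-of-range offsets (claim 8) are assumptions of this sketch, not part of the disclosed method:

```python
import numpy as np

def warp_image(image, noise, distortion=8.0, t=0):
    """Noise-driven pixel warp in the spirit of claims 1, 4 and 5.

    For each pixel, warp texture data sampled from `noise` (cycled by the
    time parameter `t`, cf. claim 3) yields a distortion offset, scaled by
    the preset distortion coefficient `distortion`; the pixel's color value
    is then assigned to the offset pixel.  Offsets falling outside the image
    are wrapped back into range (one possible handling of claim 8's
    out-of-range check).
    """
    h, w = image.shape[:2]
    warped = np.zeros_like(image)
    tex = np.roll(noise, int(t), axis=0)  # time-varying sampling of the noise
    nh, nw = tex.shape[:2]
    # Map two noise channels to offsets in [-1, 1].
    dx = tex[..., 0].astype(np.float32) / 255.0 * 2.0 - 1.0
    dy = tex[..., 1].astype(np.float32) / 255.0 * 2.0 - 1.0
    for y in range(h):
        for x in range(w):
            tx = x + int(dx[y % nh, x % nw] * distortion)
            ty = y + int(dy[y % nh, x % nw] * distortion)
            warped[ty % h, tx % w] = image[y, x]  # assign color to offset pixel
    return warped
```

In a real-time implementation such as the one the claims target, the same lookup would more plausibly run as a fragment shader sampling a noise texture; the Python loop above only makes the data flow explicit.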
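Likewise for illustration only, the local distortion effect of claims 11-12 amounts to fusing the distortion effect image back into the original only where the segmented object lies. This sketch assumes the segmentation step has already produced a mask (the contour information); `local_distortion` and the alpha-blend fusion are assumptions of this sketch, not the disclosed fusion procedure:

```python
import numpy as np

def local_distortion(image, distorted, mask):
    """Local distortion effect in the spirit of claims 11-12 (sketch).

    `mask` stands in for the contour information obtained from scene
    segmentation: nonzero where the specific object lies.  The partial image
    is taken from the distortion effect image inside the mask and fused with
    the original image to be processed outside it.
    """
    alpha = (mask.astype(np.float32) / 255.0)[..., None]  # H x W -> H x W x 1
    fused = (image.astype(np.float32) * (1.0 - alpha)
             + distorted.astype(np.float32) * alpha)
    return fused.astype(image.dtype)
```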
CN201711002711.2A 2017-10-24 2017-10-24 Image warping processing method, device, computing device and computer storage medium Expired - Fee Related CN107564085B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711002711.2A CN107564085B (en) 2017-10-24 2017-10-24 Image warping processing method, device, computing device and computer storage medium


Publications (2)

Publication Number Publication Date
CN107564085A CN107564085A (en) 2018-01-09
CN107564085B true CN107564085B (en) 2021-05-07

Family

ID=60987379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711002711.2A Expired - Fee Related CN107564085B (en) 2017-10-24 2017-10-24 Image warping processing method, device, computing device and computer storage medium

Country Status (1)

Country Link
CN (1) CN107564085B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108900903B (en) * 2018-07-27 2021-04-27 北京市商汤科技开发有限公司 Video processing method and device, electronic equipment and storage medium
CN111097169B (en) * 2019-12-25 2023-08-29 上海米哈游天命科技有限公司 Game image processing method, device, equipment and storage medium
CN112669429A (en) * 2021-01-07 2021-04-16 稿定(厦门)科技有限公司 Image distortion rendering method and device
CN113181639B (en) * 2021-04-28 2024-06-04 网易(杭州)网络有限公司 Graphic processing method and device in game

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101197678A (en) * 2007-12-27 2008-06-11 腾讯科技(深圳)有限公司 Picture identifying code generation method and generation device
US9342858B2 (en) * 2012-05-31 2016-05-17 Apple Inc. Systems and methods for statistics collection using clipped pixel tracking
CN105631924A (en) * 2015-12-28 2016-06-01 北京像素软件科技股份有限公司 Method for implementing distortion effect in scene
US20170046833A1 (en) * 2015-08-10 2017-02-16 The Board Of Trustees Of The Leland Stanford Junior University 3D Reconstruction and Registration of Endoscopic Data


Non-Patent Citations (1)

Title
"Design and Implementation of Image Special Effects Based on Android"; Guan Sheng (管胜); China Master's Theses Full-text Database, Information Science and Technology; 15 June 2012 (No. 6); pp. 27-31 of the text *


Similar Documents

Publication Publication Date Title
CN107993216B (en) Image fusion method and equipment, storage medium and terminal thereof
WO2016155377A1 (en) Picture display method and device
CN107507155B (en) Real-time processing method, device and computing device for edge optimization of video segmentation results
CN107808373A (en) Sample image synthetic method, device and computing device based on posture
CN107564085B (en) Image warping processing method, device, computing device and computer storage medium
CN105049718A (en) Image processing method and terminal
CN112541867B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN105243371A (en) Human face beauty degree detection method and system and shooting terminal
CN107665482B (en) Real-time processing method, device and computing device of video data for realizing double exposure
CN107172354B (en) Video processing method and device, electronic equipment and storage medium
WO2014169579A1 (en) Color enhancement method and device
CN113723385B (en) Video processing method and device, neural network training method and device
CN106971165A (en) The implementation method and device of a kind of filter
CN111882627A (en) Image processing method, video processing method, device, equipment and storage medium
CN108111911B (en) Video data real-time processing method and device based on self-adaptive tracking frame segmentation
CN107547803B (en) Video segmentation result edge optimization processing method and device and computing equipment
WO2017173578A1 (en) Image enhancement method and device
CN107610149B (en) Image segmentation result edge optimization processing method and device and computing equipment
CN107959798B (en) Video data real-time processing method and device and computing equipment
CN115239861A (en) Face data enhancement method and device, computer equipment and storage medium
CN107682731A (en) Video data distortion processing method, device, computing device and storage medium
TWI711004B (en) Picture processing method and device
CN107743263B (en) Video data real-time processing method and device, and computing device
CN107705279B (en) Image data real-time processing method and device for realizing double exposure, and computing device
CN107770606A (en) Video data distortion processing method, device, computing device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee