CN107682731A - Video data distortion processing method, device, computing device and storage medium - Google Patents

Video data distortion processing method, device, computing device and storage medium

Info

Publication number
CN107682731A
CN107682731A (Application CN201711002704.2A)
Authority
CN
China
Prior art keywords
data
frame image to be processed
video data
image
Prior art date
Legal status
Pending
Application number
CN201711002704.2A
Other languages
Chinese (zh)
Inventor
眭帆
眭一帆
Current Assignee
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd
Priority to CN201711002704.2A
Publication of CN107682731A
Legal status: Pending


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312: Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4318: Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs

Abstract

The invention discloses a video data distortion processing method and apparatus, a computing device and a computer storage medium. The video data distortion processing method includes: obtaining a frame image to be processed in video data and first noise data; for each pixel in the frame image to be processed, determining first distortion texture data according to the first noise data, and processing the color component values of the pixel using the first distortion texture data; obtaining image distortion data corresponding to the frame image to be processed; obtaining a distortion effect image according to the image distortion data; obtaining a processed frame image according to the distortion effect image; and overwriting the frame image to be processed with the processed frame image to obtain the processed video data. The invention adopts a deep learning method to complete scene segmentation processing with high efficiency and high accuracy, and processes the color component values of the pixels in video frame images using noise data, so that video data with a distortion effect can be obtained conveniently.

Description

Video data distortion processing method, device, computing device and storage medium
Technical field
The present invention relates to the field of image processing, and in particular to a video data distortion processing method and apparatus, a computing device and a computer storage medium.
Background technology
With the development of science and technology, image capture devices keep improving: captured images are clearer, and their resolution and display effect have improved greatly. However, a recorded video by itself is only dull raw material and may not satisfy the user's needs. Users wish to process videos in a personalized way, for example to process the content of the video images into the distortion effect of looking through rising steam. In the prior art, the user has to edit the recorded video manually after it has been recorded. Such processing requires the user to have relatively advanced image processing skills, costs the user a lot of time, and is cumbersome and technically complex. Moreover, the prior art mostly processes video data with functions such as sine or cosine to obtain video data with a distortion effect, but the distortion effect obtained in this way is poor, stiff and not natural enough.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a video data distortion processing method and apparatus, a computing device and a computer storage medium which overcome the above problems or at least partially solve them.
According to one aspect of the present invention, a video data distortion processing method is provided, the method comprising:
obtaining a frame image to be processed in video data and first noise data;
for each pixel in the frame image to be processed, determining first distortion texture data according to the first noise data, and processing the color component values of the pixel using the first distortion texture data;
obtaining image distortion data corresponding to the frame image to be processed;
obtaining a distortion effect image according to the image distortion data;
obtaining a processed frame image according to the distortion effect image;
overwriting the frame image to be processed with the processed frame image to obtain the processed video data.
Further, obtaining a frame image to be processed in video data further comprises: obtaining a frame image to be processed in local video data and/or network video data.
Further, obtaining a frame image to be processed in video data further comprises: obtaining a frame image to be processed in video data synthesized from a plurality of local pictures and/or a plurality of network pictures.
Further, obtaining a frame image to be processed in local video data and/or network video data further comprises: obtaining a frame image to be processed in local video data and/or network video data within a time period specified by the user.
Further, obtaining a frame image to be processed in video data synthesized from a plurality of local pictures and/or a plurality of network pictures further comprises: obtaining a frame image to be processed in video data synthesized from a plurality of local pictures and/or a plurality of network pictures within a time period specified by the user.
Further, the first noise data include a plurality of first color data;
determining the first distortion texture data according to the first noise data further comprises:
extracting first color data from the first noise data;
determining the first distortion texture data according to the extracted first color data.
Further, extracting first color data from the first noise data further comprises: extracting first color data from the first noise data according to a time parameter.
Further, processing the color component values of the pixel using the first distortion texture data further comprises:
determining a first distortion offset corresponding to the pixel using the first distortion texture data;
determining a pixel corresponding to the first distortion offset according to the first distortion offset and the pixel;
assigning the color component values of the pixel to the pixel corresponding to the first distortion offset.
Further, determining a first distortion offset corresponding to the pixel using the first distortion texture data further comprises: determining the first distortion offset corresponding to the pixel using the first distortion texture data and a preset distortion degree coefficient.
Further, obtaining a distortion effect image according to the image distortion data further comprises: determining a base effect image according to the image distortion data, and determining the base effect image as the distortion effect image.
Further, after the first noise data are obtained, the method also includes:
obtaining second noise data;
processing the second noise data using the first noise data to generate a top-layer smoke effect map;
obtaining a distortion effect image according to the image distortion data is then specifically: determining a base effect image according to the image distortion data, and adding the top-layer smoke effect map onto the base effect image to obtain the distortion effect image.
Further, the second noise data include a plurality of second color data;
processing the second noise data using the first noise data to generate the top-layer smoke effect map further comprises:
for each second color data in the second noise data, determining second distortion texture data according to the first noise data; determining a second distortion offset corresponding to the second color data using the second distortion texture data; determining an offset object corresponding to the second distortion offset according to the second distortion offset and the second color data; and assigning the second color data to the offset object corresponding to the second distortion offset;
obtaining noise distortion data corresponding to the second noise data;
generating the top-layer smoke effect map according to the noise distortion data.
Further, generating the top-layer smoke effect map according to the noise distortion data further comprises: performing translucency processing according to a preset function and/or a preset added color value and the noise distortion data to generate the top-layer smoke effect map.
Further, determining an offset object corresponding to the second distortion offset according to the second distortion offset and the second color data further comprises:
obtaining an offset object to be determined according to the second distortion offset and the second color data;
judging whether the offset object to be determined exceeds a preset object range; if so, calculating the offset object corresponding to the second distortion offset according to a preset algorithm and the offset object to be determined; if not, determining the offset object to be determined as the offset object corresponding to the second distortion offset.
Further, the first noise data are discrete color noise data.
Further, the second noise data are continuous black-and-white noise data.
Further, obtaining a processed frame image according to the distortion effect image further comprises: determining the distortion effect image as the processed frame image.
Further, the method also includes:
performing scene segmentation processing on the frame image to be processed to obtain a scene segmentation result corresponding to the frame image to be processed, wherein the frame image to be processed contains a specific object;
determining contour information of the specific object according to the scene segmentation result corresponding to the frame image to be processed;
obtaining a processed frame image according to the distortion effect image is then specifically: obtaining a local distortion effect image according to the contour information of the specific object, the frame image to be processed and the distortion effect image, and determining the local distortion effect image as the processed frame image.
Further, obtaining a local distortion effect image according to the contour information of the specific object, the frame image to be processed and the distortion effect image further comprises:
extracting a local image from the distortion effect image according to the contour information of the specific object;
performing fusion processing on the frame image to be processed and the local image to obtain the local distortion effect image.
Further, the method also includes: uploading the processed video data to one or more cloud video platform servers, so that the cloud video platform servers display the video data on the cloud video platform.
According to another aspect of the present invention, a video data distortion processing apparatus is provided, the apparatus comprising:
an acquisition module adapted to obtain a frame image to be processed in video data and first noise data;
a first processing module adapted to, for each pixel in the frame image to be processed, determine first distortion texture data according to the first noise data and process the color component values of the pixel using the first distortion texture data;
a first generation module adapted to obtain image distortion data corresponding to the frame image to be processed;
a second generation module adapted to obtain a distortion effect image according to the image distortion data;
a third generation module adapted to obtain a processed frame image according to the distortion effect image;
an overlay module adapted to overwrite the frame image to be processed with the processed frame image to obtain the processed video data.
Further, the acquisition module is further adapted to: obtain a frame image to be processed in local video data and/or network video data.
Further, the acquisition module is further adapted to: obtain a frame image to be processed in video data synthesized from a plurality of local pictures and/or a plurality of network pictures.
Further, the acquisition module is further adapted to: obtain a frame image to be processed in local video data and/or network video data within a time period specified by the user.
Further, the acquisition module is further adapted to: obtain a frame image to be processed in video data synthesized from a plurality of local pictures and/or a plurality of network pictures within a time period specified by the user.
Further, the first noise data include a plurality of first color data;
the first processing module is further adapted to:
extract first color data from the first noise data;
determine the first distortion texture data according to the extracted first color data.
Further, the first processing module is further adapted to: extract first color data from the first noise data according to a time parameter.
Further, the first processing module is further adapted to:
determine a first distortion offset corresponding to the pixel using the first distortion texture data;
determine a pixel corresponding to the first distortion offset according to the first distortion offset and the pixel;
assign the color component values of the pixel to the pixel corresponding to the first distortion offset.
Further, the first processing module is further adapted to: determine the first distortion offset corresponding to the pixel using the first distortion texture data and a preset distortion degree coefficient.
Further, the second generation module is further adapted to: determine a base effect image according to the image distortion data, and determine the base effect image as the distortion effect image.
Further, the acquisition module is further adapted to: obtain second noise data;
the apparatus also includes: a second processing module adapted to process the second noise data using the first noise data to generate a top-layer smoke effect map;
the second generation module is further adapted to: determine a base effect image according to the image distortion data, and add the top-layer smoke effect map onto the base effect image to obtain the distortion effect image.
Further, the second noise data include a plurality of second color data;
the second processing module is further adapted to:
for each second color data in the second noise data, determine second distortion texture data according to the first noise data; determine a second distortion offset corresponding to the second color data using the second distortion texture data; determine an offset object corresponding to the second distortion offset according to the second distortion offset and the second color data; and assign the second color data to the offset object corresponding to the second distortion offset;
obtain noise distortion data corresponding to the second noise data;
generate the top-layer smoke effect map according to the noise distortion data.
Further, the second processing module is further adapted to: perform translucency processing according to a preset function and/or a preset added color value and the noise distortion data to generate the top-layer smoke effect map.
Further, the second processing module is further adapted to:
obtain an offset object to be determined according to the second distortion offset and the second color data;
judge whether the offset object to be determined exceeds a preset object range; if so, calculate the offset object corresponding to the second distortion offset according to a preset algorithm and the offset object to be determined; if not, determine the offset object to be determined as the offset object corresponding to the second distortion offset.
Further, the first noise data are discrete color noise data.
Further, the second noise data are continuous black-and-white noise data.
Further, the third generation module is further adapted to: determine the distortion effect image as the processed frame image.
Further, the apparatus also includes:
a segmentation module adapted to perform scene segmentation processing on the frame image to be processed to obtain a scene segmentation result corresponding to the frame image to be processed, wherein the frame image to be processed contains a specific object;
a determination module adapted to determine contour information of the specific object according to the scene segmentation result corresponding to the frame image to be processed;
the third generation module is further adapted to: obtain a local distortion effect image according to the contour information of the specific object, the frame image to be processed and the distortion effect image, and determine the local distortion effect image as the processed frame image.
Further, the third generation module is further adapted to: extract a local image from the distortion effect image according to the contour information of the specific object, and perform fusion processing on the frame image to be processed and the local image to obtain the local distortion effect image.
Further, the apparatus also includes: an uploading module adapted to upload the processed video data to one or more cloud video platform servers, so that the cloud video platform servers display the video data on the cloud video platform.
According to yet another aspect of the present invention, a computing device is provided, comprising: a processor, a memory, a communication interface and a communication bus, the processor, the memory and the communication interface communicating with one another through the communication bus;
the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the above video data distortion processing method.
According to a further aspect of the present invention, a computer storage medium is provided, in which at least one executable instruction is stored, the executable instruction causing a processor to perform the operations corresponding to the above video data distortion processing method.
According to the technical solution provided by the invention, a frame image to be processed in video data and first noise data are obtained; for each pixel in the frame image to be processed, first distortion texture data are determined according to the first noise data and the color component values of the pixel are processed using the first distortion texture data; image distortion data corresponding to the frame image to be processed are obtained; a distortion effect image is obtained according to the image distortion data; a processed frame image is then obtained according to the distortion effect image; and the frame image to be processed is overwritten with the processed frame image to obtain the processed video data. The invention adopts a deep learning method to complete scene segmentation processing with high efficiency and high accuracy. By processing the color component values of the pixels in video frame images with noise data, video data with a distortion effect can be obtained conveniently, without the user processing the video manually; the video is processed automatically, the processing efficiency of video data is improved, the distortion processing mode of video data is optimized, and the distortion effect of video data is improved.
The above description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented according to the content of the specification, and in order that the above and other objects, features and advantages of the present invention may become more apparent, specific embodiments of the present invention are set forth below.
Brief description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art by reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a flow diagram of a video data distortion processing method according to one embodiment of the present invention;
Fig. 2 shows a flow diagram of a video data distortion processing method according to another embodiment of the present invention;
Fig. 3 shows a structural block diagram of a video data distortion processing apparatus according to one embodiment of the present invention;
Fig. 4 shows a structural block diagram of a video data distortion processing apparatus according to another embodiment of the present invention;
Fig. 5 shows a schematic structural diagram of a computing device according to an embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here; rather, these embodiments are provided so that the present disclosure can be understood more thoroughly and its scope can be fully conveyed to those skilled in the art.
Fig. 1 shows a flow diagram of a video data distortion processing method according to one embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step S100, obtaining a frame image to be processed in video data and first noise data.
The frame image to be processed may be a frame image in the user's local video data, or a frame image in network video data; alternatively, it may be a frame image in video data synthesized from a plurality of local pictures, in video data synthesized from a plurality of network pictures, or in video data synthesized from a plurality of local pictures and a plurality of network pictures. When the user wants to process video data into video data with a distortion effect, for example to give the content of the video frame images the distortion effect of looking through rising steam, the frame image to be processed in the video data and the first noise data can be obtained in step S100.
The first noise data include a plurality of first color data. Specifically, the first noise data may be discrete color noise data; using discrete first noise data helps to improve the image distortion effect, so that the processed image has a natural and good distortion effect.
Step S101, for each pixel in the frame image to be processed, determining first distortion texture data according to the first noise data.
In step S101, for each pixel in the frame image to be processed, the first distortion texture data corresponding to that pixel need to be determined according to the first noise data. Specifically, the pixels in the frame image to be processed correspond to the first color data in the first noise data, so for each pixel in the frame image to be processed, the first distortion texture data corresponding to that pixel are determined according to the first color data corresponding to that pixel in the first noise data. In this way different first distortion texture data are determined for the pixels in the frame image to be processed, which, compared with using the same distortion texture data for all pixels, gives the processed image a natural and good distortion effect.
Step S102, processing the color component values of the pixel using the first distortion texture data.
For each pixel in the frame image to be processed, the color component values of the pixel are assigned or otherwise processed using the first distortion texture data corresponding to that pixel. When the frame image to be processed is a color image, taking the RGB color mode as an example, the color component values of a pixel include the color component values of the red, green and blue channels. In a specific application, a person skilled in the art can choose to process the color component values of suitable channels among the three channels, or process the color component values of all channels; this is not limited here. For example, only the color component values of the red channel and the green channel of the pixel may be processed, as in the sketch below.
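Purely as an illustration of steps S101 and S102, the following Python/NumPy sketch derives distortion texture data from a color noise image and reassigns only the red and green component values of each pixel. The mapping of noise values to the range [-1, 1], the rounding of offsets and the choice of channels are assumptions made for this example, not requirements of the method.

```python
import numpy as np

def distort_color_components(frame, noise, coeff=6.0):
    """Steps S101-S102: determine first distortion texture data per pixel from
    the first noise data, turn them into a pixel offset, and assign the red and
    green component values of each pixel to the offset target pixel."""
    h, w, _ = frame.shape
    # First distortion texture data: noise values rescaled to [-1, 1] (assumed mapping).
    tex = noise[:h, :w, :2].astype(np.float32) / 127.5 - 1.0
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # First distortion offset = texture data * preset distortion degree coefficient.
    ty = np.clip(ys + np.round(tex[..., 0] * coeff).astype(int), 0, h - 1)
    tx = np.clip(xs + np.round(tex[..., 1] * coeff).astype(int), 0, w - 1)
    out = frame.copy()
    out[ty, tx, 0] = frame[ys, xs, 0]   # red component values reassigned
    out[ty, tx, 1] = frame[ys, xs, 1]   # green component values reassigned
    return out                          # image distortion data / base effect image

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.integers(0, 256, (90, 120, 3), dtype=np.uint8)
    noise = rng.integers(0, 256, (90, 120, 3), dtype=np.uint8)  # discrete color noise stand-in
    print(distort_color_components(frame, noise).shape)
```

Clamping the target coordinates keeps every assignment inside the image; step S209 below discusses an alternative treatment of out-of-range offsets for the noise map.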
Step S103, obtaining image distortion data corresponding to the frame image to be processed.
The data obtained after the color component values of every pixel in the frame image to be processed have been processed are the image distortion data corresponding to the frame image to be processed.
Step S104, obtaining a distortion effect image according to the image distortion data.
After the image distortion data have been obtained in step S103, a distortion effect image is obtained according to them in step S104. For example, if the user wants to process the whole content of the frame image to be processed into the distortion effect of looking through rising steam, a base effect image can be determined according to the image distortion data; the base effect image is an image in which the whole content of the frame image to be processed has the distortion effect, and it is then determined as the distortion effect image.
Step S105, obtaining a processed frame image according to the distortion effect image.
After the distortion effect image has been obtained, it may be determined as the processed frame image, or it may be processed further and the resulting image determined as the processed frame image.
Step S106, overwriting the frame image to be processed with the processed frame image to obtain the processed video data.
The frame image to be processed is directly overwritten with the processed frame image, so the processed video data are obtained directly; meanwhile, the recording user can also see the processed frame image immediately.
According to the video data distortion processing method provided by this embodiment, a frame image to be processed in video data and first noise data are obtained; for each pixel in the frame image to be processed, first distortion texture data are determined according to the first noise data and the color component values of the pixel are processed using the first distortion texture data; image distortion data corresponding to the frame image to be processed are obtained; a distortion effect image is obtained according to the image distortion data; a processed frame image is obtained according to the distortion effect image; and the frame image to be processed is overwritten with the processed frame image to obtain the processed video data. The invention adopts a deep learning method to complete scene segmentation processing with high efficiency and high accuracy, and by processing the color component values of the pixels in video frame images with noise data, video data with a distortion effect can be obtained conveniently. The user does not need to process the video manually; the video is processed automatically, the processing efficiency of video data is improved, the distortion processing mode of video data is optimized, and the distortion effect of video data is improved.
Fig. 2 shows a flow diagram of a video data distortion processing method according to another embodiment of the present invention. As shown in Fig. 2, the method comprises the following steps:
Step S200, obtaining a frame image to be processed in video data, first noise data and second noise data.
The frame image to be processed may be a frame image in the user's local video data, or a frame image in network video data; alternatively, it may be a frame image in video data synthesized from a plurality of local pictures, in video data synthesized from a plurality of network pictures, or in video data synthesized from a plurality of local pictures and a plurality of network pictures. In addition, only the frame images in the video data within a time period specified by the user may be obtained: specifically, a frame image to be processed in local video data and/or network video data within the time period specified by the user can be obtained, or a frame image to be processed in video data synthesized from a plurality of local pictures and/or a plurality of network pictures within the time period specified by the user can be obtained.
The first noise data include a plurality of first color data, and the second noise data include a plurality of second color data. Specifically, the first noise data are discrete color noise data and the second noise data are continuous black-and-white noise data; the second noise data are processed using the first noise data to generate a top-layer smoke effect map. Using discrete first noise data helps to improve the image distortion effect, so that the processed image has a natural and good distortion effect; using continuous second noise data, a top-layer smoke effect map with a continuous smoke effect can be generated, so that the processed image has a natural curling-smoke effect.
Step S201, for each pixel in the frame image to be processed, extracting first color data from the first noise data, and determining first distortion texture data according to the extracted first color data.
The pixels in the frame image to be processed correspond to the first color data in the first noise data, so for each pixel in the frame image to be processed, the first color data corresponding to that pixel are extracted from the first noise data. In this way different first distortion texture data are determined for the pixels in the frame image to be processed, which, compared with using the same distortion texture data for all pixels, helps to obtain a natural and good distortion effect.
In order to further improve the image distortion effect, the first color data can be extracted from the first noise data according to a time parameter. Specifically, for the same pixel in the frame image to be processed, when the time parameter changes, different first color data are extracted from the first noise data as the first color data corresponding to that pixel.
In practical applications, the first noise data may be a color noise map, in which case the color component values corresponding to each pixel of the color noise map are one set of first color data. For convenience of description, assume the pixels in the frame image to be processed are A1, A2, A3, etc., and the pixels in the color noise map are B1, B2, B3, etc. For pixel A1 in the frame image to be processed, when the time parameter is time 1, the color component values corresponding to pixel B1 are extracted from the color noise map as the first color data corresponding to pixel A1; when the time parameter is time 2, the color component values corresponding to pixel B3 are extracted from the color noise map as the first color data corresponding to pixel A1. A sketch of such time-dependent sampling follows.
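The time-dependent lookup described above can be sketched as follows. Scrolling the noise map by a fixed number of texels per time unit is only one assumed rule for making the extracted first color data change with the time parameter.

```python
import numpy as np

def first_color_data(noise, y, x, t, speed=3):
    """Extract the first color data for pixel (y, x) at time parameter t:
    as t changes, a different texel of the color noise map is sampled."""
    h, w, _ = noise.shape
    # Assumed rule: scroll the noise map by `speed` texels per time unit.
    ny = (y + speed * t) % h
    nx = (x + 2 * speed * t) % w
    return noise[ny, nx]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    noise = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    # Same pixel A1 = (10, 20), two time parameters -> two different first color data.
    print(first_color_data(noise, 10, 20, t=1))
    print(first_color_data(noise, 10, 20, t=2))
```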
Specifically, for each pixel in the frame image to be processed, the first distortion texture data can be calculated according to the first color data corresponding to that pixel and a preset first calculation function, where a person skilled in the art can set the preset first calculation function according to actual needs; this is not limited here.
Step S202, determining a first distortion offset corresponding to the pixel using the first distortion texture data.
Specifically, the first distortion offset corresponding to the pixel is determined using the first distortion texture data and a preset distortion degree coefficient. By adjusting the distortion degree coefficient, a person skilled in the art can adjust the degree of image distortion.
Step S203, determining a pixel corresponding to the first distortion offset according to the first distortion offset and the pixel.
After the first distortion offset corresponding to the pixel has been determined, the pixel corresponding to the first distortion offset can be determined according to the first distortion offset and the pixel.
Step S204, assigning the color component values of the pixel to the pixel corresponding to the first distortion offset.
After the pixel corresponding to the first distortion offset has been determined, the color component values of the pixel are assigned to it. For example, for pixel A1 in the frame image to be processed, if the pixel corresponding to the first distortion offset is pixel A2, the color component values of pixel A1 are assigned to pixel A2, so that pixel A2 has the color component values of pixel A1, thereby achieving the image distortion effect.
Assuming the color component values of the pixels in the frame image to be processed include the color component values of the red, green and blue channels, in a specific application a person skilled in the art can assign the color component values of suitable channels among the three channels to the pixel corresponding to the first distortion offset, or assign the color component values of all channels; this is not limited here. For example, only the color component values of the red channel and the green channel of the pixel may be assigned correspondingly to the pixel corresponding to the first distortion offset.
Step S205, obtaining image distortion data corresponding to the frame image to be processed.
The data obtained after the color component values of every pixel in the frame image to be processed have been assigned are the image distortion data corresponding to the frame image to be processed.
Step S206, determining a base effect image according to the image distortion data.
The base effect image is an image in which the whole content of the frame image to be processed has the distortion effect.
Step S207, for each second color data in the second noise data, determining second distortion texture data according to the first noise data.
The second color data in the second noise data correspond to the first color data in the first noise data, so for each second color data in the second noise data, the first color data corresponding to that second color data are extracted from the first noise data, and the second distortion texture data are calculated according to the extracted first color data and a preset second calculation function. In this way different second distortion texture data are determined for the second color data in the second noise data, which, compared with using the same distortion texture data for all second color data, helps to obtain a natural and good curling-smoke effect.
A person skilled in the art can set the preset second calculation function according to actual needs; the preset second calculation function may be the same as or different from the preset first calculation function, which is not limited here.
Step S208, determining a second distortion offset corresponding to the second color data using the second distortion texture data.
Specifically, the second distortion offset corresponding to the second color data is determined using the second distortion texture data and a preset smoke distortion degree coefficient. By adjusting the smoke distortion degree coefficient, a person skilled in the art can adjust the degree of smoke distortion.
Step S209, determining an offset object corresponding to the second distortion offset according to the second distortion offset and the second color data.
Specifically, an offset object to be determined is obtained according to the second distortion offset and the second color data, and it is then judged whether the offset object to be determined exceeds a preset object range; if so, the offset object corresponding to the second distortion offset is calculated according to a preset algorithm and the offset object to be determined; if not, the offset object to be determined is determined as the offset object corresponding to the second distortion offset.
Since the offset object to be determined obtained according to the second distortion offset and the second color data may exceed the preset object range, it is necessary to judge whether it does. If it exceeds the preset object range, the offset object corresponding to the second distortion offset is calculated according to the preset algorithm and the offset object to be determined, so that the offset object is adjusted; this helps to subsequently generate a top-layer smoke effect map with a continuous smoke effect. If it does not exceed the preset object range, the offset object to be determined can be directly determined as the offset object corresponding to the second distortion offset.
Step S210, assigning the second color data to the offset object corresponding to the second distortion offset.
After the offset object corresponding to the second distortion offset has been determined, the second color data are assigned to it.
In practical applications, the first noise data may be a color noise map and the second noise data a black-and-white noise map, in which case the color component values corresponding to each pixel of the color noise map are one set of first color data and the color component values corresponding to each pixel of the black-and-white noise map are one set of second color data. For convenience of description, assume the pixels in the color noise map are B1, B2, B3, etc., and the pixels in the black-and-white noise map are C1, C2, C3, etc. Assume some second color data are the color component values corresponding to pixel C1 in the black-and-white noise map, and that for this second color data the offset object corresponding to the second distortion offset is pixel C3; then the color component values of pixel C1 are assigned to pixel C3, so that pixel C3 has the color component values of pixel C1, thereby achieving the curling-smoke effect. A sketch of this noise-warping pass, including the range check of step S209, follows.
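A compact sketch of steps S207 to S210 follows. It warps a black-and-white noise map using offsets derived from the color noise map, and it handles the range check of step S209 by wrapping out-of-range offset objects back into the map; wrapping is only one possible choice for the preset algorithm, chosen here because it keeps the warped noise continuous.

```python
import numpy as np

def warp_second_noise(bw_noise, color_noise, smoke_coeff=10.0):
    """Steps S207-S210: for each second color data (texel of the black-and-white
    noise map), derive second distortion texture data from the first noise data,
    compute a second distortion offset, and assign the texel to its offset object."""
    h, w = bw_noise.shape
    # Second distortion texture data from the color noise map (assumed mapping to [-1, 1]).
    tex = color_noise[:h, :w, :2].astype(np.float32) / 127.5 - 1.0
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    ty = ys + np.round(tex[..., 0] * smoke_coeff).astype(int)
    tx = xs + np.round(tex[..., 1] * smoke_coeff).astype(int)
    # Step S209 range check: offset objects outside the preset object range are
    # recomputed here by wrapping them around (one possible preset algorithm).
    ty, tx = ty % h, tx % w
    warped = np.zeros_like(bw_noise)
    warped[ty, tx] = bw_noise[ys, xs]   # assign second color data to its offset object
    return warped                        # noise distortion data (step S211)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    bw = rng.integers(0, 256, (64, 64), dtype=np.uint8)        # black-and-white noise stand-in
    color = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # discrete color noise stand-in
    print(warp_second_noise(bw, color).shape)
```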
Step S211, obtaining noise distortion data corresponding to the second noise data.
The data obtained after every second color data in the second noise data have been assigned are the noise distortion data corresponding to the second noise data.
Step S212, generating the top-layer smoke effect map according to the noise distortion data.
After the noise distortion data have been obtained, the top-layer smoke effect map is generated from them. Specifically, translucency processing can be performed according to a preset function and/or a preset added color value and the noise distortion data to generate the top-layer smoke effect map. A person skilled in the art can set the preset function and the preset added color value according to actual needs; this is not limited here. For example, the preset function may be a sine function or a cosine function, and the preset added color value may be the color value corresponding to golden yellow or the color value corresponding to red. Since the second noise data are black-and-white noise data, a top-layer smoke effect map with a colored smoke effect can be generated according to the preset added color value and the noise distortion data; for example, when the preset added color value is the color value corresponding to golden yellow, a top-layer smoke effect map with a golden-yellow smoke effect is generated. A sketch of this tinting and translucency step follows.
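The translucency processing of step S212 can be sketched as follows: the noise distortion data are tinted with a preset added color value (golden yellow here) and given a sine-modulated, partially transparent alpha channel. The exact formula and the RGBA layout are assumptions of this example.

```python
import numpy as np

def make_smoke_overlay(noise_warp, add_color=(255, 200, 60), alpha_scale=0.5):
    """Step S212: tint the noise distortion data with a preset added color value
    and give them a sine-modulated, translucent alpha to form the top-layer
    smoke effect map (returned as an RGBA float image in [0, 1])."""
    h, w = noise_warp.shape
    intensity = noise_warp.astype(np.float32) / 255.0
    # Preset function (sine here) applied to the noise distortion data.
    alpha = alpha_scale * np.abs(np.sin(np.pi * intensity))
    overlay = np.zeros((h, w, 4), dtype=np.float32)
    for c in range(3):
        overlay[..., c] = intensity * (add_color[c] / 255.0)   # golden-yellow tint
    overlay[..., 3] = alpha                                     # translucency
    return overlay

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    warped = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    print(make_smoke_overlay(warped).shape)   # (64, 64, 4)
```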
Step S213, adding the top-layer smoke effect map onto the base effect image to obtain the distortion effect image.
After the base effect image and the top-layer smoke effect map have been obtained, the top-layer smoke effect map is added onto the base effect image to obtain the distortion effect image. The distortion effect image not only has the distortion effect, improving the image distortion effect, but also has the smoke effect, which greatly enriches the image effect. A sketch of this overlay step follows.
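Alpha compositing is one straightforward way to add the top-layer smoke effect map onto the base effect image; the sketch below assumes the RGBA overlay produced in the previous sketch and an 8-bit RGB base effect image.

```python
import numpy as np

def add_overlay(base_rgb, overlay_rgba):
    """Step S213: composite the top-layer smoke effect map over the base effect
    image to obtain the distortion effect image (simple alpha blending)."""
    base = base_rgb.astype(np.float32) / 255.0
    alpha = overlay_rgba[..., 3:4]
    blended = overlay_rgba[..., :3] * alpha + base * (1.0 - alpha)
    return (blended * 255.0).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    base = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    overlay = np.concatenate(
        [rng.random((64, 64, 3), dtype=np.float32),
         0.3 * np.ones((64, 64, 1), dtype=np.float32)], axis=-1)
    print(add_overlay(base, overlay).dtype)
```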
Step S214, determining the distortion effect image as the processed frame image.
Step S215, overwriting the frame image to be processed with the processed frame image to obtain the processed video data.
The frame image to be processed is directly overwritten with the processed frame image, so the processed video data are obtained directly; meanwhile, the recording user can also see the currently processed frame image immediately.
Step S216, uploading the processed video data to one or more cloud video platform servers, so that the cloud video platform servers display the video data on the cloud video platform.
The processed video data may be stored locally and only watched by the user, or may be uploaded directly to one or more cloud video platform servers, such as the cloud video platform servers of iQIYI, Youku or Kuai Video, so that the cloud video platform servers display the video data on the cloud video platform.
In addition, in some application scenarios the frame image to be processed contains a specific object, such as a human body, and the user only wants to distort the specific object region or the non-specific object region in the frame image to be processed. In this case the method may also include: performing scene segmentation processing on the frame image to be processed to obtain a scene segmentation result corresponding to the frame image to be processed; and determining the contour information of the specific object according to the scene segmentation result corresponding to the frame image to be processed. Then, in step S214, instead of determining the distortion effect image as the processed frame image, a local distortion effect image is obtained according to the contour information of the specific object, the frame image to be processed and the distortion effect image, and the local distortion effect image is determined as the processed frame image.
When scene segmentation processing is performed on the frame image to be processed, a deep learning method can be used. Deep learning is a method in machine learning based on representation learning of data. An observation (for example, an image) can be represented in many ways, such as a vector of the intensity values of its pixels, or more abstractly as a set of edges, regions of particular shapes, and so on; some specific representations make it easier to learn tasks from examples (for example, face recognition or facial expression recognition). Scene segmentation is performed on the frame image to be processed using a deep learning segmentation method to obtain a scene segmentation result corresponding to the frame image to be processed. Specifically, a scene segmentation network obtained by a deep learning method can be used to perform scene segmentation processing on the frame image to be processed, the scene segmentation result corresponding to the frame image to be processed is obtained, and the contour information of the specific object is then determined according to that result. Assuming the specific object is a human body, the contour information of the human body can be determined according to the scene segmentation result, so as to distinguish which regions of the frame image to be processed are human body and which are not, as sketched below.
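The patent does not name a particular segmentation network. Purely as an illustrative assumption, an off-the-shelf semantic segmentation model such as torchvision's DeepLabV3 could supply the human-body mask from which the contour information is derived; the sketch below reflects that assumption and is not the scene segmentation network of the invention.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

def human_body_mask(frame_rgb_uint8):
    """Run an assumed off-the-shelf scene segmentation network and return a
    boolean mask of the human-body region (Pascal VOC class 15 = person)."""
    model = deeplabv3_resnet50(weights="DEFAULT").eval()
    x = torch.from_numpy(frame_rgb_uint8).permute(2, 0, 1).float() / 255.0
    mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)  # ImageNet normalization
    std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)   # expected by the weights
    x = ((x - mean) / std).unsqueeze(0)
    with torch.no_grad():
        scores = model(x)["out"]               # shape (1, 21, H, W): per-class scores
    return (scores.argmax(1)[0] == 15).cpu().numpy()  # True where "person" wins
```

From this boolean mask the contour information of the human-body region follows directly, for example as the boundary of the mask.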
After the contour information of the specific object has been determined, a local image can be extracted from the distortion effect image according to the contour information of the specific object, and fusion processing is then performed on the frame image to be processed and the local image to obtain the local distortion effect image. Specifically, according to the contour information of the specific object it can be determined which regions in the distortion effect image belong to the specific object region and which belong to the non-specific object region; the non-specific object region can be called the background region. The image of the specific object region or the image of the non-specific object region is then extracted from the distortion effect image as the local image. For example, when the specific object is a human body and the user wants to distort the human body region in the frame image to be processed, the image of the human body region is extracted from the distortion effect image as the local image according to the contour information of the human body, and fusion processing is then performed on the frame image to be processed and the local image to obtain the local distortion effect image, which is an image in which only the human body region has the distortion effect and the background region does not. For another example, when the specific object is a human body and the user wants to distort the background region of the frame image to be processed other than the human body region, the image of the background region is extracted from the distortion effect image as the local image according to the contour information of the human body, and fusion processing is then performed on the frame image to be processed and the local image to obtain the local distortion effect image, which is an image in which only the background region has the distortion effect and the human body region does not. A sketch of this mask-based fusion follows.
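Given such a mask, the fusion step reduces to selecting, per pixel, either the distortion effect image or the original frame image. The sketch below assumes a boolean mask that is True inside the specific object region; passing distort_object=False distorts the background region instead.

```python
import numpy as np

def fuse_local_distortion(frame, distortion_image, object_mask, distort_object=True):
    """Extract a local image from the distortion effect image according to the
    mask of the specific object and fuse it with the frame image to be
    processed, yielding the local distortion effect image."""
    region = object_mask if distort_object else ~object_mask
    fused = frame.copy()
    fused[region] = distortion_image[region]   # only this region gets the distortion effect
    return fused

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    frame = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    distorted = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    mask = np.zeros((64, 64), dtype=bool)
    mask[16:48, 16:48] = True                  # stand-in for the human-body region
    print(fuse_local_distortion(frame, distorted, mask).shape)
```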
According to the video data distortion processing method provided by this embodiment, the color component values of pixels in video frame images are processed using one kind of noise data, so that a base effect image can be obtained conveniently; and another kind of noise data is processed using that noise data, so that a top-layer smoke effect map can also be obtained, giving video data with both a distortion effect and a smoke effect. The user does not need to process the video manually; the video is processed automatically, the processing efficiency of video data is improved, the distortion processing mode of video data is optimized, the distortion effect of video data is effectively improved, and the video data effect is greatly enriched. In addition, video data with a local distortion effect can also be obtained according to the scene segmentation result corresponding to the frame image to be processed. The invention adopts a deep learning method to complete scene segmentation processing with high efficiency and high accuracy, and effectively satisfies the personalized needs of users.
Fig. 3 shows a structural block diagram of a video data distortion processing apparatus according to one embodiment of the present invention. As shown in Fig. 3, the apparatus includes: an acquisition module 301, a first processing module 302, a first generation module 303, a second generation module 304, a third generation module 305 and an overlay module 306.
The acquisition module 301 is adapted to: obtain a frame image to be processed in video data and first noise data.
The frame image to be processed obtained by the acquisition module 301 may be a frame image in the user's local video data, or the acquisition module 301 may obtain a frame image to be processed in network video data. Alternatively, the acquisition module 301 may obtain a frame image to be processed in video data synthesized from a plurality of local pictures, in video data synthesized from a plurality of network pictures, or in video data synthesized from a plurality of local pictures and a plurality of network pictures.
The first noise data include a plurality of first color data. Specifically, the first noise data may be discrete color noise data; using discrete first noise data helps to improve the image distortion effect, so that the processed image has a natural and good distortion effect.
The first processing module 302 is adapted to: for each pixel in the frame image to be processed, determine first distortion texture data according to the first noise data, and process the color component values of the pixel using the first distortion texture data.
The first generation module 303 is adapted to: obtain image distortion data corresponding to the frame image to be processed.
The second generation module 304 is adapted to: obtain a distortion effect image according to the image distortion data.
Optionally, the second generation module 304 is further adapted to: determine a base effect image according to the image distortion data, and determine the base effect image as the distortion effect image.
The third generation module 305 is adapted to: obtain a processed frame image according to the distortion effect image.
After the distortion effect image has been obtained, the third generation module 305 may determine the distortion effect image as the processed frame image, or may process the distortion effect image further and determine the resulting image as the processed frame image.
The overlay module 306 is adapted to: overwrite the frame image to be processed with the processed frame image to obtain the processed video data.
According to the video data distortion processing apparatus provided by this embodiment, the acquisition module obtains a frame image to be processed in video data and first noise data; the first processing module, for each pixel in the frame image to be processed, determines first distortion texture data according to the first noise data and processes the color component values of the pixel using the first distortion texture data; the first generation module obtains image distortion data corresponding to the frame image to be processed; the second generation module obtains a distortion effect image according to the image distortion data; the third generation module obtains a processed frame image according to the distortion effect image; and the overlay module overwrites the frame image to be processed with the processed frame image to obtain the processed video data. The invention adopts a deep learning method to complete scene segmentation processing with high efficiency and high accuracy, and, by processing the color component values of the pixels in video frame images with noise data, video data with a distortion effect can be obtained conveniently. The user does not need to process the video manually; the video is processed automatically, the processing efficiency of video data is improved, the distortion processing mode of video data is optimized, and the distortion effect of video data is improved.
Fig. 4 shows a structural block diagram of a video data distortion processing apparatus according to another embodiment of the present invention. As shown in Fig. 4, the apparatus includes: an acquisition module 401, a first processing module 402, a first generation module 403, a second processing module 404, a second generation module 405, a third generation module 406, an overlay module 407 and an uploading module 408.
The acquisition module 401 is adapted to: obtain a frame image to be processed in video data, first noise data and second noise data.
The frame image to be processed obtained by the acquisition module 401 may be a frame image in the user's local video data, or the acquisition module 401 may obtain a frame image to be processed in network video data. Alternatively, the acquisition module 401 may obtain a frame image to be processed in video data synthesized from a plurality of local pictures, in video data synthesized from a plurality of network pictures, or in video data synthesized from a plurality of local pictures and a plurality of network pictures. In addition, the acquisition module 401 may, according to a time period specified by the user, obtain only the frame images in the video data within that time period: specifically, the acquisition module 401 can obtain a frame image to be processed in local video data and/or network video data within the time period specified by the user, or a frame image to be processed in video data synthesized from a plurality of local pictures and/or a plurality of network pictures within the time period specified by the user.
The first noise data include a plurality of first color data, and the second noise data include a plurality of second color data. Specifically, the first noise data are discrete color noise data and the second noise data are continuous black-and-white noise data; the second noise data are processed using the first noise data to generate a top-layer smoke effect map.
First processing module 402 is suitable to:For each pixel in pending two field picture, from the first noise data Extract the first color data;According to the first color data extracted, the first twisting grain data are determined;Utilize the first distortion Data texturing, the color component value of pixel is handled.
Wherein, first processing module 402 is further adapted for:According to time parameter, is extracted from the first noise data One color data.
The first processing module 402 is further adapted to: determine, using the first distortion texture data, a first distortion offset corresponding to the pixel; determine, according to the first distortion offset and the pixel, the pixel corresponding to the first distortion offset; and assign the color component value of the pixel to the pixel corresponding to the first distortion offset.
The first processing module 402 is further adapted to: determine the first distortion offset corresponding to the pixel using the first distortion texture data and a preset distortion degree coefficient, as illustrated in the sketch below.
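A minimal sketch of the per-pixel processing described above, under the assumptions that the first color data is sampled from the first noise data at the pixel coordinates shifted by the time parameter, that the preset distortion degree coefficient is a single scalar, and that offsets falling outside the image wrap around; none of these details is fixed by the embodiment.

```python
import numpy as np

def distort_frame(frame, first_noise, t, coeff=8.0):
    """Process each pixel's color component value using first distortion texture data
    taken from the first noise data and a preset distortion degree coefficient `coeff`."""
    h, w = frame.shape[:2]
    nh, nw = first_noise.shape[:2]
    out = frame.copy()
    for y in range(h):
        for x in range(w):
            # Extract first color data; the time parameter t shifts the sample position.
            nx, ny = first_noise[(y + t) % nh, (x + t) % nw, :2] / 255.0
            # First distortion texture data -> first distortion offset.
            dx = int((nx - 0.5) * coeff)
            dy = int((ny - 0.5) * coeff)
            # Pixel corresponding to the first distortion offset (wrapping at the border is assumed).
            ty, tx = (y + dy) % h, (x + dx) % w
            # Assign the pixel's color component value to that pixel.
            out[ty, tx] = frame[y, x]
    return out
```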
The first generation module 403 is adapted to: obtain image distortion data corresponding to the to-be-processed frame image.
The second processing module 404 is adapted to: process the second noise data using the first noise data to generate a top-layer haze effect map.
Specifically, the second processing module 404 is further adapted to: for each second color data in the second noise data, determine second distortion texture data according to the first noise data; determine, using the second distortion texture data, a second distortion offset corresponding to the second color data; determine, according to the second distortion offset and the second color data, an offset object corresponding to the second distortion offset; assign the second color data to the offset object corresponding to the second distortion offset; obtain noise distortion data corresponding to the second noise data; and generate the top-layer haze effect map according to the noise distortion data.
Optionally, the second processing module 404 is further adapted to: perform translucent processing according to a preset function and/or a preset addition color value and the noise distortion data, to generate the top-layer haze effect map.
Optionally, the second processing module 404 is further adapted to: obtain a to-be-determined offset object according to the second distortion offset and the second color data; judge whether the to-be-determined offset object exceeds a preset object range; if so, calculate the offset object corresponding to the second distortion offset according to a preset algorithm and the to-be-determined offset object; if not, determine the to-be-determined offset object as the offset object corresponding to the second distortion offset. A sketch of this processing follows.
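One possible reading of the above, sketched under the assumptions that the preset object range is simply the bounds of the noise texture and that the preset algorithm wraps out-of-range offset objects back into it; the embodiment leaves both choices open.

```python
import numpy as np

def warp_second_noise(second_noise, first_noise, coeff=6.0):
    """Produce noise distortion data by moving each second color data to the
    offset object addressed by its second distortion offset."""
    h, w = second_noise.shape[:2]
    nh, nw = first_noise.shape[:2]
    warped = np.zeros_like(second_noise)
    for y in range(h):
        for x in range(w):
            # Second distortion texture data taken from the first noise data.
            nx, ny = first_noise[y % nh, x % nw, :2] / 255.0
            ox = x + int((nx - 0.5) * coeff)
            oy = y + int((ny - 0.5) * coeff)
            # If the to-be-determined offset object exceeds the preset object range,
            # map it back with a preset algorithm; wrapping is assumed here.
            oy, ox = oy % h, ox % w
            warped[oy, ox] = second_noise[y, x]  # assign the second color data
    return warped
```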
The second generation module 405 is adapted to: determine a base effect image according to the image distortion data; and add the top-layer haze effect map onto the base effect image to obtain a distortion effect image.
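A sketch of adding the top-layer haze effect map onto the base effect image with translucent processing; the fixed alpha value and the additive color value are assumptions, since the preset function and preset addition color value are not specified by the embodiment, and the haze map is assumed to have been resized to the frame size with values in [0, 1].

```python
import numpy as np

def add_haze_overlay(base_effect, haze, alpha=0.35, add_color=(200, 200, 200)):
    """Add the top-layer haze effect map onto the base effect image with
    translucent processing (alpha and add_color are assumed constants)."""
    # haze: noise distortion data resized to the frame size, values in [0, 1] (assumption).
    haze_rgb = np.stack([haze] * 3, axis=-1)
    tint = haze_rgb * np.asarray(add_color, dtype=np.float32)    # preset addition color value
    out = (1.0 - alpha * haze_rgb) * base_effect + alpha * tint  # translucent blend
    return np.clip(out, 0, 255).astype(np.uint8)
```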
The third generation module 406 is adapted to: obtain a frame-processed image according to the distortion effect image.
The overlay module 407 is adapted to: cover the to-be-processed frame image with the frame-processed image to obtain processed video data.
The uploading module 408 is adapted to upload the processed video data to one or more cloud video platform servers, so that the cloud video platform servers display the video data on the cloud video platform.
The processed video data may be stored locally for the user to watch, or the uploading module 408 may upload it directly to one or more cloud video platform servers, such as the cloud video platform servers of iQIYI, Youku or Kuai Video, so that the cloud video platform servers display the video data on the cloud video platform.
In addition, in some application scenarios the to-be-processed frame image includes a specific object, such as a human body, and the user only wants to distort the specific-object region or the non-specific-object region in the to-be-processed frame image. In this case, the apparatus further includes: a segmentation module 409 and a determining module 410.
The segmentation module 409 is adapted to: perform scene segmentation processing on the to-be-processed frame image to obtain a scene segmentation result corresponding to the to-be-processed frame image.
The determining module 410 is adapted to: determine contour information of the specific object according to the scene segmentation result corresponding to the to-be-processed frame image.
In this case, the third generation module 406 is adapted to: obtain a local distortion effect image according to the contour information of the specific object, the to-be-processed frame image and the distortion effect image; and determine the local distortion effect image as the frame-processed image.
The third generation module 406 is further adapted to: extract a partial image from the distortion effect image according to the contour information of the specific object; and perform fusion processing on the to-be-processed frame image and the partial image to obtain the local distortion effect image.
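A sketch of the fusion processing, assuming the contour information has already been rasterized into a binary mask of the specific object (1 inside the contour, 0 outside); whether the distorted partial image replaces the specific-object region or the non-specific-object region is selectable, which is not fixed by the embodiment.

```python
import numpy as np

def fuse_local(frame, distortion_effect, mask, distort_object=True):
    """Extract a partial image from the distortion effect image by the contour
    mask of the specific object and fuse it with the to-be-processed frame image."""
    m = mask.astype(bool)
    if not distort_object:
        m = ~m  # distort the non-specific-object region instead
    fused = frame.copy()
    fused[m] = distortion_effect[m]  # partial image taken from the distortion effect image
    return fused
```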
According to the video data distortion processing apparatus provided by this embodiment, one kind of noise data is used to process the color component values of pixels in video frame images, so that a base effect image can be obtained conveniently; that noise data is also used to process another kind of noise data to obtain a top-layer haze effect map, so that video data with both a distortion effect and a haze effect can be obtained. The user does not need to process the video manually: the video is processed automatically, which improves the video data processing efficiency, optimizes the video data distortion processing manner, effectively improves the video data distortion effect, and greatly enriches the video data effects. In addition, video data with a local distortion effect can be obtained according to the scene segmentation result corresponding to the to-be-processed frame image. The present invention adopts a deep learning method to complete scene segmentation processing with high efficiency and high accuracy, effectively meeting the personalized needs of users.
The present invention also provides a non-volatile computer storage medium. The computer storage medium stores at least one executable instruction, and the executable instruction can execute the video data distortion processing method in any of the above method embodiments.
Fig. 5 shows a schematic structural diagram of a computing device according to an embodiment of the present invention. The specific embodiments of the present invention do not limit the specific implementation of the computing device.
As shown in Fig. 5, the computing device may include: a processor 502, a communication interface 504, a memory 506 and a communication bus 508.

Wherein:

The processor 502, the communication interface 504 and the memory 506 communicate with each other via the communication bus 508.

The communication interface 504 is used for communicating with network elements of other devices, such as clients or other servers.

The processor 502 is used for executing a program 510, and may specifically perform the relevant steps in the above video data distortion processing method embodiments.

Specifically, the program 510 may include program code, and the program code includes computer operation instructions.

The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.

The memory 506 is used for storing the program 510. The memory 506 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory.

The program 510 may specifically be used to cause the processor 502 to execute the video data distortion processing method in any of the above method embodiments. For the specific implementation of each step in the program 510, reference may be made to the corresponding descriptions of the corresponding steps and units in the above video data distortion processing embodiments, which will not be repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and will not be repeated here.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system or other device. Various general-purpose systems may also be used with the teaching herein. The structure required to construct such a system is apparent from the above description. In addition, the present invention is not directed to any particular programming language. It should be understood that the content of the invention described herein may be implemented using various programming languages, and the above description of a specific language is intended to disclose the best mode of the invention.

In the specification provided here, numerous specific details are set forth. It should be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.

Similarly, it should be understood that, in order to simplify the disclosure and to aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the present invention the features of the invention are sometimes grouped together in a single embodiment, figure or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.

Those skilled in the art will appreciate that the modules in the devices in the embodiments may be adaptively changed and arranged in one or more devices different from those of the embodiments. The modules, units or components in the embodiments may be combined into one module, unit or component, and in addition they may be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose.

Furthermore, those skilled in the art will understand that, although some embodiments described herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.

The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components according to the embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.

It should be noted that the above embodiments illustrate rather than limit the present invention, and those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any order; these words may be interpreted as names.

Claims (10)

1. A video data distortion processing method, the method comprising:
obtaining a to-be-processed frame image and first noise data in video data;
for each pixel in the to-be-processed frame image, determining first distortion texture data according to the first noise data, and processing a color component value of the pixel using the first distortion texture data;
obtaining image distortion data corresponding to the to-be-processed frame image;
obtaining a distortion effect image according to the image distortion data;
obtaining a frame-processed image according to the distortion effect image;
covering the to-be-processed frame image with the frame-processed image to obtain processed video data.
2. The method according to claim 1, wherein the obtaining a to-be-processed frame image in video data further comprises: obtaining a to-be-processed frame image in local video data and/or network video data.
3. The method according to claim 1, wherein the obtaining a to-be-processed frame image in video data further comprises: obtaining a to-be-processed frame image in video data synthesized from multiple local pictures and/or multiple network pictures.
4. The method according to claim 2, wherein the obtaining a to-be-processed frame image in local video data and/or network video data further comprises: obtaining a to-be-processed frame image in local video data and/or network video data of a user-specified time period.
5. The method according to claim 3, wherein the obtaining a to-be-processed frame image in video data synthesized from multiple local pictures and/or multiple network pictures further comprises: obtaining a to-be-processed frame image in video data synthesized from multiple local pictures and/or multiple network pictures of a user-specified time period.
6. The method according to any one of claims 1-5, wherein the first noise data includes multiple first color data;
the determining first distortion texture data according to the first noise data further comprises:
extracting first color data from the first noise data;
determining the first distortion texture data according to the extracted first color data.
7. The method according to claim 6, wherein the extracting first color data from the first noise data further comprises: extracting the first color data from the first noise data according to a time parameter.
8. A video data distortion processing apparatus, the apparatus comprising:
an acquisition module, adapted to obtain a to-be-processed frame image and first noise data in video data;
a first processing module, adapted to, for each pixel in the to-be-processed frame image, determine first distortion texture data according to the first noise data, and process a color component value of the pixel using the first distortion texture data;
a first generation module, adapted to obtain image distortion data corresponding to the to-be-processed frame image;
a second generation module, adapted to obtain a distortion effect image according to the image distortion data;
a third generation module, adapted to obtain a frame-processed image according to the distortion effect image;
an overlay module, adapted to cover the to-be-processed frame image with the frame-processed image to obtain processed video data.
9. A computing device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other via the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the video data distortion processing method according to any one of claims 1-7.
10. A computer storage medium, wherein at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform operations corresponding to the video data distortion processing method according to any one of claims 1-7.
CN201711002704.2A 2017-10-24 2017-10-24 Video data distortion processing method, device, computing device and storage medium Pending CN107682731A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711002704.2A CN107682731A (en) 2017-10-24 2017-10-24 Video data distortion processing method, device, computing device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711002704.2A CN107682731A (en) 2017-10-24 2017-10-24 Video data distortion processing method, device, computing device and storage medium

Publications (1)

Publication Number Publication Date
CN107682731A true CN107682731A (en) 2018-02-09

Family

ID=61142138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711002704.2A Pending CN107682731A (en) 2017-10-24 2017-10-24 Video data distortion processing method, device, computing device and storage medium

Country Status (1)

Country Link
CN (1) CN107682731A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108900904A (en) * 2018-07-27 2018-11-27 北京市商汤科技开发有限公司 Method for processing video frequency and device, electronic equipment and storage medium
CN110944230A (en) * 2019-11-21 2020-03-31 北京达佳互联信息技术有限公司 Video special effect adding method and device, electronic equipment and storage medium
CN110944230B (en) * 2019-11-21 2021-09-10 北京达佳互联信息技术有限公司 Video special effect adding method and device, electronic equipment and storage medium
CN111445563A (en) * 2020-03-23 2020-07-24 腾讯科技(深圳)有限公司 Image generation method and related device
CN111445563B (en) * 2020-03-23 2023-03-10 腾讯科技(深圳)有限公司 Image generation method and related device
CN111681177A (en) * 2020-05-18 2020-09-18 腾讯科技(深圳)有限公司 Video processing method and device, computer readable storage medium and electronic equipment
WO2022068040A1 (en) * 2020-09-30 2022-04-07 北京完美赤金科技有限公司 Method and apparatus for generating tear effect image, and storage medium and electronic apparatus

Similar Documents

Publication Publication Date Title
CN107682731A (en) Video data distortion processing method, device, computing device and storage medium
US11854072B2 (en) Applying virtual makeup products
US11854070B2 (en) Generating virtual makeup products
Ren et al. Low-light image enhancement via a deep hybrid network
CN107820027A (en) Video personage dresss up method, apparatus, computing device and computer-readable storage medium
CN108229279A (en) Face image processing process, device and electronic equipment
CN107862277A (en) Live dress ornament, which is dressed up, recommends method, apparatus, computing device and storage medium
CN107507155A (en) Video segmentation result edge optimization real-time processing method, device and computing device
CN107564085A (en) Scalloping processing method, device, computing device and computer-readable storage medium
CN107770606A (en) Video data distortion processing method, device, computing device and storage medium
CN107665482A (en) Realize the video data real-time processing method and device, computing device of double exposure
TW200416622A (en) Method and system for enhancing portrait images that are processed in a batch mode
CN106855996B (en) Gray-scale image coloring method and device based on convolutional neural network
CN108319894A (en) Fruit recognition methods based on deep learning and device
CN108876804A (en) It scratches as model training and image are scratched as methods, devices and systems and storage medium
CN107483892A (en) Video data real-time processing method and device, computing device
CN107613360A (en) Video data real-time processing method and device, computing device
CN110148088A (en) Image processing method, image rain removing method, device, terminal and medium
CN107547803A (en) Video segmentation result edge optimization processing method, device and computing device
CN113808277A (en) Image processing method and related device
Mejjati et al. Look here! a parametric learning based approach to redirect visual attention
CN107766803A (en) Video personage based on scene cut dresss up method, apparatus and computing device
CN107705279A (en) Realize the view data real-time processing method and device, computing device of double exposure
CN107743263A (en) Video data real-time processing method and device, computing device
CN107566853A (en) Realize the video data real-time processing method and device, computing device of scene rendering

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20180209)