CN107633228A - Video data processing method and device, and computing device - Google Patents

Video data processing method and device, and computing device

Info

Publication number
CN107633228A
CN107633228A
Authority
CN
China
Prior art keywords
video data
processing
frame image
information
pending
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710853663.1A
Other languages
Chinese (zh)
Inventor
眭帆
眭一帆
邱学侃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201710853663.1A
Publication of CN107633228A


Abstract

The invention discloses a video data processing method and device, and a computing device. The method includes: obtaining video data; screening the video data to obtain frame images to be processed that contain a specific object; performing scene segmentation on each frame image to be processed to obtain a foreground image for the specific object; obtaining input information from an external input source and extracting at least one information element from it; stylizing a background image according to the at least one information element; fusing the foreground image with the stylized background image to obtain a processed frame image; and overwriting the corresponding frame image to be processed with the processed frame image to obtain the processed video data. The invention uses deep learning to complete scene segmentation and 3D processing efficiently and with high accuracy. It places no demands on the user's technical skill and requires no manual editing: the video is processed automatically, greatly saving the user's time.

Description

Video data processing method and device, and computing device
Technical field
The present invention relates to the field of image processing, and in particular to a video data processing method and device, and a computing device.
Background technology
With the development of science and technology, image capture devices keep improving. Video recorded with them has become clearer, and its resolution and display effect have improved greatly. But recorded video is merely dull raw material and cannot meet users' growing personalized demands. In the prior art, a user can manually post-process a recorded video, but this requires considerable image-processing skill and a great deal of the user's time; the processing is cumbersome and technically complex.
Therefore, a video data processing method is needed that meets users' personalized demands while lowering the technical skill threshold.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a video data processing method and device, and a computing device, that overcome the above problems or at least partially solve them.
According to an aspect of the invention, there is provided a video data processing method, including:
obtaining video data;
screening the video data to obtain a frame image to be processed that contains a specific object;
performing scene segmentation on the frame image to be processed to obtain a foreground image for the specific object;
obtaining input information from an external input source, and extracting at least one information element from the input information;
stylizing a background image according to the at least one information element;
fusing the foreground image with the stylized background image to obtain a processed frame image;
overwriting the frame image to be processed with the processed frame image to obtain processed video data.
Alternatively, obtaining video data further comprises:
obtaining local video data and/or network video data.
Alternatively, obtaining video data further comprises:
obtaining video data synthesized from multiple local pictures and/or multiple network pictures.
Alternatively, screening the video data to obtain a frame image to be processed that contains a specific object further comprises:
screening the video data within a user-specified time period to obtain a frame image to be processed that contains the specific object.
Alternatively, the input information is music, and the at least one information element includes: amplitude, frequency and/or timbre.
Alternatively, stylizing the background image according to the at least one information element further comprises:
choosing, according to the values of the amplitude, frequency and/or timbre, a transformation mode for stylizing the background image, wherein the chosen transformation mode varies with those values;
stylizing the background image using the transformation mode.
Alternatively, the background image is the background image obtained by performing scene segmentation on the frame image to be processed, or a preset background image.
Alternatively, before fusing the foreground image with the stylized background image, the method further comprises:
performing 3D processing on the specific object.
Alternatively, before fusing the foreground image with the stylized background image, the method further comprises:
generating at least one dynamic effect to be loaded according to the at least one information element.
Alternatively, generating at least one dynamic effect to be loaded according to the at least one information element further comprises:
obtaining color information, position information and/or angle information of each dynamic effect to be loaded according to the at least one information element;
generating each dynamic effect according to the color information, position information and/or angle information.
Alternatively, obtaining the color information, position information and/or angle information of each dynamic effect to be loaded according to the at least one information element further comprises:
obtaining the color information, position information and/or angle information of each dynamic effect to be loaded according to the values of the amplitude, frequency and/or timbre, wherein the color, position and/or angle information varies with those values.
Alternatively, fusing the foreground image with the stylized background image to obtain a processed frame image further comprises:
fusing the foreground image with the stylized background image, performing overall tone processing, and loading the at least one dynamic effect to obtain the processed frame image.
Alternatively, the dynamic effect is a light-illumination effect.
Alternatively, the method further comprises:
uploading the processed video data to one or more cloud video platform servers, for the cloud video platform servers to display the video data on the cloud video platform.
According to another aspect of the present invention, there is provided a video data processing device, including:
an acquisition module, adapted to obtain video data;
a screening module, adapted to screen the video data to obtain a frame image to be processed that contains a specific object;
a segmentation module, adapted to perform scene segmentation on the frame image to be processed to obtain a foreground image for the specific object;
an extraction module, adapted to obtain input information from an external input source and extract at least one information element from the input information;
a stylization module, adapted to stylize a background image according to the at least one information element;
a fusion module, adapted to fuse the foreground image with the stylized background image to obtain a processed frame image;
an overwriting module, adapted to overwrite the frame image to be processed with the processed frame image to obtain processed video data.
Alternatively, the acquisition module is further adapted to:
obtain local video data and/or network video data.
Alternatively, the acquisition module is further adapted to:
obtain video data synthesized from multiple local pictures and/or multiple network pictures.
Alternatively, the screening module is further adapted to:
screen the video data within a user-specified time period to obtain a frame image to be processed that contains the specific object.
Alternatively, the input information is music, and the at least one information element includes: amplitude, frequency and/or timbre.
Alternatively, the stylization module is further adapted to:
choose, according to the values of the amplitude, frequency and/or timbre, a transformation mode for stylizing the background image, wherein the chosen transformation mode varies with those values; and stylize the background image using the transformation mode.
Alternatively, the background image is the background image obtained by performing scene segmentation on the frame image to be processed, or a preset background image.
Alternatively, the device further includes:
a 3D-processing module, adapted to perform 3D processing on the specific object.
Alternatively, the device further includes:
a generation module, adapted to generate at least one dynamic effect to be loaded according to the at least one information element.
Alternatively, the generation module is further adapted to:
obtain color information, position information and/or angle information of each dynamic effect to be loaded according to the at least one information element; and generate each dynamic effect according to the color information, position information and/or angle information.
Alternatively, the generation module is further adapted to:
obtain the color information, position information and/or angle information of each dynamic effect to be loaded according to the values of the amplitude, frequency and/or timbre, wherein the color, position and/or angle information varies with those values.
Alternatively, the fusion module is further adapted to:
fuse the foreground image with the stylized background image, perform overall tone processing, and load the at least one dynamic effect to obtain the processed frame image.
Alternatively, the dynamic effect is a light-illumination effect.
Alternatively, the device further includes:
an upload module, adapted to upload the processed video data to one or more cloud video platform servers, for the cloud video platform servers to display the video data on the cloud video platform.
According to another aspect of the invention, there is provided a computing device, including: a processor, a memory, a communication interface and a communication bus, the processor, the memory and the communication interface communicating with one another through the communication bus;
the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the above video data processing method.
In accordance with a further aspect of the present invention, there is provided a computer storage medium storing at least one executable instruction, the executable instruction causing a processor to perform the operations corresponding to the above video data processing method.
According to the video data processing method and device, and the computing device, provided by the invention: video data is obtained; the video data is screened to obtain frame images to be processed that contain a specific object; scene segmentation is performed on each frame image to be processed to obtain a foreground image for the specific object; input information is obtained from an external input source and at least one information element is extracted from it; a background image is stylized according to the at least one information element; the foreground image is fused with the stylized background image to obtain a processed frame image; and the frame image to be processed is overwritten with the processed frame image to obtain the processed video data. The invention performs scene segmentation on the frame image to be processed to obtain a foreground image for the specific object, and stylizes the background image according to at least one information element extracted from the input information, so that the style of the background image matches the input information of the external input source. Fusing the foreground image with the stylized background image then gives the processed video an overall display effect matching that input information, and the processed video is obtained directly. The invention uses deep learning to complete scene segmentation and 3D processing efficiently and with high accuracy. It places no demands on the user's technical skill and requires no manual editing: the video is processed automatically, greatly saving the user's time.
The above is only an overview of the technical solution of the present invention. In order that the technical means of the present invention can be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present invention will become more apparent, specific embodiments of the present invention are set forth below.
Brief description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The accompanying drawings are only for the purpose of showing the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, identical parts are denoted by the same reference numerals. In the drawings:
Fig. 1 shows a flow chart of a video data processing method according to an embodiment of the invention;
Fig. 2 shows a flow chart of a video data processing method according to another embodiment of the invention;
Fig. 3 shows a functional block diagram of a video data processing device according to an embodiment of the invention;
Fig. 4 shows a functional block diagram of a video data processing device according to another embodiment of the invention;
Fig. 5 shows a schematic structural diagram of a computing device according to an embodiment of the invention.
Detailed description of embodiments
Exemplary embodiments of the disclosure are described more fully below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope can be fully conveyed to those skilled in the art.
Fig. 1 shows a flow chart of a video data processing method according to an embodiment of the invention. As shown in Fig. 1, the video data processing method specifically comprises the following steps:
Step S101: obtain video data.
The obtained video data may be the user's local video data, or video data obtained from a network. Alternatively, video data synthesized from multiple local pictures may be obtained, or video data synthesized from multiple network pictures, or video data synthesized from multiple local pictures and multiple network pictures.
Step S102: screen the video data to obtain frame images to be processed that contain a specific object.
Video data contains many frame images, so it needs to be screened. Since the present invention processes a specific object, the screening yields the frame images to be processed that contain that specific object.
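A minimal sketch of this screening step, with the object detector abstracted behind a callback. The patent does not name a particular detector, and `screen_frames` together with the toy frame layout is purely illustrative:

```python
from typing import Callable, List

# A frame is modeled as a plain dict here; a real pipeline would hold
# decoded pixel data. The detector callback stands in for whatever
# detector the system uses (face, human body, etc.).
Frame = dict

def screen_frames(frames: List[Frame],
                  contains_object: Callable[[Frame], bool]) -> List[Frame]:
    """Keep only the frames in which the detector finds the specific object."""
    return [f for f in frames if contains_object(f)]

# Toy example: even-indexed frames "contain" the object.
frames = [{"index": i, "has_person": i % 2 == 0} for i in range(6)]
pending = screen_frames(frames, lambda f: f["has_person"])
```

Decoupling the screening loop from the detector lets the same pipeline screen by any criterion, including the user-specified time period described in the second embodiment below.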
Step S103: perform scene segmentation on a frame image to be processed to obtain a foreground image for the specific object.
The frame image to be processed contains a specific object, such as a human body. Scene segmentation mainly splits the specific object out of the frame image to be processed, yielding a foreground image for the specific object; this foreground image may contain only the specific object.
Scene segmentation of the frame image to be processed can use deep learning. Deep learning is a branch of machine learning based on representation learning of data. An observation (such as an image) can be represented in many ways, for example as a vector of per-pixel intensity values, or more abstractly as a series of edges, regions of particular shapes, and so on. Some specific representations make it easier to learn a task from examples (for example, face recognition or facial expression recognition). For instance, a human-body segmentation method based on deep learning can be used to segment the frame image to be processed and obtain a foreground image containing the human body.
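Whatever network produces the segmentation, its output can be reduced to a per-pixel mask. The sketch below applies such a hypothetical mask to a grayscale frame to isolate the foreground; the function name and the nested-list layout are assumptions, not from the patent:

```python
def extract_foreground(frame, mask, cleared=0):
    """Apply a per-pixel segmentation mask (1 = specific object, 0 = background)
    to a grayscale frame: object pixels are kept, all other pixels cleared.
    A real system would obtain `mask` from a segmentation network."""
    return [[px if m else cleared for px, m in zip(row, mrow)]
            for row, mrow in zip(frame, mask)]

frame = [[10, 20, 30],
         [40, 50, 60]]
mask  = [[0, 1, 1],
         [0, 0, 1]]
fg = extract_foreground(frame, mask)  # only masked pixels survive
```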
Step S104: obtain input information from an external input source, and extract at least one information element from the input information.
Real-time input information of the external input source is obtained, and at least one information element is extracted from it. The extraction depends on the specific external input source. Since the elements are extracted in real time from the input information available at each moment, when the input information of the external input source differs from moment to moment, the specific values of the extracted information elements also differ.
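For a music input (as in the second embodiment), amplitude and frequency can be estimated per audio buffer. The sketch below shows one simple way under stated assumptions: RMS level for amplitude and a naive DFT peak for the dominant frequency; the patent does not prescribe an extraction method, and timbre (which would need a spectral-envelope model) is omitted:

```python
import math

def extract_elements(samples, sample_rate):
    """Extract two 'information elements' from one audio buffer:
    amplitude as RMS level, dominant frequency via a naive DFT peak."""
    n = len(samples)
    amplitude = math.sqrt(sum(s * s for s in samples) / n)
    best_bin, best_mag = 0, -1.0
    for k in range(1, n // 2):  # skip DC, stop below Nyquist
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return {"amplitude": amplitude, "frequency": best_bin * sample_rate / n}

# 50 Hz sine sampled at 400 Hz over exactly 10 cycles.
tone = [math.sin(2 * math.pi * 50 * i / 400) for i in range(80)]
elements = extract_elements(tone, 400)
```

A production system would use an FFT (O(n log n)) rather than this O(n²) DFT scan, but the extracted elements are the same.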
Step S105: stylize a background image according to the at least one information element.
The background image may be stylized according to a single information element, or according to multiple information elements.
The background image may be the background image obtained by performing scene segmentation on the frame image to be processed, or a preset background image.
Step S106: fuse the foreground image with the stylized background image to obtain a processed frame image.
The foreground image and the stylized background image are fused. To make them blend better, when the frame image to be processed is segmented, the edge of the resulting foreground image is made semi-transparent, blurring the edge of the specific object for a better fusion.
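The semi-transparent edge described here amounts to per-pixel alpha blending. A minimal sketch over one pixel row, assuming 8-bit grayscale values and a per-pixel alpha channel (both conventions are assumptions, not from the patent):

```python
def fuse_row(foreground, background, alpha):
    """Alpha-blend one row of foreground pixels onto the stylized background.
    Near the object's edge, alpha around 0.5 gives the 'semi-transparent'
    edge described above, so the boundary blurs into the background."""
    return [round(a * f + (1.0 - a) * b)
            for f, b, a in zip(foreground, background, alpha)]

# A fully opaque object pixel, a soft edge pixel, and pure background.
blended = fuse_row([200, 200, 200], [100, 100, 100], [1.0, 0.5, 0.0])
```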
Step S107: overwrite the frame image to be processed with the processed frame image to obtain processed video data.
The corresponding frame image to be processed is directly overwritten with the processed frame image, so the processed video data is obtained directly.
According to the video data processing method provided by the present invention: video data is obtained; the video data is screened to obtain frame images to be processed that contain a specific object; scene segmentation is performed on each frame image to be processed to obtain a foreground image for the specific object; input information is obtained from an external input source and at least one information element is extracted from it; a background image is stylized according to the at least one information element; the foreground image is fused with the stylized background image to obtain a processed frame image; and the frame image to be processed is overwritten with the processed frame image to obtain the processed video data. The invention performs scene segmentation on the frame image to be processed to obtain a foreground image for the specific object, and stylizes the background image according to at least one information element extracted from the input information, so that the style of the background image matches the input information of the external input source. Fusing the foreground image with the stylized background image then gives the processed video an overall display effect matching that input information, and the processed video is obtained directly. Deep learning is used to complete scene segmentation efficiently and with high accuracy. No demands are placed on the user's technical skill and no manual editing is required: the video is processed automatically, greatly saving the user's time.
Fig. 2 shows a flow chart of a video data processing method according to another embodiment of the invention. As shown in Fig. 2, the video data processing method specifically comprises the following steps:
Step S201: obtain video data.
The obtained video data may be the user's local video data, or video data obtained from a network. Alternatively, video data synthesized from multiple local pictures may be obtained, or video data synthesized from multiple network pictures, or video data synthesized from multiple local pictures and multiple network pictures.
Step S202: screen the video data within a user-specified time period to obtain frame images to be processed that contain a specific object.
Video data contains many frame images and needs to be screened. When screening, only the video data within a user-specified time period may be screened, leaving the video data of other periods unscreened. For example, because the second half of the video is often the climax, the user-specified period is often the second half of the video data. In that case only the video data of the user-specified period is screened, yielding the frame images to be processed that contain the specific object within that period.
Step S203: perform scene segmentation on a frame image to be processed to obtain a foreground image for the specific object.
This step is the same as step S103 in the embodiment of Fig. 1 and is not repeated here.
Step S204: obtain input information from an external input source, and extract at least one information element from the input information.
Real-time input information of the external input source is obtained and at least one information element is extracted from it, depending on the specific source. The input information of the external input source may be external music, sound, and so on. For example, when the input information is music, the extracted information elements include amplitude, frequency, timbre and the like. Since extraction happens in real time on the input available at each moment, when the input information of the external input source differs from moment to moment, the specific values of the extracted information elements also differ.
Step S205: stylize a background image according to the at least one information element.
Specifically, a transformation mode for stylizing the background image is chosen according to the values of the amplitude, frequency and/or timbre in the information elements; the chosen transformation mode varies with those values. The choice may be made from a single element, such as the amplitude value, or from several, such as the values of amplitude, frequency and timbre together. The background image is then stylized using the chosen transformation mode. A transformation mode may include, for example, a filter chosen according to the information elements, such as a nostalgia filter, a Blues filter, or a "cool" filter; the background image is set to the filter style corresponding to the chosen filter.
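One hypothetical mapping from element values to a filter, in the spirit of this paragraph. The thresholds and the filter set are invented for illustration, since the patent only requires that the chosen mode vary with the values:

```python
def choose_filter(amplitude, frequency):
    """Pick a transformation mode (filter) from element values.
    All thresholds below are illustrative assumptions."""
    if amplitude > 0.8:
        return "cool"        # loud passages -> high-energy 'cool' filter
    if frequency < 250.0:
        return "blues"       # bass-heavy passages -> 'Blues' filter
    return "nostalgia"       # everything else -> 'nostalgia' filter
```

Because the mapping is a pure function of the element values, the same music moment always yields the same style, which keeps the stylization visually consistent across adjacent frames.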
The background image here may be the background image obtained by performing scene segmentation on the frame image to be processed, or a preset background image.
Step S206: generate at least one dynamic effect to be loaded according to the at least one information element.
One or more dynamic effects to be loaded may be generated from a single information element, or a single dynamic effect from multiple information elements; different information elements may generate different dynamic effects.
A dynamic effect includes color information, position information, angle information, and so on. The color, position and/or angle information of each dynamic effect to be loaded is obtained according to the at least one information element, and each dynamic effect is generated from that information. Specifically, this information is obtained according to the values of the amplitude, frequency and/or timbre, and varies with those values. If the dynamic effect is a light-illumination effect, its color, position and angle information can all be generated from these values: for instance, the color information may be generated from the amplitude value, or the position information from the amplitude value, or the position information from the frequency value, and so on. The exact correspondence between the values of amplitude, frequency and timbre and the generated light's color, position and angle information is not limited here.
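Since the paragraph above deliberately leaves the value-to-light correspondence open, the following is one illustrative mapping; every constant (the 2000 Hz normalization, the bluish color, the 90-degree sweep) is an assumption:

```python
def light_effect_params(amplitude, frequency, max_freq=2000.0):
    """One illustrative mapping from element values to a light-illumination
    effect: amplitude drives brightness, frequency drives the normalized
    horizontal position and the beam angle."""
    brightness = max(0, min(255, int(amplitude * 255)))
    ratio = max(0.0, min(1.0, frequency / max_freq))
    return {
        "color": (brightness, brightness, 255),  # bluish light, louder = brighter
        "x": ratio,                              # horizontal position in [0, 1]
        "angle": ratio * 90.0,                   # beam angle in degrees
    }

params = light_effect_params(0.5, 1000.0)
```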
Step S207: perform 3D processing on the specific object.
To make the display effect of the loaded dynamic effect more three-dimensional, the specific object may be 3D-processed. Take a human face as the specific object: if the dynamic effect is a light-illumination effect and the light shines from the right side of the face, then in real life the left side of the face would not be lit. After 3D processing of the face, the display effect that the left side of the face is not lit can be achieved. Without 3D processing, the face is a two-dimensional image, its left side would also be lit, and the display effect would look unreal.
When the specific object is three-dimensionalized, the three-dimensionalization may be performed by deep learning. Specifically, when a human face is three-dimensionalized using deep learning, key information of the face is extracted first. The key information may specifically be key point information, key area information, and/or key line information. Embodiments of the invention are illustrated with key point information as an example, but the key information of the invention is not limited thereto. Using key point information improves the processing speed and efficiency of the three-dimensionalization: the processing can be carried out directly from the key points, without subsequent computation and analysis of more complex key information. Meanwhile, key points are easy to extract and can be extracted accurately, so the three-dimensionalization is more precise. When performing the three-dimensionalization, a three-dimensional face model is first constructed. Building the three-dimensional model is based on the identity and expression reconstruction matrices of a 3D face database: given a set of key point information of a face, the identity coefficients, expression reconstruction coefficients, and rotation-scale-translation parameters are solved by coordinate descent until the Euclidean distance converges, and the three-dimensional structural model of the corresponding face is thereby constructed. The human face is then three-dimensionalized using this structural model, yielding a three-dimensionalized face. It should be noted that the specific object after three-dimensionalization carries no texture feature information. The image texture information of the specific object in the frame image to be processed is therefore further extracted; the image texture information records information such as the spatial color distribution and light intensity distribution of the specific object in the frame image to be processed. When extracting the image texture information of the specific object, methods such as the LBP (local binary patterns) method or gray-level co-occurrence matrices may be used. The specific object after three-dimensionalization is then rendered according to the extracted image texture information, yielding a three-dimensionalized specific object with texture features.
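As a hedged illustration of the texture-extraction step named above, the following sketch computes a basic 8-neighbor LBP code map with NumPy. The radius-1 neighborhood and the grayscale input are illustrative assumptions, not the patent's exact implementation:

```python
import numpy as np

def lbp_map(gray):
    """Compute an 8-neighbor local binary pattern code for each interior pixel.

    gray: 2-D uint8 array. Returns an (H-2, W-2) array of LBP codes 0..255.
    """
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # Neighbor offsets in clockwise order starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((neighbor >= center).astype(np.int32) << bit)
    return code
```

A flat image produces the all-ones code everywhere, since every neighbor compares as greater than or equal to its center; real textures produce varied codes whose histogram summarizes the local pattern statistics.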
Step S208: the foreground image and the stylized background image are fused and subjected to overall tone processing, and at least one dynamic effect is loaded, to obtain the processed frame image.
The foreground image and the stylized background image are first fused and subjected to overall tone processing, so that the fused image looks more natural. On this basis, at least one dynamic effect is loaded, yielding a processed frame image that matches the input information of the external input source. For example, when the input information is music, the dynamic effect is a light-illumination effect responding to the music like a light show, and the background image is a disco-style background picture, the processed frame image as a whole presents the display effect of a person in a disco under lights that change with the music.
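One simple way to realize the overall tone processing described above is to shift the composited frame's per-channel mean toward a target tone. This mean-shift strategy and the blend weight are assumptions of the sketch, not the patent's prescribed method:

```python
import numpy as np

def harmonize_tone(frame, target_mean, strength=0.5):
    """Shift the frame's per-channel mean toward target_mean.

    frame: float array in [0, 1], shape (H, W, 3).
    target_mean: length-3 sequence, the desired per-channel mean.
    strength: 0 keeps the frame unchanged, 1 fully matches the target mean.
    """
    frame = np.asarray(frame, dtype=np.float64)
    shift = (np.asarray(target_mean) - frame.mean(axis=(0, 1))) * strength
    return np.clip(frame + shift, 0.0, 1.0)
```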
Further, to allow the foreground image and the stylized background image to blend better, when the frame image to be processed is segmented, the edge of the foreground obtained by segmentation is made translucent, blurring the edge of the specific object so that the fusion looks smoother.
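The translucent-edge idea can be sketched as feathering the binary segmentation mask before alpha compositing. The repeated 3x3 box blur and the number of passes below are illustrative assumptions standing in for whatever edge-softening an implementation chooses:

```python
import numpy as np

def feather_mask(mask, passes=2):
    """Soften a binary mask's edges with repeated 3x3 box blurs (zero-padded)."""
    alpha = mask.astype(np.float64)
    for _ in range(passes):
        padded = np.pad(alpha, 1)
        # Average the 3x3 neighborhood around every pixel.
        alpha = sum(padded[dy:dy + alpha.shape[0], dx:dx + alpha.shape[1]]
                    for dy in range(3) for dx in range(3)) / 9.0
    return alpha

def composite(foreground, background, alpha):
    """Alpha-blend the foreground over the background, per pixel."""
    a = alpha[..., None]
    return foreground * a + background * (1.0 - a)
```

Interior pixels keep alpha 1.0 while edge pixels fall below it, so the object's boundary blends into the stylized background instead of cutting against it.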
Step S209: the processed frame image is written over the corresponding frame image to be processed, obtaining the processed video data.
The corresponding frame image to be processed is directly overwritten with the processed frame image, so that the processed video data is obtained directly.
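Overwriting frames in place can be sketched as an index-keyed replacement over the decoded frame sequence; the list-of-frames representation and the index map are assumptions for illustration only:

```python
def overwrite_frames(frames, processed):
    """Replace frames in place and return the resulting sequence.

    frames: list of frame objects (e.g. image arrays) for the whole video.
    processed: dict mapping frame index -> processed frame.
    """
    for index, new_frame in processed.items():
        frames[index] = new_frame
    return frames
```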
Step S210: the processed video data is uploaded to one or more cloud video platform servers, so that the cloud video platform servers display the video data on their cloud video platforms.
The processed video data may be stored locally for the user to watch, or may be uploaded directly to one or more cloud video platform servers, such as the iQIYI, Youku, or Kuai Video cloud video platform servers, so that the cloud video platform servers display the video data on their cloud video platforms.
It is to be added according at least one information element of extraction, generation according to video data handling procedure provided by the invention At least one dynamic effect of load.Background image after foreground image and stylization processing is subjected to fusion treatment, adjustment is overall Tone, and the dynamic effect of load information key element generation, make the video after processing that the input letter with external input sources integrally be presented The display effect of manner of breathing matching.Meanwhile to make the display effect of the dynamic effect of loading more three-dimensional, special object can be entered Row three dimensional stress processing, so that the display effect of the video after processing is closer to truly.After the present invention can directly obtain processing Video, further, the video data after processing can also be directly uploaded to one or more cloud video platform servers, for Cloud video platform server is shown video data in cloud video platform.The present invention is not limited to user's technical merit, no Need user to handle manually video, realize the processing to video automatically, greatly save user time.
Fig. 3 shows a functional block diagram of a video data processing apparatus according to an embodiment of the invention. As shown in Fig. 3, the video data processing apparatus includes the following modules:
Acquisition module 301, adapted to acquire video data.
The video data acquired by the acquisition module 301 may be the user's local video data, or video data from a network. The acquisition module 301 may also acquire video data synthesized from multiple local pictures, from multiple network pictures, or from a combination of multiple local and network pictures.
Screening module 302, adapted to screen the video data to obtain frame images to be processed that contain the specific object.
The video data contains many frame images, so the screening module 302 needs to screen it. Since the invention processes the specific object, the screening module 302 obtains, after screening, the frame images to be processed that contain the specific object.
When screening, the screening module 302 may also screen only the video data within a user-specified time span, without screening the video data of other time spans. For example, because the second half of a video is often its climax, the user-specified time span is often the second half of the video data. The screening module 302 then screens only the video data of the user-specified time span and obtains the frame images to be processed, containing the specific object, within that span.
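The time-span screening can be sketched as filtering decoded frames by timestamp and by a detector's verdict. The `contains_object` predicate below stands in for whatever face or body detector an implementation uses; it is purely an assumption of this sketch:

```python
def screen_frames(frames, fps, start_s, end_s, contains_object):
    """Return (index, frame) pairs inside [start_s, end_s) that contain the object.

    frames: sequence of decoded frames; fps: frames per second;
    contains_object: callable frame -> bool (e.g. a face detector).
    """
    selected = []
    for index, frame in enumerate(frames):
        timestamp = index / fps
        if start_s <= timestamp < end_s and contains_object(frame):
            selected.append((index, frame))
    return selected
```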
Segmentation module 303, adapted to perform scene segmentation processing on the frame images to be processed to obtain a foreground image for the specific object.
A frame image to be processed contains the specific object, such as a human body. The segmentation module 303 performs scene segmentation on the frame image, chiefly to segment the specific object out of the frame image, obtaining the foreground image for the specific object; this foreground image may contain only the specific object.
When performing the scene segmentation, the segmentation module 303 may use a deep learning method. Deep learning is a machine learning method based on representation learning of data. An observation (such as an image) can be represented in many ways, for example as a vector of per-pixel intensity values, or more abstractly as a set of edges or regions of particular shapes. Certain representations make it easier to learn a task from examples (for example, face recognition or facial expression recognition). For instance, the segmentation module 303 may perform scene segmentation on the frame image using a deep-learning human-body segmentation method to obtain the foreground image containing the human body.
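Assuming a pretrained person-segmentation network has already produced a per-pixel probability map for the frame (the network itself is outside this sketch and is an assumption), the foreground image can be cut out by thresholding that map:

```python
import numpy as np

def extract_foreground(frame, prob_map, threshold=0.5):
    """Zero out everything the segmentation network judged to be background.

    frame: (H, W, 3) float array; prob_map: (H, W) person probabilities in [0, 1].
    Returns a frame containing only the specific object.
    """
    mask = (prob_map >= threshold).astype(frame.dtype)
    return frame * mask[..., None]
```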
Extraction module 304, adapted to acquire the input information of an external input source and extract at least one information element from the input information.
The extraction module 304 acquires the real-time input information of the external input source and extracts at least one information element from it. The input information of the external input source may be external music, sound, or the like. For example, when the input information is music, the information elements extracted by the extraction module 304 include amplitude, frequency, timbre, and the like. When extracting information elements, the extraction module 304 extracts according to the specific external input source. The information elements extracted in real time come from the input information acquired at that moment, so when the input information acquired at different moments differs, the specific values of the extracted information elements also differ.
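For a music input, the amplitude, frequency, and timbre elements can be sketched from a short audio frame with a Fourier transform. Treating the RMS level as amplitude, the strongest bin as frequency, and the spectral centroid as a timbre proxy is an assumption of this sketch, not the patent's prescribed method:

```python
import numpy as np

def extract_elements(samples, sample_rate):
    """Extract (amplitude, dominant_frequency_hz, spectral_centroid_hz) from one frame."""
    samples = np.asarray(samples, dtype=np.float64)
    amplitude = np.sqrt(np.mean(samples ** 2))  # RMS level as the amplitude element
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    dominant = freqs[np.argmax(spectrum)]       # strongest bin as the frequency element
    centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))  # timbre proxy
    return amplitude, dominant, centroid
```

Because each call works on the frame captured at that moment, the element values naturally change as the music changes, matching the real-time behavior described above.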
Stylization module 305, adapted to stylize the background image according to the at least one information element.
When stylizing the background image, the stylization module 305 may do so according to a single information element, or according to multiple information elements.
Specifically, the stylization module 305 chooses, according to the values of the amplitude, frequency, and/or timbre among the information elements, the change pattern by which the background image is stylized, the chosen change pattern varying with those values. The change pattern may be chosen according to a single information element, such as the value of the amplitude, or according to multiple information elements, such as the values of the amplitude, frequency, and timbre together. The stylization module 305 then stylizes the background image with the chosen change pattern. A change pattern may include, for example, a filter: the stylization module 305 selects a filter corresponding to the information elements, such as a nostalgia filter, a blues filter, or a "handsome" filter, and sets the background image to the filter style corresponding to the selected filter.
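A minimal sketch of "choose a change pattern from the element values, then apply it" follows. The amplitude threshold and the two toy channel-gain filters are illustrative assumptions; real filters would be full color pipelines:

```python
import numpy as np

FILTERS = {
    # Per-channel gains standing in for real filter pipelines (RGB order).
    "nostalgia": np.array([1.1, 1.0, 0.8]),
    "blues":     np.array([0.8, 0.9, 1.2]),
}

def choose_filter(amplitude):
    """Pick a change pattern from the amplitude value alone."""
    return "blues" if amplitude < 0.5 else "nostalgia"

def stylize(background, amplitude):
    """Apply the chosen filter's channel gains to a float RGB image in [0, 1]."""
    gains = FILTERS[choose_filter(amplitude)]
    return np.clip(background * gains, 0.0, 1.0)
```

Extending `choose_filter` to consider frequency and timbre as well only changes the selection logic; the application step stays the same.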
The background image may be the background image of the frame image to be processed, obtained by performing scene segmentation on that frame image, or a preset background image.
Fusion module 306, adapted to fuse the foreground image with the stylized background image to obtain the processed frame image.
The fusion module 306 fuses the foreground image with the stylized background image. To allow them to blend better during fusion, when the segmentation module 303 segments the frame image to be processed, the edge of the foreground obtained by segmentation is made translucent, blurring the edge of the specific object so that the fusion module 306 blends them better.
Overlay module 307, adapted to write the processed frame image over the corresponding frame image to be processed, obtaining the processed video data.
The overlay module 307 directly overwrites the corresponding frame image to be processed with the processed frame image, so that the processed video data is obtained directly.
According to the video data processing apparatus provided by the invention, video data is acquired; the video data is screened to obtain frame images to be processed that contain a specific object; scene segmentation is performed on the frame images to obtain a foreground image for the specific object; the input information of an external input source is acquired and at least one information element is extracted from it; the background image is stylized according to the at least one information element; the foreground image and the stylized background image are fused to obtain the processed frame image; and the processed frame image is written over the frame image to be processed, obtaining the processed video data. The invention performs scene segmentation on the frame images to be processed to obtain the foreground image for the specific object, and stylizes the background image according to at least one information element extracted from the input information, so that the style of the background image matches the input information of the external input source. The foreground image and the stylized background image are then fused, so that the processed video as a whole presents a display effect matching the input information of the external input source, and the processed video is obtained directly. The invention adopts a deep learning method and completes the scene segmentation with high efficiency and high accuracy. It places no demands on the user's technical skill: the user does not need to process the video manually, the video is processed automatically, and a great deal of the user's time is saved.
Fig. 4 shows a functional block diagram of a video data processing apparatus according to another embodiment of the invention. As shown in Fig. 4, it differs from Fig. 3 in that the apparatus further includes:
Generation module 308, adapted to generate at least one dynamic effect to be loaded according to the at least one information element.
The generation module 308 may generate one or more dynamic effects to be loaded from a single information element, or one dynamic effect to be loaded from multiple information elements; different information elements may yield different dynamic effects.
A dynamic effect includes color information, position information, angle information, and the like. The generation module 308 obtains the color information, position information, and/or angle information of each dynamic effect to be loaded according to the at least one information element, and generates each dynamic effect from them. Specifically, the generation module 308 obtains the color information, position information, and/or angle information of each dynamic effect to be loaded according to the values of the amplitude, frequency, and/or timbre among the information elements, the color, position, and/or angle varying with those values. If the dynamic effect is a light-illumination effect, the generation module 308 may generate the color information, position information, angle information, etc. of the light-illumination effect according to the values of the amplitude, frequency, and/or timbre: for example, the color information from the value of the amplitude, the position information from the value of the amplitude, or the position information from the value of the frequency. The specific correspondence between the values of amplitude, frequency, and timbre and the generated color, position, and angle information of the light-illumination effect is not limited here.
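Since the text leaves the element-to-effect correspondence open, here is one possible mapping from element values to a light-illumination effect's parameters; every constant below is an arbitrary illustrative choice:

```python
def light_effect(amplitude, frequency_hz, centroid_hz):
    """Map audio information elements to a light-illumination effect's parameters.

    Returns a dict with a hue in degrees, a horizontal position in [0, 1],
    and a beam angle in degrees.
    """
    hue = min(amplitude, 1.0) * 360.0            # louder -> further around the color wheel
    position = (frequency_hz % 1000.0) / 1000.0  # pitch sweeps the light across the frame
    angle = 15.0 + min(centroid_hz / 8000.0, 1.0) * 60.0  # brighter timbre -> wider beam
    return {"hue_deg": hue, "position": position, "angle_deg": angle}
```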
Three-dimensionalization module 309, adapted to three-dimensionalize the specific object.
To make the display effect of the loaded dynamic effect more three-dimensional, the three-dimensionalization module 309 may three-dimensionalize the specific object. Taking a human face as the specific object as an example, when the dynamic effect is a light-illumination effect and the light is cast from the right side of the face, the left side of the face would not be illuminated in real life. After the three-dimensionalization module 309 has three-dimensionalized the face, the display effect in which the left side of the face is not illuminated can be realized. Without three-dimensionalization, however, the face remains a two-dimensional image, the left side of the face would also appear illuminated, and the display effect would look unrealistic.
When the three-dimensionalization module 309 three-dimensionalizes the specific object, the three-dimensionalization may be performed by deep learning. Specifically, when the three-dimensionalization module 309 three-dimensionalizes a human face using deep learning, key information of the face is extracted first. The key information may specifically be key point information, key area information, and/or key line information. Embodiments of the invention are illustrated with key point information as an example, but the key information of the invention is not limited thereto. Using key point information improves the processing speed and efficiency of the three-dimensionalization: the processing can be carried out directly from the key points, without subsequent computation and analysis of more complex key information. Meanwhile, key points are easy to extract and can be extracted accurately, so the three-dimensionalization is more precise. When performing the three-dimensionalization, the three-dimensionalization module 309 first constructs a three-dimensional face model. Building the three-dimensional model is based on the identity and expression reconstruction matrices of a 3D face database: given a set of key point information of a face, the identity coefficients, expression reconstruction coefficients, and rotation-scale-translation parameters are solved by coordinate descent until the Euclidean distance converges, and the three-dimensional structural model of the corresponding face is thereby constructed. The three-dimensionalization module 309 then three-dimensionalizes the human face using this structural model, yielding a three-dimensionalized face. It should be noted that the specific object after three-dimensionalization carries no texture feature information. The three-dimensionalization module 309 therefore further extracts the image texture information of the specific object in the frame image to be processed; the image texture information records information such as the spatial color distribution and light intensity distribution of the specific object in the frame image to be processed. When extracting the image texture information of the specific object, the three-dimensionalization module 309 may use methods such as the LBP (local binary patterns) method or gray-level co-occurrence matrices. The three-dimensionalization module 309 then renders the three-dimensionalized specific object according to the extracted image texture information, yielding a three-dimensionalized specific object with texture features.
After the above modules have run, the fusion module 306 first fuses the foreground image with the stylized background image and performs overall tone processing, so that the fused image looks more natural. On this basis, the fusion module 306 loads at least one dynamic effect, realizing a frame image that matches the input information of the external input source. For example, when the input information is music, the dynamic effect is a light-illumination effect responding to the music like a light show, and the background image is a disco-style background picture, the processed frame image as a whole presents the display effect of a person in a disco under lights that change with the music.
Upload module 310, adapted to upload the processed video data to one or more cloud video platform servers, so that the cloud video platform servers display the video data on their cloud video platforms.
The processed video data may be stored locally for the user to watch, or the upload module 310 may upload it directly to one or more cloud video platform servers, such as the iQIYI, Youku, or Kuai Video cloud video platform servers, so that the cloud video platform servers display the video data on their cloud video platforms.
It is to be added according at least one information element of extraction, generation according to video data processing apparatus provided by the invention At least one dynamic effect of load.Background image after foreground image and stylization processing is subjected to fusion treatment, adjustment is overall Tone, and the dynamic effect of load information key element generation, make the video after processing that the input letter with external input sources integrally be presented The display effect of manner of breathing matching.Meanwhile to make the display effect of the dynamic effect of loading more three-dimensional, special object can be entered Row three dimensional stress processing, so that the display effect of the video after processing is closer to truly.After the present invention can directly obtain processing Video, further, the video data after processing can also be directly uploaded to one or more cloud video platform servers, for Cloud video platform server is shown video data in cloud video platform.The present invention is not limited to user's technical merit, no Need user to handle manually video, realize the processing to video automatically, greatly save user time.
The invention also provides a non-volatile computer storage medium storing at least one executable instruction, the computer-executable instruction being capable of executing the video data processing method of any of the above method embodiments.
Fig. 5 shows a schematic structural diagram of a computing device according to an embodiment of the invention; the specific embodiments of the invention do not limit the specific implementation of the computing device.
As shown in Fig. 5, the computing device may include: a processor 502, a communications interface 504, a memory 506, and a communication bus 508.
Wherein:
The processor 502, the communications interface 504, and the memory 506 communicate with one another through the communication bus 508.
The communications interface 504 is used for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is used for executing a program 510, and may specifically perform the relevant steps of the above video data processing method embodiments.
Specifically, the program 510 may include program code, and the program code includes computer operation instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the invention. The one or more processors included in the computing device may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 506 is used for storing the program 510. The memory 506 may include high-speed RAM, and may also include non-volatile memory, for example at least one magnetic disk memory.
The program 510 may specifically be used to cause the processor 502 to execute the video data processing method of any of the above method embodiments. For the specific implementation of each step in the program 510, reference may be made to the corresponding steps and the descriptions of the corresponding units in the above video data processing embodiments, which are not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, reference may be made, for the specific working processes of the devices and modules described above, to the corresponding process descriptions in the foregoing method embodiments, which are likewise not repeated here.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system, or other device. Various general-purpose systems may also be used with the teaching herein, and the structure required to construct such systems is apparent from the description above. Moreover, the invention is not directed to any particular programming language. It should be understood that the content of the invention described herein may be implemented in various programming languages, and the description above of a specific language was made to disclose the best mode of the invention.
In the specification provided here, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be appreciated that, to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments above. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the devices of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of an embodiment may be combined into one module, unit, or component, and may furthermore be divided into multiple sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will understand that, although some embodiments described herein include certain features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the video data processing device according to embodiments of the invention. The invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments describe rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of multiple such elements. The invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any order; these words may be interpreted as names.

Claims (10)

1. A video data processing method, comprising:
acquiring video data;
screening the video data to obtain frame images to be processed that contain a specific object;
performing scene segmentation processing on the frame images to be processed to obtain a foreground image for the specific object;
acquiring input information of an external input source, and extracting at least one information element from the input information;
stylizing a background image according to the at least one information element;
fusing the foreground image with the stylized background image to obtain a processed frame image; and
writing the processed frame image over the corresponding frame image to be processed, obtaining processed video data.
2. The method according to claim 1, wherein acquiring video data further comprises:
acquiring local video data and/or network video data.
3. The method according to claim 1, wherein acquiring video data further comprises:
acquiring video data synthesized from multiple local pictures and/or multiple network pictures.
4. The method according to any one of claims 1-3, wherein screening the video data to obtain frame images to be processed that contain the specific object further comprises:
screening the video data of a user-specified time span to obtain the frame images to be processed that contain the specific object.
5. The method according to any one of claims 1-4, wherein the input information is music, and the at least one information element comprises: amplitude, frequency, and/or timbre.
6. The method according to any one of claims 1-5, wherein stylizing the background image according to the at least one information element further comprises:
choosing, according to values of the amplitude, frequency, and/or timbre, a change pattern for stylizing the background image, wherein the chosen change pattern varies with the values of the amplitude, frequency, and/or timbre; and
stylizing the background image with the change pattern.
7. The method according to any one of claims 1-6, wherein the background image is the background image obtained by performing scene segmentation on the frame image to be processed, or a preset background image.
8. A video data processing apparatus, comprising:
an acquisition module adapted to acquire video data;
a screening module adapted to screen the video data to obtain pending frame images containing a specific object;
a segmentation module adapted to perform scene segmentation on the pending frame images to obtain a foreground image for the specific object;
an extraction module adapted to acquire input information from an external input source in real time and extract at least one information element from the input information;
a stylization module adapted to perform stylization processing on a background image according to the at least one information element;
a fusion module adapted to fuse the foreground image with the stylized background image to obtain a processed frame image; and
an overlay module adapted to overwrite the pending frame image in the video data with the processed frame image to obtain processed video data.
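The apparatus of claim 8 decomposes the method into pluggable modules. A minimal sketch of that composition, with each module injected as a callable (the interfaces and the trivial test stand-ins are assumptions, not the patent's design):

```python
class VideoProcessor:
    """Wires the claim-8 modules into one pipeline."""

    def __init__(self, acquire, screen, split, extract, stylize, fuse):
        self.acquire, self.screen = acquire, screen
        self.split, self.extract = split, extract
        self.stylize, self.fuse = stylize, fuse

    def run(self, source, input_source):
        video = self.acquire(source)                  # acquisition module
        for i, frame in enumerate(video):
            if not self.screen(frame):                # screening module
                continue
            fg, bg = self.split(frame)                # segmentation module
            element = self.extract(input_source)      # extraction module
            styled = self.stylize(bg, element)        # stylization module
            video[i] = self.fuse(fg, styled)          # fusion + overlay
        return video
```

Keeping each stage behind a narrow callable interface lets the segmentation model or the stylization rule be swapped without touching the loop, which is the point of the module decomposition in the claim.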
9. A computing device, comprising a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus; and
the memory stores at least one executable instruction that causes the processor to perform operations corresponding to the video data processing method of any one of claims 1-7.
10. A computer storage medium having stored therein at least one executable instruction that causes a processor to perform operations corresponding to the video data processing method of any one of claims 1-7.
CN201710853663.1A 2017-09-20 2017-09-20 Video data processing method and device, and computing device Pending CN107633228A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710853663.1A CN107633228A (en) Video data processing method and device, and computing device

Publications (1)

Publication Number Publication Date
CN107633228A true CN107633228A (en) 2018-01-26

Family

ID=61102318

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710853663.1A Pending CN107633228A (en) Video data processing method and device, and computing device

Country Status (1)

Country Link
CN (1) CN107633228A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106231368A (en) * 2015-12-30 2016-12-14 深圳超多维科技有限公司 Main broadcaster's class interaction platform stage property rendering method and device, client
CN106303555A (en) * 2016-08-05 2017-01-04 深圳市豆娱科技有限公司 A kind of live broadcasting method based on mixed reality, device and system
CN107005624A (en) * 2014-12-14 2017-08-01 深圳市大疆创新科技有限公司 The method and system of Video processing
CN107172485A (en) * 2017-04-25 2017-09-15 北京百度网讯科技有限公司 A kind of method and apparatus for being used to generate short-sighted frequency

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151575A (en) * 2018-10-16 2019-01-04 Oppo广东移动通信有限公司 Multimedia data processing method and device, computer readable storage medium
CN109151575B (en) * 2018-10-16 2021-12-14 Oppo广东移动通信有限公司 Multimedia data processing method and device and computer readable storage medium
CN111862104A (en) * 2019-04-26 2020-10-30 利亚德照明股份有限公司 Video cutting method and system based on large-scale urban night scene
CN110189246A (en) * 2019-05-15 2019-08-30 北京字节跳动网络技术有限公司 Image stylization generation method, device and electronic equipment
CN110189246B (en) * 2019-05-15 2023-02-28 北京字节跳动网络技术有限公司 Image stylization generation method and device and electronic equipment
WO2020248767A1 (en) * 2019-06-11 2020-12-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method, system, and computer-readable medium for stylizing video frames
CN112312178A (en) * 2020-07-29 2021-02-02 上海和煦展览有限公司 Multimedia image processing system of multimedia exhibition room
CN112037121A (en) * 2020-08-19 2020-12-04 北京字节跳动网络技术有限公司 Picture processing method, device, equipment and storage medium
WO2022037634A1 (en) * 2020-08-19 2022-02-24 北京字节跳动网络技术有限公司 Picture processing method and apparatus, device, and storage medium
CN112969007A (en) * 2021-02-02 2021-06-15 东北大学 Video post-production method oriented to virtual three-dimensional background
CN112969007B (en) * 2021-02-02 2022-04-12 东北大学 Video post-production method oriented to virtual three-dimensional background

Similar Documents

Publication Publication Date Title
CN107633228A (en) Video data processing method and device, and computing device
CN107613360A (en) Video data real-time processing method and device, and computing device
JP7090113B2 (en) Line drawing generation
CN109670558A (en) Digital image completion using deep learning
CN107547804A (en) Video data processing method and device for realizing scene rendering, and computing device
Collomosse et al. Cubist style rendering from photographs
CN109964255B (en) 3D printing using 3D video data
Liang et al. Spatial-separated curve rendering network for efficient and high-resolution image harmonization
Grabli et al. Programmable style for NPR line drawing
CN107483892A (en) Video data real-time processing method and device, computing device
Zamuda et al. Vectorized procedural models for animated trees reconstruction using differential evolution
EP3591618A2 (en) Method and apparatus for converting 3d scanned objects to avatars
US20160086365A1 (en) Systems and methods for the conversion of images into personalized animations
Yang et al. A stylized approach for pencil drawing from photographs
CN108124489A (en) Information processing method and device, cloud processing equipment and computer program product
CN107547803A (en) Video segmentation result edge optimization processing method, device and computing device
CN111127309A (en) Portrait style transfer model training method, portrait style transfer method and device
CN107743263A (en) Video data real-time processing method and device, computing device
Governi et al. Digital bas-relief design: A novel shape from shading-based method
CN107590817A (en) Real-time data processing method and device for an image capture device, and computing device
CN107578369A (en) Video data processing method and device, and computing device
CN107566853A (en) Video data real-time processing method and device for realizing scene rendering, and computing device
CN107592475A (en) Video data processing method and device, and computing device
Stoppel et al. LinesLab: A Flexible Low‐Cost Approach for the Generation of Physical Monochrome Art
CN113487475B (en) Interactive image editing method, system, readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20180126