CN107613161A - Virtual world-based video data processing method and apparatus, and computing device - Google Patents

Virtual world-based video data processing method and apparatus, and computing device

Info

Publication number
CN107613161A
Authority
CN
China
Prior art keywords
frame image
video data
to-be-processed
specific object
foreground image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710948050.6A
Other languages
Chinese (zh)
Inventor
眭帆
眭一帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201710948050.6A priority Critical patent/CN107613161A/en
Publication of CN107613161A publication Critical patent/CN107613161A/en
Pending legal-status Critical Current


Abstract

The invention discloses a virtual world-based video data processing method and apparatus, and a computing device. The method includes: obtaining video data; screening the video data to obtain to-be-processed frame images that contain a specific object; performing scene segmentation on a to-be-processed frame image to obtain a foreground image of the specific object; rendering a three-dimensional scene; extracting key information of the specific object from the to-be-processed frame image and obtaining, according to the key information, position information of the foreground image in the three-dimensional scene; fusing the foreground image with the three-dimensional scene according to the position information to obtain a processed frame image; and overwriting the to-be-processed frame image with the processed frame image to obtain processed video data. The invention adopts a deep learning method to complete scene segmentation with high efficiency and high accuracy. It places no requirement on the user's technical skill and needs no manual editing; the video is processed automatically, which greatly saves the user's time.

Description

Virtual world-based video data processing method and apparatus, and computing device
Technical field
The present invention relates to the field of image processing, and in particular to a virtual world-based video data processing method and apparatus, and a computing device.
Background art
With the development of science and technology, image capture devices keep improving. Videos recorded with them are clearer, and their resolution and display quality have improved greatly. However, a recorded video is merely dull raw material and cannot satisfy users' growing demand for personalization. In the prior art, the user can manually post-process a recorded video, but this requires fairly advanced image-processing skills, takes a considerable amount of the user's time, and the processing is cumbersome and technically complex.
Therefore, a virtual world-based video data processing method is needed that satisfies users' personalization demands while lowering the technical threshold.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a virtual world-based video data processing method and apparatus, and a computing device, which overcome the above problems or at least partially solve them.
According to one aspect of the present invention, a virtual world-based video data processing method is provided, which includes:
obtaining video data;
screening the video data to obtain to-be-processed frame images containing a specific object;
performing scene segmentation on a to-be-processed frame image to obtain a foreground image of the specific object;
rendering a three-dimensional scene;
extracting key information of the specific object from the to-be-processed frame image, and obtaining position information of the foreground image in the three-dimensional scene according to the key information;
fusing the foreground image with the three-dimensional scene according to the position information to obtain a processed frame image;
overwriting the to-be-processed frame image with the processed frame image to obtain processed video data.
Optionally, obtaining video data further includes:
obtaining local video data and/or network video data.
Optionally, obtaining video data further includes:
obtaining video data composited from multiple local pictures and/or multiple network pictures.
Optionally, screening the video data to obtain to-be-processed frame images containing a specific object further includes:
screening the video data within a user-specified time period to obtain the to-be-processed frame images containing the specific object.
Optionally, the key information is key point information;
extracting key information of the specific object from the to-be-processed frame image, and obtaining position information of the foreground image in the three-dimensional scene according to the key information, further includes:
extracting key point information of the specific object from the to-be-processed frame image.
Optionally, extracting key information of the specific object from the to-be-processed frame image, and obtaining position information of the foreground image in the three-dimensional scene according to the key information, further includes:
calculating, according to the key point information of the specific object, the distance between at least two key points having a symmetric relationship;
obtaining depth position information of the foreground image in the three-dimensional scene according to the distance between the at least two key points having a symmetric relationship.
Optionally, extracting key information of the specific object from the to-be-processed frame image, and obtaining position information of the foreground image in the three-dimensional scene according to the key information, further includes:
obtaining position information of the specific object in the to-be-processed frame image according to the key point information;
obtaining left-right position information of the foreground image in the three-dimensional scene according to the position information of the specific object in the to-be-processed frame image.
Optionally, the method further includes:
obtaining terrain information of the three-dimensional scene;
obtaining up-down position information of the foreground image in the three-dimensional scene according to the terrain information of the three-dimensional scene and the left-right position information and/or depth position information of the foreground image in the three-dimensional scene.
Optionally, fusing the foreground image with the three-dimensional scene according to the position information to obtain a processed frame image further includes:
fusing the foreground image with the three-dimensional scene according to the depth position information, left-right position information and/or up-down position information of the foreground image in the three-dimensional scene, to obtain the processed frame image.
Optionally, before the processed frame image is obtained, the method further includes:
drawing an effect map in a specific region of the specific object in the foreground image.
Optionally, the three-dimensional scene includes weather information that changes in real time.
Optionally, the three-dimensional scene includes changeable lighting information.
Optionally, the method further includes:
uploading the processed video data to one or more cloud video platform servers, so that the cloud video platform servers display the video data on the cloud video platform.
According to another aspect of the present invention, a virtual world-based video data processing apparatus is provided, which includes:
an obtaining module, adapted to obtain video data;
a screening module, adapted to screen the video data and obtain to-be-processed frame images containing a specific object;
a segmentation module, adapted to perform scene segmentation on a to-be-processed frame image and obtain a foreground image of the specific object;
a rendering module, adapted to render a three-dimensional scene;
an extraction module, adapted to extract key information of the specific object from the to-be-processed frame image and obtain position information of the foreground image in the three-dimensional scene according to the key information;
a fusion module, adapted to fuse the foreground image with the three-dimensional scene according to the position information and obtain a processed frame image;
an overwriting module, adapted to overwrite the to-be-processed frame image with the processed frame image and obtain processed video data.
Optionally, the obtaining module is further adapted to:
obtain local video data and/or network video data.
Optionally, the obtaining module is further adapted to:
obtain video data composited from multiple local pictures and/or multiple network pictures.
Optionally, the screening module is further adapted to:
screen the video data within a user-specified time period and obtain the to-be-processed frame images containing the specific object.
Optionally, the key information is key point information;
the extraction module is further adapted to: extract key point information of the specific object from the to-be-processed frame image.
Optionally, the extraction module further includes:
a first position module, adapted to calculate, according to the key point information of the specific object, the distance between at least two key points having a symmetric relationship, and to obtain depth position information of the foreground image in the three-dimensional scene according to that distance.
Optionally, the extraction module further includes:
a second position module, adapted to obtain position information of the specific object in the to-be-processed frame image according to the key point information, and to obtain left-right position information of the foreground image in the three-dimensional scene according to the position information of the specific object in the to-be-processed frame image.
Optionally, the apparatus further includes:
a third position module, adapted to obtain terrain information of the three-dimensional scene, and to obtain up-down position information of the foreground image in the three-dimensional scene according to the terrain information of the three-dimensional scene and the left-right position information and/or depth position information of the foreground image in the three-dimensional scene.
Optionally, the fusion module is further adapted to:
fuse the foreground image with the three-dimensional scene according to the depth position information, left-right position information and/or up-down position information of the foreground image in the three-dimensional scene, to obtain the processed frame image.
Optionally, the apparatus further includes:
an effect map module, adapted to draw an effect map in a specific region of the specific object in the foreground image.
Optionally, the three-dimensional scene includes weather information that changes in real time.
Optionally, the three-dimensional scene includes changeable lighting information.
Optionally, the apparatus further includes:
an uploading module, adapted to upload the processed video data to one or more cloud video platform servers, so that the cloud video platform servers display the video data on the cloud video platform.
According to yet another aspect of the present invention, a computing device is provided, which includes: a processor, a memory, a communication interface and a communication bus, where the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the above virtual world-based video data processing method.
According to a further aspect of the present invention, a computer storage medium is provided, in which at least one executable instruction is stored; the executable instruction causes a processor to perform operations corresponding to the above virtual world-based video data processing method.
According to the virtual world-based video data processing method and apparatus, computing device and storage medium provided by the present invention, video data is obtained; the video data is screened to obtain to-be-processed frame images containing a specific object; scene segmentation is performed on a to-be-processed frame image to obtain a foreground image of the specific object; a three-dimensional scene is rendered; key information of the specific object is extracted from the to-be-processed frame image, and position information of the foreground image in the three-dimensional scene is obtained according to the key information; the foreground image is fused with the three-dimensional scene according to the position information to obtain a processed frame image; and the to-be-processed frame image is overwritten with the processed frame image to obtain processed video data. After the video data is screened and the to-be-processed frame images containing the specific object are obtained, the foreground image of the specific object is segmented out of each to-be-processed frame image. According to the key information of the specific object extracted from the frame image, the position of the foreground image in the three-dimensional scene is obtained, which makes it easy to fuse the foreground image with the three-dimensional scene; the processed video shows the specific object placed inside the three-dimensional scene. The invention adopts a deep learning method to complete scene segmentation with high efficiency and high accuracy. It places no requirement on the user's technical skill and needs no manual editing; the video is processed automatically, which greatly saves the user's time.
The above is merely an overview of the technical solutions of the present invention. To make the technical means of the present invention clearer and implementable according to the contents of the specification, and to make the above and other objects, features and advantages of the present invention more apparent, specific embodiments of the present invention are set forth below.
Brief description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a flow chart of a virtual world-based video data processing method according to an embodiment of the present invention;
Fig. 2 shows a flow chart of a virtual world-based video data processing method according to another embodiment of the present invention;
Fig. 3 shows a functional block diagram of a virtual world-based video data processing apparatus according to an embodiment of the present invention;
Fig. 4 shows a functional block diagram of a virtual world-based video data processing apparatus according to another embodiment of the present invention;
Fig. 5 shows a schematic structural diagram of a computing device according to an embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present invention will be understood more thoroughly and the scope of the disclosure can be fully conveyed to those skilled in the art.
In the present invention, the specific object may be any object in the image, such as a human body, a plant or an animal. The embodiments take a human body as an example, but are not limited to the human body.
Fig. 1 shows a flow chart of a virtual world-based video data processing method according to an embodiment of the present invention. As shown in Fig. 1, the method specifically includes the following steps:
Step S101: obtain video data.
The obtained video data may be the user's local video data or network video data. Alternatively, video data composited from multiple local pictures, from multiple network pictures, or from a mixture of local and network pictures may be obtained.
Step S102: screen the video data to obtain to-be-processed frame images containing a specific object.
The video data contains many frame images, so it needs to be screened. Since the present invention processes a specific object, the screening yields the to-be-processed frame images that contain the specific object.
Step S103: perform scene segmentation on a to-be-processed frame image to obtain a foreground image of the specific object.
The to-be-processed frame image contains the specific object, for example a human body. Scene segmentation mainly separates the specific object from the to-be-processed frame image to obtain a foreground image of the specific object; this foreground image may contain only the specific object.
When performing scene segmentation on the to-be-processed frame image, a deep learning method may be used. Deep learning is a class of machine-learning methods based on representation learning of data. An observation (for example, an image) can be represented in many ways, such as a vector of per-pixel intensity values, or more abstractly as a set of edges, regions of particular shapes, and so on. Some specific representations make it easier to learn tasks from examples (for example, face recognition or facial expression recognition). For instance, a deep-learning human-body segmentation method may be applied to the to-be-processed frame image to obtain a foreground image containing the human body. Furthermore, the foreground image obtained by segmentation may contain the whole human body or only most of it; this is not limited here.
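As an illustration of steps S102–S103, the following is a minimal Python sketch of per-frame screening and person segmentation. The detector and segmentation network (`detect_person`, `segment_person`) are hypothetical placeholders, not part of the patent; OpenCV is assumed only for video decoding.

```python
import cv2

def screen_and_segment(video_path, detect_person, segment_person):
    """Yield (frame_index, frame, foreground_mask) for frames containing a person.

    detect_person(frame) -> bool and segment_person(frame) -> float mask in [0, 1]
    stand in for a detector and a deep-learning segmentation network.
    """
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if detect_person(frame):          # screening: keep only frames with the specific object
            mask = segment_person(frame)  # scene segmentation: per-pixel foreground probability
            yield idx, frame, mask
        idx += 1
    cap.release()
```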
Step S104: render a three-dimensional scene.
The three-dimensional scene may be a purely virtual scene, or a real scene converted into three dimensions. It may contain various objects such as forests, waterfalls and lakes; its specific content is not limited.
The three-dimensional scene may further include weather information that changes in real time, such as cloudy, sunny or rainy weather, which makes the rendered scene more realistic and its presentation more vivid. It may also include changeable lighting information: sunlight on a sunny day, lightning when it rains, or fireflies glowing in a dark scene (the fireflies may be set to fly at a specified location, or to fly around the specific object after the subsequent fusion), so that the whole scene is more harmonious.
Any rendering technique may be used to render the three-dimensional scene; this is not limited here.
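Purely as an illustration of how such scene parameters might be organized, a scene configuration could be kept in a small data class and handed to whatever renderer is used. The structure and field names below are assumptions for this sketch, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class SceneConfig:
    """Hypothetical container for the 3D scene settings described above."""
    background: str = "forest"      # e.g. forest, waterfall, lake
    weather: str = "sunny"          # cloudy / sunny / rainy; may be updated in real time
    lighting: str = "sunlight"      # sunlight / lightning / fireflies
    firefly_anchor: str = "object"  # fireflies fly at a fixed spot or around the fused object

# Example: a rainy scene with lightning, passed to the chosen rendering engine.
cfg = SceneConfig(weather="rainy", lighting="lightning")
```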
Step S105: extract key information of the specific object from the to-be-processed frame image, and obtain position information of the foreground image in the three-dimensional scene according to the key information.
The key information extracted from the to-be-processed frame image may specifically be key point information, key region information and/or key line information. The embodiments of the invention are described using key point information as an example, but the key information of the invention is not limited to key points. Using key point information improves the speed and efficiency of obtaining the position information: the position information can be obtained directly from the key points, without further calculation or analysis of more complex key information. At the same time, key points are easy to extract and can be extracted accurately, so the resulting position information is more precise. Since the position information is generally obtained from key points at the edge of the specific object, key point information located on the edge of the specific object may be extracted from the to-be-processed frame image. When the specific object is a human body, the extracted key points include key points on the edge of the face, key points on the edge of the body, and so on.
The position information in the three-dimensional scene specifically includes left-right, up-down and depth position information, corresponding respectively to the x-axis, y-axis and z-axis directions of the scene. According to the key point information of the specific object extracted from the to-be-processed frame image, the position information of the foreground image in the three-dimensional scene can be determined accordingly.
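A small sketch of the extraction in step S105, under the assumption that some landmark detector (the hypothetical `detect_landmarks` below) returns named 2D key points on the face and body edges. It only gathers the raw quantities that the later position calculations use.

```python
import numpy as np

def extract_key_points(frame, detect_landmarks):
    """Gather edge key points of the specific object plus two summary quantities.

    detect_landmarks(frame) is assumed to return a dict of named 2D points, e.g.
    {"left_eye_corner": (x, y), "right_eye_corner": (x, y), "body_edge": [(x, y), ...]}.
    """
    pts = detect_landmarks(frame)
    edge = np.array(pts["body_edge"], dtype=float)
    center_x = edge[:, 0].mean()              # used later for the left-right position
    eye_l = np.array(pts["left_eye_corner"], dtype=float)
    eye_r = np.array(pts["right_eye_corner"], dtype=float)
    eye_dist = np.linalg.norm(eye_l - eye_r)  # symmetric-pair distance, used for depth
    return pts, center_x, eye_dist
```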
Step S106: fuse the foreground image with the three-dimensional scene according to the position information, to obtain a processed frame image.
According to the position information, the foreground image is placed at the corresponding position in the three-dimensional scene and fused with it, yielding the processed frame image. To make the foreground image blend better with the three-dimensional scene, when the to-be-processed frame image is segmented, the edge of the resulting foreground is made translucent, blurring the outline of the specific object so that the fusion looks more natural.
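The translucent-edge treatment can be illustrated with a simple alpha-feathering sketch (a common approach, assumed here rather than prescribed by the patent): the segmentation mask is blurred near its boundary and then used as a per-pixel blending weight.

```python
import cv2
import numpy as np

def feather_and_blend(scene_rgb, foreground_rgb, mask, feather_px=7):
    """Blend the segmented foreground over the rendered scene with a soft edge.

    mask is a float array in [0, 1]; blurring it makes the object's outline
    translucent so the fusion looks natural.
    """
    k = feather_px | 1                            # Gaussian kernel size must be odd
    alpha = cv2.GaussianBlur(mask.astype(np.float32), (k, k), 0)
    alpha = alpha[..., None]                      # broadcast over the RGB channels
    fused = alpha * foreground_rgb.astype(np.float32) + (1 - alpha) * scene_rgb.astype(np.float32)
    return fused.astype(np.uint8)
```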
Step S107: overwrite the to-be-processed frame image with the processed frame image, to obtain the processed video data.
The processed frame image directly overwrites the corresponding to-be-processed frame image, so the processed video data is obtained directly.
According to the virtual world-based video data processing method provided by the present invention, video data is obtained; the video data is screened to obtain to-be-processed frame images containing a specific object; scene segmentation is performed on a to-be-processed frame image to obtain a foreground image of the specific object; a three-dimensional scene is rendered; key information of the specific object is extracted from the to-be-processed frame image, and position information of the foreground image in the three-dimensional scene is obtained according to the key information; the foreground image is fused with the three-dimensional scene according to the position information to obtain a processed frame image; and the to-be-processed frame image is overwritten with the processed frame image to obtain the processed video data. After the video data is screened and the to-be-processed frame images containing the specific object are obtained, the foreground image of the specific object is segmented out of each to-be-processed frame image. According to the key information of the specific object extracted from the frame image, the position of the foreground image in the three-dimensional scene is obtained, which makes it easy to fuse the two; the processed video shows the specific object located in the three-dimensional scene. The invention adopts a deep learning method to complete scene segmentation with high efficiency and high accuracy. It places no requirement on the user's technical skill and needs no manual editing; the video is processed automatically, which greatly saves the user's time.
Fig. 2 shows a flow chart of a virtual world-based video data processing method according to another embodiment of the present invention. As shown in Fig. 2, the method specifically includes the following steps:
Step S201: obtain video data.
The obtained video data may be the user's local video data or network video data. Alternatively, video data composited from multiple local pictures, from multiple network pictures, or from a mixture of local and network pictures may be obtained.
Step S202: screen the video data within a user-specified time period to obtain to-be-processed frame images containing a specific object.
The video data contains many frame images and needs to be screened. When screening, only the video data within a time period specified by the user may be screened, and other time periods may be skipped. For example, the second half of a video is often its climax, so the user may specify that period; only the video data within the user-specified time period is then screened, yielding the to-be-processed frame images containing the specific object in that period.
Step S203: perform scene segmentation on a to-be-processed frame image to obtain a foreground image of the specific object.
Step S204: render a three-dimensional scene.
The above steps can be understood by referring to the description of steps S103–S104 in the embodiment of Fig. 1 and are not repeated here.
Step S205: draw an effect map in a specific region of the specific object in the foreground image.
When the foreground image obtained for the specific object contains only part of the object, for example only the upper half of a human body, an effect map can be drawn in a specific region of the specific object in the foreground image, either to cover the missing part or to beautify the image. Specifically, an effect map such as a cloud may be drawn below the upper body to create the effect of the body floating in the sky. Different effect maps may be chosen depending on the three-dimensional scene and the specific object, so that the effect map echoes the style and display effect of the scene and the object and the overall result looks consistent.
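A rough sketch of drawing such an effect map, assuming the effect map is an RGBA image (`cloud_rgba`) and that the target region is given as a bounding box in frame coordinates; these names are illustrative, not taken from the patent.

```python
import cv2
import numpy as np

def draw_effect_map(frame_rgb, cloud_rgba, region):
    """Composite an RGBA effect map (e.g. a cloud) into a region of the frame.

    region is (x, y, w, h) in pixel coordinates, e.g. just below the upper body.
    """
    x, y, w, h = region
    patch = cv2.resize(cloud_rgba, (w, h))
    alpha = patch[..., 3:4].astype(np.float32) / 255.0
    roi = frame_rgb[y:y + h, x:x + w].astype(np.float32)
    frame_rgb[y:y + h, x:x + w] = (alpha * patch[..., :3] + (1 - alpha) * roi).astype(np.uint8)
    return frame_rgb
```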
Step S206: extract key point information of the specific object from the to-be-processed frame image.
The extracted key point information includes the key points on the edge of the specific object, and may also include key points in specific regions of the object, for convenient subsequent calculation.
Step S207: calculate, according to the key point information of the specific object, the distance between at least two key points having a symmetric relationship.
Step S208: obtain depth position information of the foreground image in the three-dimensional scene according to the distance between the at least two key points having a symmetric relationship.
Because the distance between the specific object and the image capture device varies, the size of the specific object in the to-be-processed frame image also varies. When a human body is far from the capture device it appears small in the frame, and when it is close it appears large. From the key point information of the specific object, the distance between at least two symmetric key points can be calculated, for example the distance between the key points at the two eye corners on the edge of the face. Combining this distance with the real-world size of the specific object, the distance between the specific object and the image capture device can be derived. From that distance, the depth position of the foreground image in the three-dimensional scene is obtained, i.e. the specific depth at which the foreground image is placed when it is fused with the scene. For example, if the eye-corner distance shows that the body is far from the capture device, the body appears small in the frame and the segmented foreground image is also small, so the foreground image is placed deep in the three-dimensional scene, giving the effect of a small figure far inside the scene. Conversely, if the eye-corner distance shows that the body is close to the capture device, the body appears large in the frame, and the foreground image is placed nearer the front of the scene, giving the effect of a large figure near the viewer. In short, the depth position of the foreground image in the three-dimensional scene is tied to the distance between the at least two symmetric key points.
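A minimal sketch of steps S207–S208. The mapping from the symmetric key-point distance to a scene depth is not specified in the patent, so the inverse-proportional rule and the constants below are assumptions for illustration only.

```python
import numpy as np

def depth_from_symmetric_points(p_left, p_right, ref_pixel_dist=80.0,
                                near_z=2.0, far_z=30.0):
    """Map the pixel distance between two symmetric key points to a scene depth.

    A larger pixel distance (object close to the camera) gives a smaller depth,
    a smaller distance gives a larger depth; values are clamped to [near_z, far_z].
    """
    d = np.linalg.norm(np.asarray(p_left, float) - np.asarray(p_right, float))
    z = near_z * ref_pixel_dist / max(d, 1e-6)   # inverse-proportional placeholder rule
    return float(np.clip(z, near_z, far_z))

# Eye corners 40 px apart -> placed twice as deep as the reference distance.
z = depth_from_symmetric_points((100, 120), (140, 120))   # z == 4.0
```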
Step S209: obtain position information of the specific object in the to-be-processed frame image according to the key point information.
Step S210: obtain left-right position information of the foreground image in the three-dimensional scene according to the position information of the specific object in the to-be-processed frame image.
The position information can be calculated from the key point information of the specific object and gives the specific position of the object in the to-be-processed frame image. Here it includes the object's left-right position, up-down position, rotation angle and so on within the frame. From the position of the specific object in the frame, the left-right position of the foreground image in the three-dimensional scene can be obtained; the two correspond to each other. Furthermore, the up-down position and rotation angle of the foreground image in the three-dimensional scene may also be set according to the object's up-down position and rotation angle in the frame.
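A sketch of step S210, under the assumption that the scene's horizontal extent at the chosen depth is known (`scene_half_width`); the linear mapping is illustrative, not taken from the patent.

```python
def left_right_from_frame(center_x, frame_width, scene_half_width=10.0):
    """Map the object's horizontal center in the frame to a scene x-coordinate.

    center_x = 0 maps to the left edge of the scene and frame_width to the right,
    so the foreground image stays on the same side as in the original frame.
    """
    t = center_x / float(frame_width)            # normalised 0..1 across the frame
    return (2.0 * t - 1.0) * scene_half_width    # -half_width .. +half_width

x = left_right_from_frame(center_x=960, frame_width=1920)   # object centred -> x = 0.0
```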
Step S211: obtain terrain information of the three-dimensional scene.
Step S212: obtain up-down position information of the foreground image in the three-dimensional scene according to the terrain information of the scene and the left-right position information and/or depth position information of the foreground image.
The terrain information of the three-dimensional scene includes the left-right, up-down and depth positions of terrain features such as steps, stones and lakes. Combining the terrain information with the left-right and depth position of the foreground image, the up-down position of the foreground image in the scene can be obtained. Specifically, the left-right and depth position of the foreground image first determine which terrain lies at that location. If the terrain is a step, the up-down position of the foreground image is adjusted according to the up-down position of the step, so that the specific object does not end up embedded in the middle of the step. If the terrain is a stone, the up-down position is adjusted according to the stone's up-down position, so that the object is not stuck inside the stone; the object may instead be placed on top of the stone or behind it. The up-down position of the foreground image in the three-dimensional scene therefore changes with its left-right and/or depth position and with the corresponding terrain, and the specific adjustment is configured according to the desired presentation.
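A simple sketch of step S212, assuming the terrain is available as a height-map lookup `terrain_height(x, z)`; this representation is an assumption for illustration, the patent does not prescribe one.

```python
def vertical_from_terrain(x, z, terrain_height, foot_offset=0.0):
    """Place the foreground image so its bottom rests on the terrain at (x, z).

    terrain_height(x, z) is assumed to return the ground height of the scene
    (steps, stones, lake surface, ...) at that left-right/depth position.
    """
    ground_y = terrain_height(x, z)
    return ground_y + foot_offset   # e.g. a small offset to stand on top of a stone

# Usage with a toy flat-ground height map:
y = vertical_from_terrain(x=0.0, z=4.0, terrain_height=lambda x, z: 0.0)
```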
Step S213: fuse the foreground image with the three-dimensional scene according to its depth position information, left-right position information and/or up-down position information, to obtain the processed frame image.
According to the depth, left-right and/or up-down position of the foreground image in the three-dimensional scene obtained above, the foreground image is placed at the corresponding position in the scene and fused with it, yielding the processed frame image.
Step S214: overwrite the to-be-processed frame image with the processed frame image to obtain the processed video data.
The processed frame image directly overwrites the corresponding to-be-processed frame image, so the processed video data is obtained directly.
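A sketch of the overwrite in step S214, assuming the processed video is re-encoded with OpenCV; `processed_frames` stands for the fused frames produced by the illustrative helpers in the earlier sketches.

```python
import cv2

def overwrite_frames(in_path, out_path, processed_frames):
    """Write the video back out, replacing the frames listed in processed_frames.

    processed_frames maps frame index -> processed BGR image (fused with the
    3D scene); all other frames are copied unchanged.
    """
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(processed_frames.get(idx, frame))
        idx += 1
    cap.release()
    writer.release()
```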
Step S215: upload the processed video data to one or more cloud video platform servers, so that the cloud video platform servers display the video data on the cloud video platform.
The processed video data can be stored locally for the user to watch, or uploaded directly to one or more cloud video platform servers, such as those of iQiyi, Youku or Kuai Video, so that the servers display the video data on the cloud video platform.
According to the virtual world-based video data processing method provided by the present invention, key point information of the specific object is extracted from the to-be-processed frame image, and the depth and left-right position of the foreground image in the three-dimensional scene are obtained from it. The up-down position of the foreground image is then adjusted according to the terrain information of the scene, so that the specific object can be fused with the three-dimensional scene in a reasonable way and the fused video presents a realistic display effect, avoiding the display errors that would occur if the specific object were simply placed in the scene without considering the terrain. An effect map is also drawn in a specific region of the specific object in the foreground image to enrich and beautify its display. Furthermore, the processed video data can be uploaded directly to one or more cloud video platform servers so that they display it on the cloud video platform. The invention places no requirement on the user's technical skill and needs no manual editing; the video is processed automatically, which greatly saves the user's time.
Fig. 3 shows a functional block diagram of a virtual world-based video data processing apparatus according to an embodiment of the present invention. As shown in Fig. 3, the apparatus includes the following modules:
An obtaining module 301, adapted to obtain video data.
The video data obtained by the obtaining module 301 may be the user's local video data or network video data. The obtaining module 301 may also obtain video data composited from multiple local pictures, from multiple network pictures, or from a mixture of local and network pictures.
A screening module 302, adapted to screen the video data and obtain to-be-processed frame images containing a specific object.
The video data contains many frame images, so the screening module 302 needs to screen it. Since the present invention processes a specific object, the screening module 302 obtains the to-be-processed frame images containing the specific object after screening.
When screening, the screening module 302 may also screen only the video data within a user-specified time period and skip other periods. For example, the second half of a video is often its climax, so the user may specify that period; the screening module 302 then screens only that period and obtains the to-be-processed frame images containing the specific object within it.
A segmentation module 303, adapted to perform scene segmentation on a to-be-processed frame image and obtain a foreground image of the specific object.
The to-be-processed frame image contains the specific object, for example a human body. The segmentation module 303 performs scene segmentation on the to-be-processed frame image, mainly separating the specific object from it, and obtains a foreground image of the specific object; this foreground image may contain only the specific object.
When performing scene segmentation, the segmentation module 303 may use a deep learning method. Deep learning is a class of machine-learning methods based on representation learning of data. An observation (for example, an image) can be represented in many ways, such as a vector of per-pixel intensity values, or more abstractly as a set of edges, regions of particular shapes, and so on. Some specific representations make it easier to learn tasks from examples (for example, face recognition or facial expression recognition). For instance, the segmentation module 303 may apply a deep-learning human-body segmentation method to the to-be-processed frame image and obtain a foreground image containing the human body. Furthermore, the foreground image obtained may contain the whole human body or only most of it; this is not limited here.
A rendering module 304, adapted to render a three-dimensional scene.
The three-dimensional scene rendered by the rendering module 304 may be a purely virtual scene, or a real scene converted into three dimensions. It may contain various objects such as forests, waterfalls and lakes; its specific content is not limited.
The three-dimensional scene rendered by the rendering module 304 may further include weather information that changes in real time, such as cloudy, sunny or rainy weather, which makes the scene more realistic and its presentation more vivid. It may also include changeable lighting information: sunlight on a sunny day, lightning when it rains, or fireflies glowing in a dark scene (the fireflies drawn by the rendering module 304 may be set to fly at a specified location, or drawn flying around the specific object when the fusion module 306 later performs the fusion), so that the whole scene is more harmonious. The rendering module 304 may use any rendering technique; this is not limited here.
An extraction module 305, adapted to extract key information of the specific object from the to-be-processed frame image and obtain position information of the foreground image in the three-dimensional scene according to the key information.
The key information extracted by the extraction module 305 may specifically be key point information, key region information and/or key line information. The embodiments of the invention are described using key point information as an example, but the key information is not limited to key points. Using key point information improves the speed and efficiency of obtaining the position information: the position information can be obtained directly from the key points, without further calculation or analysis of more complex key information. At the same time, key points are easy to extract and can be extracted accurately, so the resulting position information is more precise. Since the position information is generally obtained from key points at the edge of the specific object, the extraction module 305 may extract key point information located on the edge of the specific object. When the specific object is a human body, the key points extracted by the extraction module 305 include key points on the edge of the face, key points on the edge of the body, and so on.
The position information in the three-dimensional scene specifically includes left-right, up-down and depth position information, corresponding respectively to the x-axis, y-axis and z-axis directions of the scene. According to the key point information of the specific object extracted from the to-be-processed frame image, the extraction module 305 can determine the position information of the foreground image in the three-dimensional scene accordingly.
A fusion module 306, adapted to fuse the foreground image with the three-dimensional scene according to the position information and obtain a processed frame image.
According to the position information, the fusion module 306 places the foreground image at the corresponding position in the three-dimensional scene and fuses it with the scene, obtaining the processed frame image. To let the fusion module 306 blend the foreground image better with the scene, when the segmentation module 303 segments the to-be-processed frame image, the edge of the resulting foreground is made translucent, blurring the outline of the specific object so that the fusion is more natural.
An overwriting module 307, adapted to overwrite the to-be-processed frame image with the processed frame image and obtain the processed video data.
The overwriting module 307 directly overwrites the corresponding to-be-processed frame image with the processed frame image, so the processed video data is obtained directly.
According to the virtual world-based video data processing apparatus provided by the present invention, video data is obtained; the video data is screened to obtain to-be-processed frame images containing a specific object; scene segmentation is performed on a to-be-processed frame image to obtain a foreground image of the specific object; a three-dimensional scene is rendered; key information of the specific object is extracted from the to-be-processed frame image, and position information of the foreground image in the three-dimensional scene is obtained according to the key information; the foreground image is fused with the three-dimensional scene according to the position information to obtain a processed frame image; and the to-be-processed frame image is overwritten with the processed frame image to obtain the processed video data. After the video data is screened and the to-be-processed frame images containing the specific object are obtained, the foreground image of the specific object is segmented out of each to-be-processed frame image. According to the key information of the specific object extracted from the frame image, the position of the foreground image in the three-dimensional scene is obtained, which makes it easy to fuse the two; the processed video shows the specific object located in the three-dimensional scene. The invention adopts a deep learning method to complete scene segmentation with high efficiency and high accuracy. It places no requirement on the user's technical skill and needs no manual editing; the video is processed automatically, which greatly saves the user's time.
Fig. 4 shows the function of the video data processing apparatus in accordance with another embodiment of the present invention based on virtual world Block diagram.As shown in figure 4, being with Fig. 3 differences, the video data processing apparatus based on virtual world also includes:
Effect textures are drawn in textures module 308, the specific region suitable for the special object in foreground image.
Obtain being directed in the foreground image of special object when splitting module 303, when special object is only a part, such as divide Cut the foreground image that module 303 obtains only including upper half of human body.Now, textures module 308 can be in the specific of foreground image Effect textures are drawn in the specific region of object, are blocked or are beautified.Specifically, textures module 308 can be in upper half of human body Lower section draw effect textures such as cloud, float skyborne effect to form human body.Effect textures can be according to three dimensional field Scape, the difference of special object are arranged to different effect textures so that effect textures and three-dimensional scenic, special object style, Display effect etc. is mutually echoed, and overall consistent effect is presented.
Extraction module 305 further comprises first position module 309 and second place module 310.
First position module 309, suitable for the key point information according to special object, calculate at least two with symmetric relation The distance between individual key point;According to the distance between at least two key points with symmetric relation, obtain foreground image and exist Depth location information in three-dimensional scenic.
Because special object is different from the distance of image capture device, cause special object in pending two field picture Size is also inconsistent.As human body and image capture device it is distant when, human body presented in pending two field picture it is smaller, Human body and image capture device it is closer to the distance when, human body presents larger in pending two field picture.First position module 309 According to the key point information of special object, the distance between at least two key points with symmetric relation can be calculated.Such as First position module 309 calculate where face edge Liang Ge canthus untill the distance between key point.According to symmetrical The distance between at least two key points of relation, with reference to the actual range of special object, it can be deduced that special object and image The distance of collecting device.First position module 309 can obtain depth location of the foreground image in three-dimensional scenic according to distance When information, i.e. first position module 309 obtain foreground image and merged with three-dimensional scenic, foreground image, which is arranged in three-dimensional scenic, to be had The depth location information of body.Between key point untill as where first position module 309 calculates face edge Liang Ge canthus Distance, obtain the distant of human body and image capture device, due to human body presented in pending two field picture it is smaller, point The foreground image for cutting to obtain is also smaller, and first position module 309 obtains the depth location that foreground image is arranged in three-dimensional scenic Also it is relatively deep, show foreground image deep place in three-dimensional scenic, human body also less effect in three-dimensional scenic.Or the One position module 309 calculate where face edge Liang Ge canthus untill the distance between key point, obtain human body and image Collecting device it is closer to the distance, because human body presents larger in pending two field picture, first position module 309 obtains prospect The depth location that image is arranged in three-dimensional scenic can be earlier, shows foreground image position more forward in three-dimensional scenic Put, human body effect also larger in three-dimensional scenic.Depth location information of the foreground image in three-dimensional scenic is symmetrical with having The distance between at least two key points of relation correlation.
Second place module 310, suitable for according to key point information, obtaining position of the special object in pending two field picture Confidence ceases;According to positional information of the special object in pending two field picture, a left side of the foreground image in three-dimensional scenic is obtained Right positional information.
Second place module 310 is calculated by the key point information of special object, obtains special object pending Two field picture in specific position.Positional information of the special object in pending two field picture exists including special object herein The positional informations such as right position information, upper-lower position information, special object rotation angle information in pending two field picture.The Positional information of two position modules 310 according to special object in pending two field picture, can obtain foreground image in three-dimensional Right position information in scene.Wherein, right position information of the foreground image in three-dimensional scenic is being waited to locate with special object Right position information in the two field picture of reason is corresponding.Further, second place module 310 can also treated according to special object Upper-lower position information, rotation angle information in the two field picture of processing, upper-lower position of the foreground image in three-dimensional scenic is set Information, rotation angle information etc..
3rd position module 311, suitable for obtaining the terrain information of three-dimensional scenic;According to the terrain information of three-dimensional scenic, preceding Right position information and/or depth location information of the scape image in three-dimensional scenic, obtain foreground image in three-dimensional scenic Upper-lower position information.
3rd position module 311 obtain three-dimensional scenic terrain information, wherein, terrain information include as step, stone, The left and right in three-dimensional scenic of the various landform such as lake, upper and lower, depth location information.3rd position module 311 is according to three-dimensional The terrain information of scene, right position information of the comprehensive foreground image in three-dimensional scenic, depth location information etc., can be obtained Upper-lower position information of the foreground image in three-dimensional scenic.Specifically, the 3rd position module 311 according to foreground image in three dimensional field Right position information, depth location information in scape, it can first determine current right position information, three at depth location information Tie up the landform of scene.When the landform is step, the 3rd position module 311 is according to the upper-lower position information of step topography, adjustment Upper-lower position information of the foreground image in three-dimensional scenic, avoids the occurrence of the situation that special object is arranged among step.Or For person when the landform is stone, the 3rd position module 311 is according to the upper-lower position information of stone landform, and adjustment foreground image is three The upper-lower position information in scene is tieed up, the situation that special object is stuck in stone is avoided the occurrence of, special object can be set On stone or special object is arranged on the position at stone rear.Upper-lower position information meeting of the foreground image in three-dimensional scenic With different, the corresponding three-dimensional scenic of right position information of the special object in three-dimensional scenic and/or depth location information Terrain information it is different and change.Specific change is configured according to performance.
Fusion Module 306 can be believed according to depth location of the foreground image that above-mentioned each module obtains in three-dimensional scenic Breath, right position information and/or upper-lower position information, foreground image and three-dimensional scenic are subjected to fusion treatment, after obtaining processing Two field picture.
The uploading module 312 is adapted to upload the processed video data to one or more cloud video platform servers, so that the cloud video platform servers display the video data on the cloud video platform.
The processed video data may be stored locally for the user to watch, or the uploading module 312 may upload it directly to one or more cloud video platform servers, such as the iqiyi.com, youku.com or Kuai Video cloud video platform servers, so that the cloud video platform servers display the video data on the cloud video platform.
According to the video data processing apparatus based on a virtual world provided by the present invention, the key point information of the special object is extracted from the frame image to be processed, and the depth and left-right position information of the foreground image in the three-dimensional scene are obtained from that key point information. The up-down position of the foreground image in the three-dimensional scene is then adjusted according to the terrain information of the three-dimensional scene, so that the special object can be fused with the three-dimensional scene in a reasonable way and the video obtained after fusion presents a realistic display effect, avoiding the display errors that would arise from simply placing the special object in the three-dimensional scene without considering its terrain information. Meanwhile, effect maps may also be drawn on specific regions of the special object in the foreground image, enriching and beautifying the display effect of the special object. Further, the processed video data may be uploaded directly to one or more cloud video platform servers, so that the cloud video platform servers display the video data on the cloud video platform. The present invention places no demands on the user's technical level, does not require the user to process the video manually, realizes the processing of the video automatically, and greatly saves the user's time.
The present invention also provides a non-volatile computer storage medium storing at least one executable instruction, and the computer executable instruction can execute the video data processing method based on a virtual world in any of the above method embodiments.
Fig. 5 shows a schematic structural diagram of a computing device according to an embodiment of the present invention. The specific embodiments of the present invention do not limit the specific implementation of the computing device.
As shown in Fig. 5, the computing device may include: a processor (processor) 502, a communication interface (Communications Interface) 504, a memory (memory) 506 and a communication bus 508.
Wherein:
The processor 502, the communication interface 504 and the memory 506 communicate with one another through the communication bus 508.
The communication interface 504 is used for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is used for executing the program 510, and may specifically perform the relevant steps in the above embodiments of the video data processing method based on a virtual world.
Specifically, the program 510 may include program code, and the program code includes computer operation instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs together with one or more ASICs.
The memory 506 is used for storing the program 510. The memory 506 may include a high-speed RAM memory, and may also include a non-volatile memory (non-volatile memory), for example at least one magnetic disk memory.
The program 510 may specifically be used to cause the processor 502 to execute the video data processing method based on a virtual world in any of the above method embodiments. For the specific implementation of each step in the program 510, reference may be made to the corresponding description of the corresponding steps and units in the above embodiments of the video data processing method based on a virtual world, which will not be repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the devices and modules described above may refer to the corresponding process descriptions in the foregoing method embodiments, and will not be repeated here.
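Purely for orientation, the kind of per-frame loop that a program such as program 510 drives could be sketched as follows; every callable below is a placeholder for the modules described above, not an API defined by the patent.

```python
def process_video(frames, contains_object, segment, render_scene, keypoints_of, fuse_frame):
    """Illustrative end-to-end loop over the video data: screen frames for the
    special object, segment the foreground, position it in the 3D scene, fuse."""
    processed = []
    for frame in frames:
        if not contains_object(frame):          # screening: keep only frames with the object
            processed.append(frame)
            continue
        foreground, mask = segment(frame)       # scene segmentation -> foreground image + mask
        keypoints = keypoints_of(frame)         # key point information of the special object
        scene = render_scene()                  # draw the three-dimensional scene
        processed.append(fuse_frame(scene, foreground, mask, keypoints))
    return processed                            # processed frames replace the originals
```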
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system or other device. Various general-purpose systems may also be used with the teaching herein, and from the description above the structure required for constructing such systems is apparent. Moreover, the present invention is not directed to any particular programming language. It should be understood that the content of the invention described herein may be implemented using various programming languages, and that the above description of a specific language is made in order to disclose the preferred embodiments of the invention.
In the specification provided here, numerous specific details are set forth. It should be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques are not shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to simplify the disclosure and to aid the understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the invention the various features of the invention are sometimes grouped together into a single embodiment, figure or description thereof. However, the method of the disclosure should not be construed as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in fewer than all features of a single disclosed embodiment. The claims following the detailed description are therefore hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will appreciate that the modules in the devices in the embodiments may be adaptively changed and arranged in one or more devices different from those of the embodiments. The modules, units or components in the embodiments may be combined into one module, unit or component, and may furthermore be divided into a plurality of sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose.
In addition, those skilled in the art will understand that, although some embodiments described herein include some features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the video data processing apparatus based on a virtual world according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.

Claims (10)

1. A video data processing method based on a virtual world, comprising:
acquiring video data;
screening the video data to obtain frame images to be processed that contain a special object;
performing scene segmentation processing on the frame image to be processed to obtain a foreground image for the special object;
drawing a three-dimensional scene;
extracting key information of the special object from the frame image to be processed, and obtaining position information of the foreground image in the three-dimensional scene according to the key information;
fusing the foreground image and the three-dimensional scene according to the position information to obtain a processed frame image;
covering the frame image to be processed with the processed frame image to obtain processed video data.
2. The method according to claim 1, wherein the acquiring video data further comprises:
acquiring local video data and/or network video data.
3. The method according to claim 1, wherein the acquiring video data further comprises:
acquiring video data synthesized from a plurality of local pictures and/or a plurality of network pictures.
4. The method according to any one of claims 1-3, wherein the screening the video data to obtain frame images to be processed that contain a special object further comprises:
screening the video data within a time period specified by the user to obtain frame images to be processed that contain the special object.
5. The method according to any one of claims 1-4, wherein the key information is key point information;
the extracting key information of the special object from the frame image to be processed and obtaining position information of the foreground image in the three-dimensional scene according to the key information further comprises:
extracting, from the frame image to be processed, key point information located at the special object.
6. The method according to claim 5, wherein the extracting key information of the special object from the frame image to be processed and obtaining position information of the foreground image in the three-dimensional scene according to the key information further comprises:
calculating, according to the key point information of the special object, the distance between at least two key points having a symmetric relationship;
obtaining depth position information of the foreground image in the three-dimensional scene according to the distance between the at least two key points having a symmetric relationship.
7. The method according to claim 5, wherein the extracting key information of the special object from the frame image to be processed and obtaining position information of the foreground image in the three-dimensional scene according to the key information further comprises:
obtaining, according to the key point information, position information of the special object in the frame image to be processed;
obtaining left-right position information of the foreground image in the three-dimensional scene according to the position information of the special object in the frame image to be processed.
8. A video data processing apparatus based on a virtual world, comprising:
an acquisition module, adapted to acquire video data;
a screening module, adapted to screen the video data to obtain frame images to be processed that contain a special object;
a segmentation module, adapted to perform scene segmentation processing on the frame image to be processed to obtain a foreground image for the special object;
a drawing module, adapted to draw a three-dimensional scene;
an extraction module, adapted to extract key information of the special object from the frame image to be processed and obtain position information of the foreground image in the three-dimensional scene according to the key information;
a fusion module, adapted to fuse the foreground image and the three-dimensional scene according to the position information to obtain a processed frame image;
a covering module, adapted to cover the frame image to be processed with the processed frame image to obtain processed video data.
9. A computing device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the video data processing method based on a virtual world according to any one of claims 1-7.
10. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the video data processing method based on a virtual world according to any one of claims 1-7.
CN201710948050.6A 2017-10-12 2017-10-12 Video data handling procedure and device, computing device based on virtual world Pending CN107613161A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710948050.6A CN107613161A (en) 2017-10-12 2017-10-12 Video data handling procedure and device, computing device based on virtual world

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710948050.6A CN107613161A (en) 2017-10-12 2017-10-12 Video data handling procedure and device, computing device based on virtual world

Publications (1)

Publication Number Publication Date
CN107613161A true CN107613161A (en) 2018-01-19

Family

ID=61068115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710948050.6A Pending CN107613161A (en) 2017-10-12 2017-10-12 Video data handling procedure and device, computing device based on virtual world

Country Status (1)

Country Link
CN (1) CN107613161A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101110908A (en) * 2007-07-20 2008-01-23 西安宏源视讯设备有限责任公司 Foreground depth of field position identification device and method for virtual studio system
CN101309389A (en) * 2008-06-19 2008-11-19 深圳华为通信技术有限公司 Method, apparatus and terminal synthesizing visual images
CN106791347A (en) * 2015-11-20 2017-05-31 比亚迪股份有限公司 A kind of image processing method, device and the mobile terminal using the method
CN106303555A (en) * 2016-08-05 2017-01-04 深圳市豆娱科技有限公司 A kind of live broadcasting method based on mixed reality, device and system
CN106899781A (en) * 2017-03-06 2017-06-27 宇龙计算机通信科技(深圳)有限公司 A kind of image processing method and electronic equipment
CN107547804A (en) * 2017-09-21 2018-01-05 北京奇虎科技有限公司 Realize the video data handling procedure and device, computing device of scene rendering

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU, FAN: "Research on Registration and Rendering Methods for Enhancing Virtual Three-Dimensional Scenes with Video Imagery", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109698914A (en) * 2018-12-04 2019-04-30 广州华多网络科技有限公司 A kind of lightning special efficacy rendering method, device, equipment and storage medium
CN109698914B (en) * 2018-12-04 2022-03-01 广州方硅信息技术有限公司 Lightning special effect rendering method, device, equipment and storage medium
CN111609854A (en) * 2019-02-25 2020-09-01 北京奇虎科技有限公司 Three-dimensional map construction method based on multiple depth cameras and sweeping robot
CN111626919A (en) * 2020-05-08 2020-09-04 北京字节跳动网络技术有限公司 Image synthesis method and device, electronic equipment and computer-readable storage medium
CN111626919B (en) * 2020-05-08 2022-11-15 北京字节跳动网络技术有限公司 Image synthesis method and device, electronic equipment and computer readable storage medium
CN113223012A (en) * 2021-04-30 2021-08-06 北京字跳网络技术有限公司 Video processing method and device and electronic device
CN113223012B (en) * 2021-04-30 2023-09-29 北京字跳网络技术有限公司 Video processing method and device and electronic device
CN113949827A (en) * 2021-09-30 2022-01-18 安徽尚趣玩网络科技有限公司 Video content fusion method and device

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180119)