CN109587556A - Video processing method, video playback method, apparatus, device, and storage medium - Google Patents
Video processing method, video playback method, apparatus, device, and storage medium
- Publication number
- CN109587556A (Application No. CN201910005161.2A)
- Authority
- CN
- China
- Prior art keywords
- video
- image frame
- frame
- video stream
- target
- Prior art date
- Legal status: Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
Abstract
This application provides a video processing method, apparatus, device, and storage medium. The method includes: obtaining a first video stream and a second video stream; for a first image frame and a second image frame having the same timestamp in the first video stream and the second video stream, replacing the image content of a target region in the first image frame with the second image frame to generate a target image frame; and generating a target video stream from the target image frames. In the embodiments of this application, because the image content shown in the finally generated target video stream is the replaced content, rather than content obtained by shooting the screen of a display device with a camera, the problem that interference lines (such as moiré fringes) appear in the video picture shot by the camera, caused by light reflecting off the screen of the display device when the camera shoots the content displayed on the display device, is avoided, so that the content displayed on the display device can be viewed clearly in the video.
Description
Technical field
The embodiments of this application relate to the field of image processing technology, and in particular to a video processing method, a video playback method, an apparatus, a device, and a storage medium.
Background art
With the rapid development of the Internet, online teaching has been continuously on the rise. Because it is not limited by time or place, anyone in a place with a network connection can watch and study, making up for the drawback of the traditional teaching mode that teachers and students must be in the same place at the same time.
At present, online teaching is carried out by shooting a teaching scene with a camera and then uploading the video shot by the camera to a network platform, where other users can watch the video directly. The teaching scene is a scene in which a teacher explains the content shown on a display device (such as a smart television). By shooting with the camera, the teacher, the display device, and the content shown on the display device can be merged into the same picture.
In the above related art, when the camera shoots the content shown on the display device, light reflecting off the screen of the display device causes moiré fringes in the video picture shot by the camera, so that the user cannot clearly see the content of the display device captured in the video.
Summary of the invention
The embodiments of this application provide a video processing method, a video playback method, an apparatus, a device, and a storage medium, which can be used to solve the problem in the related art that, when a camera shoots the content shown on a display device, light reflects off the screen of the display device, moiré fringes appear in the video picture shot by the camera, and the user cannot clearly see the content of the display device captured in the video. The technical solution is as follows:
In one aspect, an embodiment of this application provides a video processing method, the method including:
obtaining a first video stream and a second video stream, where the first video stream is a video stream obtained by shooting a real scene with a first camera, the real scene includes a display device having a content display function, and the second video stream is a video stream generated from the content displayed by the display device;
for a first image frame and a second image frame having the same timestamp in the first video stream and the second video stream, replacing the image content of a target region in the first image frame with the second image frame to generate a target image frame, where the target region is the display region of the screen of the display device in the first image frame; and
generating a target video stream from the target image frame.
In another aspect, this application provides a video playback method, the method including:
sending a video obtaining request to a video distribution platform;
receiving a target video stream sent by the video distribution platform according to the video obtaining request, where the target video stream is a video stream generated from at least a first video stream and a second video stream, the first video stream is obtained by shooting a real scene with a first camera, the real scene includes a display device having a content display function, and the second video stream is obtained from the display device; and
playing the target video stream, where no interference lines caused by shooting the screen of the display device are present in the image frames of the target video stream.
Another aspect, the embodiment of the present application provide a kind of video process apparatus, and described device includes:
Module is obtained, is by the first camera for the first video flowing of acquisition and the second video flowing, first video flowing
It include the display equipment with content display function in the reality scene to the video flowing that reality scene is shot,
Second video flowing is by the display content video flowing generated of the display equipment;
Replacement module, for the first image for identical time stamp in first video flowing and second video flowing
Frame and the second picture frame are generated using the picture material of target area in second picture frame replacement the first image frame
Target image frame;Wherein, the target area is display area of the screen of the display equipment in the first image frame;
Generation module, for generating target video stream according to the target image frame.
Another aspect, the embodiment of the present application provide a kind of video play device, and described device includes:
Sending module, for sending video acquisition request to video distribution platform;
Receiving module, the target video sent for receiving the video distribution platform according to video acquisition request
Stream, the target video stream are the video flowings generated according at least to the first video flowing and the second video flowing, wherein first view
Frequency stream is shot to obtain by the first camera to reality scene, includes having content display function in the reality scene
Show equipment, second video flowing is obtained from the display equipment;
Playing module is not present in the picture frame of the target video stream to described for playing the target video stream
The screen of display equipment interferes line caused by being shot.
In another aspect, an embodiment of this application provides a computer device, the computer device including a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the above video processing method, or to implement the above video playback method.
Optionally, the computer device is a video processing device or a video playback device.
In another aspect, an embodiment of this application provides a computer-readable storage medium, the storage medium storing at least one instruction, at least one program, a code set, or an instruction set, where the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement the above video processing method, or to implement the above video playback method.
In another aspect, an embodiment of this application provides a computer program product which, when executed, is configured to perform the video processing method described in the above aspect, or to perform the video playback method described in the above aspect.
In the technical solutions provided by the embodiments of this application, a second image frame in the second video stream generated from the content displayed by the display device replaces the image content of the target region in the first image frame having the same timestamp in the first video stream obtained by shooting the real scene with the first camera, to generate a target image frame, and a target video stream is then generated from the target image frames. Because the image content shown in the finally generated target video stream is the replaced content, rather than image content obtained by shooting the screen of the display device with a camera, the problem that interference lines (such as moiré fringes) appear in the video picture shot by the camera, caused by light reflecting off the screen of the display device when the camera shoots the content displayed on the display device, is avoided, so that the content displayed on the display device can be viewed clearly in the video.
Brief description of the drawings
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of this application;
Fig. 2 is a flowchart of a video processing method provided by an embodiment of this application;
Fig. 3 is a schematic diagram of a process of calibrating the position of a target region in a first image frame;
Fig. 4 is a schematic diagram of a process of replacing the image content of the target region in the first image frame with a second image frame;
Fig. 5 is a flowchart of a video processing method provided by another embodiment of this application;
Fig. 6 is a schematic diagram of a process of obtaining a third image frame;
Fig. 7 is a schematic diagram of obtaining a difference image frame;
Fig. 8 is a flowchart of a complete video processing method;
Fig. 9 is a flowchart of a video playback method provided by another embodiment of this application;
Fig. 10 is a block diagram of a video processing apparatus provided by an embodiment of this application;
Fig. 11 is a block diagram of a video processing apparatus provided by another embodiment of this application;
Fig. 12 is a block diagram of a video playback apparatus provided by an embodiment of this application;
Fig. 13 is a structural block diagram of a computer device provided by an embodiment of this application.
Detailed description of embodiments
To make the objectives, technical solutions, and advantages of this application clearer, the embodiments of this application are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, it shows a schematic diagram of an implementation environment provided by an embodiment of this application. The implementation environment may include a display device 100, a camera 200, a video processing device 300, and a video playback device 400.
The display device 100 may be an electronic device having a content display function, such as a tablet computer, a PC (Personal Computer), a multimedia playback device, a smart television, or an electronic whiteboard. The content may be text, slides, pictures, video, and the like, which is not limited in the embodiments of this application.
The camera 200 is a video input device mainly including components such as a lens, an image sensor, and a power supply. The camera 200 has basic functions such as video shooting/transmission and still image capture. After an image is captured through the lens, the photosensitive component circuit and control component in the camera process and convert it into a digital signal recognizable by the video processing device 300, which is then transmitted to the video processing device 300 through a parallel port or a USB (Universal Serial Bus) connection for processing. Optionally, the display of the video processing device 300 can show the processed video/image. The camera 200 may be a standalone camera, or may be installed in a video camera or another device. The camera 200 can shoot a real scene that includes the display device 100. For example, the real scene may be a teaching scene, a speech scene, a discussion or conference scene, an interview scene, or the like.
The video processing device 300 may be an electronic device capable of video processing, such as a PC or a server. Optionally, an application program having a video processing function, hereinafter referred to as a "video processing application", is installed and run in the video processing device 300.
The video playback device 400 may be an electronic device having a video playback function, such as a mobile phone, a tablet computer, a PC, a multimedia playback device, or a smart television. Optionally, an application program having a video playback function is installed and run in the video playback device 400.
The video processing device 300 can send the processed video to a video distribution platform, from which the video playback device 400 obtains and plays the video. Optionally, the video processing device 300 is a device in the video distribution platform, and the video processing device 300 can provide the processed video to the video playback device 400 directly.
Referring to Fig. 2, it shows a flowchart of a video processing method provided by an embodiment of this application. In this embodiment, the method is described as applied mainly to the video processing device 300 of the implementation environment shown in Fig. 1. The method may include the following steps (201-203):
Step 201: obtain a first video stream and a second video stream.
The first video stream is a video stream obtained by shooting a real scene with a first camera. The real scene, i.e., a scene in real life, may include a display device having a content display function. Optionally, the real scene further includes a person. For example, when the real scene is a teaching scene, it may include a display device and a teacher, and the teacher explains the content (such as a teaching courseware) shown on the display device. For another example, when the real scene is an interview scene, it may include a display device and an interviewee, and the interviewee may explain the content (such as a resume) shown on the display device.
In addition, the real scene may also be a speech scene. Because the first video stream is obtained by a camera shooting the real scene, light reflecting off the screen of the display device can cause interference lines in the video picture shot by the camera. The interference lines may be moiré fringes. There may also be distortion problems, such as reflections, shadows, and blur.
The second video stream is a video stream generated from the content displayed by the display device. The displayed content refers to the content shown on the screen of the display device, such as text, slides, pictures, or video.
The video processing device can obtain an already-shot first video stream and an already-generated second video stream from elsewhere, such as a network, another device, or a removable storage device (such as a USB flash drive, a memory card, or an optical disc), or can obtain a locally stored first video stream and second video stream.
A video stream consists of a series of single images, and each image is called an image frame. A series of image frames form a video stream at a certain frame rate. The frame rate refers to the number of images included per second.
Step 202: for a first image frame and a second image frame having the same timestamp in the first video stream and the second video stream, replace the image content of a target region in the first image frame with the second image frame to generate a target image frame.
A first image frame and a second image frame having the same timestamp are two image frames in which the image content of the target region in the first video stream is identical to the displayed content in the second video stream. Of the two, the image frame belonging to the first video stream is the first image frame, and the image frame belonging to the second video stream is the second image frame. The target region is the display region of the screen of the display device in the first image frame. The video processing device calibrates the position of the target region in the first image frame, to ensure that the video processing device can accurately replace the target region in the first image frame with the image frame from the second video stream.
Calibrating the position of the target region can be implemented by pattern recognition techniques, for example using the SIFT (Scale-Invariant Feature Transform) feature matching algorithm: the image is smoothed with Gaussian functions of different scales (standard deviations), the smoothed images are then differenced, and pixels with large differences are the distinctive feature points; through these points the position of the target region can be calibrated in the first image frame. Illustratively, as shown in Fig. 3, using the SIFT feature matching algorithm, feature points A, B, C, and D are detected on the screen 500 of the display device, SIFT feature vectors are then computed, transformation parameters are calculated from the SIFT feature vectors, and feature points A', B', C', and D' matching feature points A, B, C, and D are found in the first image frame 10, thereby calibrating the display region of the screen 500 of the display device in the first image frame 10, i.e., the position of the target region 501. The target region can also be calibrated manually by selecting points by hand. As long as the relative position of the camera and the display device remains unchanged, the position of the target region only needs to be calibrated once; when the relative position changes, the position of the target region needs to be calibrated again. In some other embodiments, the pattern recognition technique may also be the SURF (Speeded-Up Robust Features) algorithm or the AKAZE (Accelerated KAZE, accelerated local feature matching) algorithm. With the development of technology, other algorithms may also be used, which is not limited in the embodiments of this application.
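As a sketch of the geometric core of this calibration: once four feature points on the screen have been matched to their counterparts in the first image frame (the feature matching itself is omitted here, and all coordinates are hypothetical), the transformation parameters can be solved as a homography, which then locates the target region. This is an illustrative sketch, not part of the disclosed method:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Direct linear transform: solve the 3x3 H (with h22 = 1) such that
    dst ~ H @ src, from four point correspondences, e.g. the screen
    corners A..D matched to A'..D' in the first image frame."""
    rows, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h = np.linalg.solve(np.array(rows, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def project(H, pt):
    """Apply the homography to one point (homogeneous divide)."""
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return (u / w, v / w)

# Hypothetical calibration: a 1920x1080 screen appears tilted in the frame.
screen_corners = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]       # A, B, C, D
frame_corners = [(400, 210), (1510, 260), (1480, 900), (430, 880)]  # A', B', C', D'
H = estimate_homography(screen_corners, frame_corners)
# The homography reproduces the calibrated target-region corners.
print([tuple(round(c) for c in project(H, p)) for p in screen_corners])
# → [(400, 210), (1510, 260), (1480, 900), (430, 880)]
```

With real footage, the correspondences would come from a feature matcher such as SIFT, SURF, or AKAZE as the embodiment describes, and the homography would typically be estimated robustly from many matches rather than exactly from four.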
Optionally, when an image frame in the first video stream (such as the first image frame) contains the complete displayed content of the display device, after the position of the target region is calibrated in the first image frame, the video processing device transforms the second image frame onto the position of the target region by a perspective transformation (i.e., generates the target image frame). A perspective transformation uses the condition that the center of perspective, the image point, and the target point are collinear to rotate the image-bearing surface (the perspective plane) by a certain angle around the trace line (the axis of homology) according to the perspective rotation law, destroying the original projection beam while keeping the projected figure on the perspective plane unchanged.
During the shooting of the first video stream, there is usually an inclination angle between the camera and the horizontal plane, rather than the camera facing the display device straight on, so the first image frame is tilted. However, because the second video stream is generated from the content displayed by the display device, the second image frame is upright. As shown in Fig. 4, a perspective transformation can transform the second image frame 20 to the same inclination angle as the first image frame 10, so as to align it with the first image frame 10, i.e., align the four feature points A, B, C, and D in the second image frame 20 with the four feature points A', B', C', and D' in the first image frame 10, so that the second image frame 20 and the first image frame 10 match better.
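A minimal sketch of this replacement step, assuming a homography H that maps second-frame coordinates into the first frame (toy grayscale arrays; a real implementation would use an optimized warp such as OpenCV's warpPerspective rather than a per-pixel Python loop):

```python
import numpy as np

def replace_target_region(first_frame, second_frame, H):
    """Composite the (upright) second frame into the first frame.

    H maps second-frame pixel coordinates onto first-frame coordinates
    (the calibrated target region). Inverse mapping with nearest-neighbour
    sampling: for each first-frame pixel, look up where it came from in
    the second frame; pixels that land inside it are replaced."""
    Hinv = np.linalg.inv(H)
    out = first_frame.copy()
    h2, w2 = second_frame.shape[:2]
    for v in range(out.shape[0]):
        for u in range(out.shape[1]):
            x, y, w = Hinv @ np.array([u, v, 1.0])
            sx, sy = int(round(x / w)), int(round(y / w))
            if 0 <= sx < w2 and 0 <= sy < h2:
                out[v, u] = second_frame[sy, sx]
    return out

# Toy example: the "screen" occupies a 2x-scaled region offset by (2, 1).
H = np.array([[2.0, 0.0, 2.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])          # assumed transform
first = np.zeros((8, 8), dtype=np.uint8)       # dark first frame
second = np.full((3, 3), 255, dtype=np.uint8)  # bright second frame
result = replace_target_region(first, second, H)
```

After the call, the pixels inside the mapped region carry the second frame's content while the rest of the first frame is untouched.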
Step 203: generate a target video stream from the target image frames.
Because the first video stream and the second video stream include multiple image frames, multiple target image frames can be generated, and arranging the multiple target image frames in a certain order produces the target video stream.
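The assembly described above can be sketched as pairing frames of equal timestamp and emitting the resulting target frames in timestamp order (frame payloads are placeholder strings here; timestamps are hypothetical milliseconds at an assumed 25 fps):

```python
def build_target_stream(first_stream, second_stream, make_target_frame):
    """Pair frames of equal timestamp and emit target frames in order.

    first_stream / second_stream: dicts mapping timestamp -> frame.
    make_target_frame: the replacement of step 202, here any callable."""
    target = []
    for ts in sorted(first_stream.keys() & second_stream.keys()):
        target.append((ts, make_target_frame(first_stream[ts], second_stream[ts])))
    return target

# Hypothetical frame records at 25 fps (frames 40 ms apart).
first = {0: "f0", 40: "f40", 80: "f80"}
second = {0: "s0", 40: "s40", 80: "s80"}
stream = build_target_stream(first, second, lambda f, s: (f, s))
print([ts for ts, _ in stream])  # → [0, 40, 80]
```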
In conclusion being given birth in technical solution provided by the embodiments of the present application by using the display content of display equipment
At the second video flowing in the second picture frame, replace the first video that the first camera shoots reality scene
In stream in the first picture frame of identical time stamp target area picture material, target image frame is generated, then by target image frame
Generate target video stream;Since replaced picture material being shown in the target video stream that ultimately generates, be not by
The picture material that the screen of camera shooting display equipment obtains, therefore avoid using camera in the display of display equipment
When appearance is shot, since light irradiation has reflection on the screen of a display device, in the video pictures for causing camera to shoot
There are problems that interfering line (such as moire fringes), so as to clearly view the display content for showing equipment in video.
Referring to Fig. 5, it shows a flowchart of a video processing method provided by another embodiment of this application. In this embodiment, the method is described as applied mainly to the video processing device 300 of the implementation environment shown in Fig. 1. The method may include the following steps (501-506):
Step 501: obtain a first video stream, a second video stream, and a third video stream.
The first video stream is a video stream obtained by shooting a real scene with a first camera, and the real scene includes a display device having a content display function. Optionally, the real scene further includes a person.
The second video stream is a video stream generated from the content displayed by the display device. The displayed content refers to the content shown on the screen of the display device, such as text, slides, pictures, or video.
The third video stream is a video stream obtained by a second camera shooting the real scene through a polarizer, and the polarizer is used to block the light emitted by the display device while allowing natural light to pass. A polarizer controls the polarization direction of a particular light beam: when a light beam passes through the polarizer, the vibration components perpendicular to the polarizer's transmission axis are absorbed, and only the vibration components parallel to the transmission axis pass through.
When the display device is a liquid crystal display, the liquid crystal display module has two polarizers attached respectively to the two sides of the glass substrate: the lower polarizer converts the light beam generated by the backlight into polarized light, and the upper polarizer resolves the polarized light after it is electrically modulated by the liquid crystal, producing light-dark contrast and thus a display picture. The imaging of the liquid crystal display module depends on polarized light; without either polarizer, the module cannot display an image. In the embodiments of this application, the vibration direction of the polarized light emitted by the liquid crystal display is perpendicular to the transmission axis of the polarizer; therefore, the polarizer can block the light emitted by the liquid crystal display while allowing natural light to pass. As shown in Fig. 6, the first camera 600 shoots normally and obtains a first image frame 10 including the content displayed by the display device. A polarizer 701 is placed in front of the second camera 700, so that the light 702 emitted by the screen 500 of the display device (shown with dashed lines in the figure) is blocked, the natural light 703 (shown with solid lines in the figure) passes through, and a third image frame 30 is obtained. The screen of the display device (i.e., the target region 501) in the third image frame 30 is black.
Step 502: for a first image frame and a second image frame having the same timestamp in the first video stream and the second video stream, calibrate the position of the target region in the first image frame.
Step 503: transform the second image frame onto the position of the target region by a perspective transformation, to generate a processed first image frame.
Steps 501 to 503 are the same as or similar to steps 201 to 203 introduced in the embodiment of Fig. 2 above; for details, refer to the description in the embodiment of Fig. 2, which is not repeated here.
When the first image frame contains the complete displayed content of the display device, the second image frame is transformed onto the position of the target region, and the generated processed first image frame can serve as the target image frame, from which the target video stream can be produced.
When the first image frame contains the displayed content of the display device and a person, and part of the displayed content of the display device is occluded by the person, transforming the second image frame onto the position of the target region will cover the person, so that the processed first image frame is displayed incompletely. To show both the complete person and the displayed content of the display device in the finally generated target video stream, further processing is needed, mainly including the following steps.
Step 504: for a first image frame and a third image frame having the same timestamp in the first video stream and the third video stream, subtract the pixel values at the same positions in the first image frame and the third image frame to obtain a difference image frame.
A first image frame and a third image frame having the same timestamp are two image frames in which the displayed content in the first video stream is identical to the displayed content in the third video stream. Of the two, the image frame belonging to the first video stream is the first image frame, and the image frame belonging to the third video stream is the third image frame. Subtracting the pixel values at the same positions in the first image frame and the third image frame yields the difference image frame.
The difference image frame is calculated as follows:
M(x, y) = A(x, y) − B(x, y);
where x denotes the pixel coordinate in the width direction; y denotes the pixel coordinate in the height direction; A(x, y) denotes the pixel value at position (x, y) in the first image frame; B(x, y) denotes the pixel value at position (x, y) in the third image frame; and M(x, y) denotes the pixel value at position (x, y) in the difference image frame.
Illustratively, as shown in Fig. 7, the pixel values at the same positions in the first image frame 10 and the third image frame 30 are subtracted, yielding the difference image frame 40.
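The subtraction of step 504 can be sketched as follows. This is a minimal illustration, assuming single-channel frames of identical resolution held as 2-D lists; the helper name and sample pixel values are not from the patent:

```python
def difference_frame(a, b):
    """Per-pixel difference M(x, y) = A(x, y) - B(x, y) between two
    equally sized grayscale frames (step 504; real frames would be
    per-channel arrays)."""
    return [[pa - pb for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]

# Tiny 2x2 example: the screen region (bright in A, dark behind the
# polarizer in B) yields a large difference; the unoccluded background
# (same value in both frames) yields zero.
A = [[200, 50], [200, 50]]
B = [[10, 50], [10, 50]]
M = difference_frame(A, B)
```

The difference is large exactly where the screen of the display device appears, which is what later lets it act as a mask.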
Before shooting, the positions and angles of the first camera and the second camera, as well as the camera parameters (exposure, white balance, tone, etc.), can be adjusted to be fully consistent. Ideally, when the screen of the display device is powered off, the first camera and the second camera would capture two identical video streams. However, because the polarizer blocks part of the light, a third image frame in the third video stream shot by the second camera is somewhat darker than the first image frame with the same timestamp in the first video stream shot by the first camera. The brightness of the third image frame can therefore be brought into line with that of the first image frame by the following processing.
Illustratively, the processing is as follows: calculate the average value of the differences between the pixel values at the same positions in the first image frame and the third image frame; then add this average value to the pixel value of each pixel in the third image frame, obtaining a processed third image frame.
That is:
Δk = Σ_{x=1}^{X} Σ_{y=1}^{Y} (A(x, y) − B(x, y)) / (X · Y);
B'(x, y) = B(x, y) + Δk;
where X denotes the total number of pixels of the first image frame in the width direction; Y denotes the total number of pixels of the first image frame in the height direction; X · Y denotes the pixel count of the first image frame; Δk denotes the average value of the pixel-value differences between the first image frame and the third image frame; A(x, y) denotes the pixel value at position (x, y) in the first image frame; B(x, y) denotes the pixel value at position (x, y) in the third image frame; and B'(x, y) denotes the pixel value at position (x, y) in the processed third image frame.
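The brightness compensation above can be sketched as follows, again on 2-D lists standing in for grayscale frames; the uniform 10-level darkening in the sample data is an assumption for illustration:

```python
def brightness_offset(a, b):
    """Average pixel-value difference Delta-k between frames A and B
    (assumes equally sized 2-D grayscale lists)."""
    total = sum(pa - pb
                for row_a, row_b in zip(a, b)
                for pa, pb in zip(row_a, row_b))
    count = len(a) * len(a[0])
    return total / count

def compensate(b, dk):
    """B'(x, y) = B(x, y) + Delta-k: lift the darker polarizer frame
    to the brightness of the directly shot frame."""
    return [[pb + dk for pb in row] for row in b]

A = [[100, 110], [120, 130]]
B = [[90, 100], [110, 120]]   # uniformly 10 levels darker than A
dk = brightness_offset(A, B)
B2 = compensate(B, dk)
```

After compensation, B2 matches A, so the subsequent subtraction reflects genuine content differences rather than the polarizer's attenuation.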
Optionally, subtracting the pixel values at the same positions in the first image frame and the third image frame to obtain the difference image frame comprises: subtracting the pixel values at the same positions in the first image frame and the processed third image frame to obtain the difference image frame. That is: M(x, y) = A(x, y) − B'(x, y). Raising the brightness of the third image frame so that it matches the brightness of the first image frame reduces the error that the brightness mismatch would introduce into the difference image frame, making the resulting difference image frame more accurate.
Optionally, after step 504 the following step may also be performed: adjust the pixel values of a target image region in the difference image frame to the maximum pixel value, and adjust the pixel values of the other regions of the difference image frame, outside the target image region, to zero, obtaining a processed difference image frame. Here the target image region is the region of the difference image frame in which the pixel values are greater than a preset threshold; the processed difference image frame is then used together with the first image frame and the processed first image frame to generate the target image frame. The preset threshold may be a preset empirical value, which is not limited in the embodiments of the present application. Adjusting the pixel values of the target image region in the difference image frame raises the contrast and amplifies the differences, so that the subsequent operations that use the difference image frame to generate the target image frame are more accurate.
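The optional contrast step amounts to binarizing the difference image frame into a hard mask. A minimal sketch, assuming 2-D list frames and an illustrative threshold (the patent leaves the threshold as an empirical value):

```python
def binarize(m, threshold, max_value=255):
    """Set pixels above the preset threshold to the maximum pixel
    value and all other pixels to zero (the optional contrast step
    after step 504)."""
    return [[max_value if p > threshold else 0 for p in row]
            for row in m]

# Small residual differences (e.g. sensor noise, value 12) are
# suppressed; large differences (the screen region) saturate to 255.
M = [[190, 0], [12, 200]]
mask = binarize(M, threshold=30)
```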
Step 505: generate a target image frame according to the first image frame, the processed first image frame, and the difference image frame.
The difference image frame is obtained from the first image frame and the third image frame. Since the resolutions of the first camera and the second camera are the same, the resolutions of the first image frame and the third image frame are the same, and therefore the resolution of the difference image frame is also the same as that of the first image frame. Further, the first image frame, the processed first image frame, and the difference image frame all have the same resolution, i.e., the same size; accordingly, the pixel values at the same positions can be combined arithmetically to generate the target image frame.
Illustratively, the calculation is as follows: subtract the pixel value of each pixel in the difference image frame from the maximum pixel value, obtaining a processed difference image frame; multiply the pixel values at the same positions in the first image frame and the processed difference image frame, obtaining a first intermediate image frame; multiply the pixel values at the same positions in the processed first image frame and the difference image frame, obtaining a second intermediate image frame; add the pixel values at the same positions in the first intermediate image frame and the second intermediate image frame, obtaining a third intermediate image frame; and divide the pixel value of each pixel in the third intermediate image frame by the maximum pixel value, obtaining the target image frame. That is:
F(x, y) = (A(x, y) × (255 − M(x, y)) + D(x, y) × M(x, y)) / 255;
where A(x, y) denotes the pixel value at position (x, y) in the first image frame; M(x, y) denotes the pixel value at position (x, y) in the difference image frame; D(x, y) denotes the pixel value at position (x, y) in the processed first image frame; F(x, y) denotes the pixel value at position (x, y) in the target image frame; and 255 is the maximum pixel value.
By using the difference image frame in this way, both the complete person and the display content of the display device can be shown in the target image frame.
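The formula F = (A × (255 − M) + D × M) / 255 is an alpha blend with the difference image frame as the mask. A sketch under the same 2-D list assumptions (sample values illustrative; integer division stands in for rounding for simplicity):

```python
def blend(a, d, m, max_value=255):
    """F = (A * (255 - M) + D * M) / 255: where the mask M is high
    the processed first frame D (clean screen content) wins; where M
    is zero the original frame A (the person) is kept."""
    out = []
    for row_a, row_d, row_m in zip(a, d, m):
        out.append([(pa * (max_value - pm) + pd * pm) // max_value
                    for pa, pd, pm in zip(row_a, row_d, row_m)])
    return out

A = [[200, 50]]   # camera frame: one screen pixel, one person pixel
D = [[120, 120]]  # processed frame with the clean content warped in
M = [[255, 0]]    # mask: screen region on, person region off
F = blend(A, D, M)
```

The screen pixel takes its value from D (120) while the person pixel keeps its value from A (50), which is exactly how the occluding person survives the replacement.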
Step 506: generate the target video stream according to the target image frames.
Arranging multiple target image frames in a certain order produces the target video stream.
In conclusion the second picture frame is converted by perspective transform in technical solution provided by the embodiments of the present application
Display area of the screen in the first picture frame for showing equipment, first picture frame that generates that treated;Pass through the first image again
Frame and the third picture frame shot by polaroid, setting value picture frame;Then according to the first picture frame, treated first
Picture frame and error image frame generate target image frame;It not only avoids using camera to the display content in display equipment
When being shot, since light irradiation has reflection on the screen of a display device, the video pictures for causing camera to shoot occur
The problem of distortion;And can completely be shown the image of the people in the first picture frame by error image frame, so as to
The display content and the explanation of people for showing equipment in video are clearly viewed, reality scene is more truly demonstrated out.
In the following, the technical solution provided by the present application is introduced for a specific real scene, namely a lecture scene. In this embodiment of the present application, the display device is illustrated as a smart television.
As shown in Fig. 8, a teacher explains the content shown on the smart television. The first camera 600 shoots this lecture scene directly, obtaining the first video stream; a first image frame 10 in the first video stream contains both the display device and the teacher. The second video stream can be obtained directly from the smart television, and may be the video stream generated from the teaching courseware the teacher is explaining. The second camera 700 shoots the lecture scene through a polarizer 701, obtaining the third video stream. The polarizer 701 blocks the light 702 emitted by the smart television (indicated by the dotted line in the figure) while letting natural light 703 (indicated by the solid line in the figure) pass through; therefore, the display area of the smart television is black in a third image frame 30 of the third video stream.
After obtaining the first video stream, the second video stream, and the third video stream, the video processing device processes them to obtain the final target video stream, mainly through the following steps:
1. For a first image frame 10 and a second image frame 20 with the same timestamp in the first video stream and the second video stream, warp the second image frame 20 by a perspective transform to the position of the target area 501, generating a processed first image frame 40.
2. For the first image frame 10 and a third image frame 30 with the same timestamp in the first video stream and the third video stream, subtract the pixel values at the same positions in the first image frame 10 and the third image frame 30, obtaining a difference image frame 50.
3. Based on the first image frame 10, synthesize the processed first image frame 40 into the first image frame 10 through the mask given by the difference image frame 50, i.e., perform the corresponding operations on the pixel values at the same positions in the first image frame 10, the processed first image frame 40, and the difference image frame 50, obtaining a target image frame 60.
4. Arrange multiple target image frames 60 in a certain order, producing the target video stream.
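Steps 2 and 3 above can be tied together in one end-to-end sketch per frame. This assumes single-channel 2-D list frames, that the perspective warp of step 1 has already produced the processed frame, and an illustrative threshold; none of the sample values come from the patent:

```python
def compose_target_frame(first, processed_first, third, threshold=30):
    """Per-frame pipeline sketch: difference against the polarizer
    frame, binarize into a 0/255 mask, then alpha-blend the processed
    frame over the original using that mask."""
    mask = [[255 if (pa - pb) > threshold else 0
             for pa, pb in zip(ra, rb)]
            for ra, rb in zip(first, third)]
    return [[(pa * (255 - pm) + pd * pm) // 255
             for pa, pd, pm in zip(ra, rd, rm)]
            for ra, rd, rm in zip(first, processed_first, mask)]

first = [[200, 60]]      # camera frame: screen pixel, teacher pixel
third = [[10, 60]]       # polarizer frame: screen dark, teacher unchanged
processed = [[128, 128]] # courseware warped into the screen region
target = compose_target_frame(first, processed_first=processed, third=third)
```

Applying this to every timestamp and concatenating the resulting frames in order yields the target video stream of step 4.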
Please refer to Fig. 9, which shows a flowchart of a video playing method provided by another embodiment of the present application. In this embodiment, the method is mainly illustrated as applied to the video playing device 400 of the implementation environment shown in Fig. 1. The method may include the following steps (901 to 903):
Step 901: send a video acquisition request to a video distribution platform.
When a user needs to play a certain video on the video playing device, the user can send, through the video playing device, an acquisition request corresponding to that video to the video distribution platform, so as to obtain the video.
The video distribution platform can be a video library for storing videos, and may be a network platform. The video processing device can send the processed video to the video distribution platform, which, after receiving a video acquisition request sent by a video playing device, can send the video to that video playing device. The video acquisition request is the request sent by the video playing device for obtaining a video.
Optionally, the video processing device belongs to the video distribution platform; in this case, after receiving the video acquisition request sent by the video playing device, the video processing device can directly provide the processed video (the target video stream described above) to the video playing device.
Step 902: receive the target video stream sent by the video distribution platform according to the video acquisition request.
After receiving the video acquisition request sent by the video playing device, the video distribution platform can send the target video stream to the video playing device; correspondingly, the video playing device receives the target video stream sent by the video distribution platform.
The target video stream is a video stream generated according to at least a first video stream and a second video stream, where the first video stream is obtained by a first camera shooting a real scene containing a display device with a content display function, and the second video stream is obtained from the display device. Optionally, the real scene is a lecture scene: the video processing terminal can process the video stream obtained by the camera shooting the lecture scene to obtain the target video stream and send it to the video distribution platform, and the user can then use the video playing device to request and obtain the target video stream from the video distribution platform. Optionally, the real scene can also be a speech scene.
Optionally, the target video stream is the video stream generated after using second image frames in the second video stream to replace the image content of the target area in first image frames in the first video stream, the target area being the display area of the screen of the display device in the first image frame.
The detailed process of generating the target video stream has been described in the embodiments of Fig. 2 and Fig. 5 above and is not repeated here.
Step 903: play the target video stream.
After receiving the target video stream, the video playing device can play it. Optionally, an application program with a video playing function runs on the video playing device, and the video playing device plays the target video stream through that application program.
The image frames of the target video stream contain no interference fringes (such as moiré fringes) caused by shooting the screen of the display device. In addition, the image frames of the target video stream are also free of distortion problems such as reflections, shadows, and blur.
In conclusion in technical solution provided by the embodiments of the present application, by sending video acquisition to video distribution platform
Request, and the target video stream is played after the target video stream for receiving the transmission of video distribution platform.Wherein, target video
Stream is the video flowing generated according to the first video flowing shot to reality scene, includes having content in reality scene
The display equipment of display function, shot there is no the screen to display equipment in the picture frame of target video stream caused by
It interferes line (such as moire fringes).The display content in display equipment is shot using camera compared to direct broadcasting
Video flowing is avoided since light irradiation has reflection on the screen of a display device, in the video pictures for causing camera to shoot
There are problems that interfering line (such as moire fringes), so as to clearly view the display content for showing equipment in video.
The following are apparatus embodiments of the present application, which can be used to execute the method embodiments of the present application. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the present application.
Please refer to Fig. 10, which shows a block diagram of a video processing apparatus provided by one embodiment of the present application. The apparatus has the functions of implementing the above method examples; the functions can be implemented by hardware, or by hardware executing corresponding software. The apparatus can be the video processing device, or can be set on the video processing device. The apparatus 1000 may include: an obtaining module 1010, a replacement module 1020, and a generation module 1030.
The obtaining module 1010 is configured to obtain a first video stream and a second video stream, the first video stream being a video stream obtained by a first camera shooting a real scene that contains a display device with a content display function, and the second video stream being a video stream generated from the display content of the display device.
The replacement module 1020 is configured to, for a first image frame and a second image frame with the same timestamp in the first video stream and the second video stream, replace the image content of a target area in the first image frame with the second image frame, generating a target image frame; the target area is the display area of the screen of the display device in the first image frame.
The generation module 1030 is configured to generate a target video stream according to the target image frame.
In conclusion being given birth in technical solution provided by the embodiments of the present application by using the display content of display equipment
At the second video flowing in the second picture frame, replace the first video that the first camera shoots reality scene
In stream in the first picture frame of identical time stamp target area picture material, target image frame is generated, then by target image frame
Generate target video stream;Since replaced picture material being shown in the target video stream that ultimately generates, be not by
The picture material that the screen of camera shooting display equipment obtains, therefore avoid using camera in the display of display equipment
When appearance is shot, since light irradiation has reflection on the screen of a display device, in the video pictures for causing camera to shoot
There are problems that interfering line (such as moire fringes), so as to clearly view the display content for showing equipment in video.
In an alternative embodiment provided on the basis of the embodiment of Fig. 10, as shown in Fig. 11, the replacement module 1020 includes: a calibration unit 1021 and a first generation unit 1022.
The calibration unit 1021 is configured to calibrate the position of the target area in the first image frame.
The first generation unit 1022 is configured to warp the second image frame by a perspective transform to the position of the target area, generating the target image frame.
In another alternative embodiment provided on the basis of the embodiment of Fig. 10, as shown in Fig. 11, the replacement module 1020 includes: a calibration unit 1021, a transformation unit 1023, a computation unit 1024, and a second generation unit 1025.
The calibration unit 1021 is configured to calibrate the position of the target area in the first image frame.
The transformation unit 1023 is configured to warp the second image frame by a perspective transform to the position of the target area, generating a processed first image frame.
The computation unit 1024 is configured to, for the first image frame and a third image frame with the same timestamp in the first video stream and a third video stream, subtract the pixel values at the same positions in the first image frame and the third image frame, obtaining a difference image frame; the third video stream is a video stream obtained by a second camera shooting the real scene through a polarizer.
The second generation unit 1025 is configured to use the first image frame as a canvas and synthesize the processed first image frame onto the canvas through the mask given by the difference image frame, obtaining the target image frame.
Optionally, the second generation unit 1025 is configured to: subtract the pixel value of each pixel in the difference image frame from the maximum pixel value, obtaining a processed difference image frame; multiply the pixel values at the same positions in the first image frame and the processed difference image frame, obtaining a first intermediate image frame; multiply the pixel values at the same positions in the processed first image frame and the difference image frame, obtaining a second intermediate image frame; add the pixel values at the same positions in the first intermediate image frame and the second intermediate image frame, obtaining a third intermediate image frame; and divide the pixel value of each pixel in the third intermediate image frame by the maximum pixel value, obtaining the target image frame.
Optionally, as shown in Fig. 11, the replacement module 1020 further includes a processing unit 1026.
The processing unit 1026 is configured to calculate the average value of the differences between the pixel values at the same positions in the first image frame and the third image frame, and add the average value to the pixel value of each pixel in the third image frame, obtaining a processed third image frame.
The computation unit 1024 is configured to subtract the pixel values at the same positions in the first image frame and the processed third image frame, obtaining the difference image frame.
Optionally, as shown in Fig. 11, the replacement module 1020 further includes an adjustment unit 1027.
The adjustment unit 1027 is configured to adjust the pixel values of a target image region in the difference image frame to the maximum pixel value, and adjust the pixel values of the other regions of the difference image frame, outside the target image region, to zero, obtaining a processed difference image frame; the target image region is the region of the difference image frame in which the pixel values are greater than a preset threshold, and the processed difference image frame is used together with the first image frame and the processed first image frame to generate the target image frame.
In another alternative embodiment based on the embodiment of Fig. 10 or any of the alternative embodiments above, the real scene includes a lecture scene or a speech scene.
Please refer to Fig. 12, which shows a block diagram of a video playing apparatus provided by one embodiment of the present application. The apparatus has the functions of implementing the above method examples; the functions can be implemented by hardware, or by hardware executing corresponding software. The apparatus can be the video playing device, or can be set on the video playing device. The apparatus 1200 may include: a sending module 1201, a receiving module 1202, and a playing module 1203.
The sending module 1201 is configured to send a video acquisition request to a video distribution platform.
The receiving module 1202 is configured to receive a target video stream sent by the video distribution platform according to the video acquisition request, the target video stream being a video stream generated according to at least a first video stream and a second video stream, where the first video stream is obtained by a first camera shooting a real scene that contains a display device with a content display function, and the second video stream is obtained from the display device.
The playing module 1203 is configured to play the target video stream, the image frames of which contain no interference fringes caused by shooting the screen of the display device.
In conclusion in technical solution provided by the embodiments of the present application, by sending video acquisition to video distribution platform
Request, and the target video stream is played after the target video stream for receiving the transmission of video distribution platform.Wherein, target video
Stream is the video flowing generated according to the first video flowing shot to reality scene, includes having content in reality scene
The display equipment of display function, shot there is no the screen to display equipment in the picture frame of target video stream caused by
It interferes line (such as moire fringes).The display content in display equipment is shot using camera compared to direct broadcasting
Video flowing is avoided since light irradiation has reflection on the screen of a display device, in the video pictures for causing camera to shoot
There are problems that interfering line (such as moire fringes), so as to clearly view the display content for showing equipment in video
In the alternative embodiment provided based on Figure 12 embodiment, the target video stream is using the second video flowing
In the second picture frame replace the picture material of the target area in the first picture frame in first video flowing after generate
Video flowing;Wherein, the target area is display area of the screen of the display equipment in the first image frame.
It should be noted that when the apparatus provided by the above embodiments implements its functions, the division into the functional modules described above is only an example; in practical applications, the above functions can be distributed among different functional modules as needed, i.e., the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same conception; for the specific implementation process, refer to the method embodiments, which is not repeated here.
Please refer to Fig. 13, which shows a structural block diagram of a computer device provided by one embodiment of the present application. The computer device can be the video processing device or the video playing device. In general, the computer device 1300 includes a processor 1301 and a memory 1302.
The processor 1301 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1301 can be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field Programmable Gate Array), or PLA (Programmable Logic Array). The processor 1301 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1301 can be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be shown on the display screen. In some embodiments, the processor 1301 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1302 is used to store at least one instruction, which is executed by the processor 1301 to implement the video processing method or the video playing method provided by the method embodiments of the present application.
In some embodiments, the computer device 1300 optionally further includes a peripheral device interface 1303 and at least one peripheral device. The processor 1301, the memory 1302, and the peripheral device interface 1303 can be connected by a bus or signal lines. Each peripheral device can be connected to the peripheral device interface 1303 by a bus, signal line, or circuit board. Specifically, the peripheral devices may include at least one of: a communication interface 1304, a display screen 1305, an audio circuit 1306, a camera assembly 1307, a video circuit 1308, and a power supply 1309.
Those skilled in the art will understand that the structure shown in Fig. 13 does not constitute a limitation of the computer device 1300, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
In an exemplary embodiment, a computer device is further provided, including a processor and a memory, the memory storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by the processor to implement the above video processing method or video playing method.
In an exemplary embodiment, a computer-readable storage medium is further provided, the storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which, when executed by the processor of a video processing device, implements the above video processing method or video playing method.
It should be understood that "multiple" as mentioned herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" can mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates an "or" relationship between the preceding and following objects.
The above are merely exemplary embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of protection of the present application.
Claims (15)
1. A video processing method, characterized in that the method comprises:
obtaining a first video stream and a second video stream, the first video stream being a video stream obtained by a first camera shooting a real scene, the real scene containing a display device with a content display function, and the second video stream being a video stream generated from the display content of the display device;
for a first image frame and a second image frame with the same timestamp in the first video stream and the second video stream, replacing the image content of a target area in the first image frame with the second image frame, generating a target image frame; wherein the target area is the display area of the screen of the display device in the first image frame; and
generating a target video stream according to the target image frame.
2. The method according to claim 1, characterized in that replacing the image content of the target area in the first image frame with the second image frame and generating the target image frame comprises:
calibrating the position of the target area in the first image frame; and
warping the second image frame by a perspective transform to the position of the target area, generating the target image frame.
3. The method according to claim 1, characterized in that replacing the image content of the target area in the first image frame with the second image frame and generating the target image frame comprises:
calibrating the position of the target area in the first image frame;
warping the second image frame by a perspective transform to the position of the target area, generating a processed first image frame;
for the first image frame and a third image frame with the same timestamp in the first video stream and a third video stream, subtracting the pixel values at the same positions in the first image frame and the third image frame, obtaining a difference image frame; wherein the third video stream is a video stream obtained by a second camera shooting the real scene through a polarizer; and
using the first image frame as a canvas and synthesizing the processed first image frame onto the canvas through the mask given by the difference image frame, obtaining the target image frame.
4. The method according to claim 3, characterized in that using the first image frame as a canvas and synthesizing the processed first image frame onto the canvas through the mask given by the difference image frame, obtaining the target image frame, comprises:
subtracting the pixel value of each pixel in the difference image frame from the maximum pixel value, obtaining a processed difference image frame;
multiplying the pixel values at the same positions in the first image frame and the processed difference image frame, obtaining a first intermediate image frame;
multiplying the pixel values at the same positions in the processed first image frame and the difference image frame, obtaining a second intermediate image frame;
adding the pixel values at the same positions in the first intermediate image frame and the second intermediate image frame, obtaining a third intermediate image frame; and
dividing the pixel value of each pixel in the third intermediate image frame by the maximum pixel value, obtaining the target image frame.
5. The method according to claim 3, characterized in that, before subtracting the pixel values at the same positions in the first image frame and the third image frame to obtain the difference image frame, the method further comprises:
calculating the average value of the differences between the pixel values at the same positions in the first image frame and the third image frame; and
adding the average value to the pixel value of each pixel in the third image frame, obtaining a processed third image frame;
and subtracting the pixel values at the same positions in the first image frame and the third image frame to obtain the difference image frame comprises:
subtracting the pixel values at the same positions in the first image frame and the processed third image frame, obtaining the difference image frame.
6. The method according to claim 3, wherein after subtracting the pixel values at the same positions in the first image frame and the third image frame to obtain the difference image frame, the method further comprises:
adjusting the pixel values of a target image region in the difference image frame to the maximum pixel value, and adjusting the pixel values of the regions of the difference image frame other than the target image region to zero, to obtain a processed difference image frame;
wherein the target image region is the region of the difference image frame in which the pixel values are greater than a preset threshold, and the processed difference image frame is used together with the first image frame and the processed first image frame to generate the target image frame.
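The adjustment in claim 6 turns the soft difference frame into a hard binary mask. A sketch with an illustrative threshold value (the patent only calls it a "preset threshold"):

```python
import numpy as np

def binarize_diff(diff, threshold=30, max_val=255):
    # Pixels above the preset threshold become the maximum pixel value,
    # all remaining pixels become zero (claim 6).
    return np.where(diff > threshold, max_val, 0).astype(diff.dtype)

diff = np.array([[10, 200], [30, 31]], dtype=np.uint8)
mask = binarize_diff(diff, threshold=30)
```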
7. The method according to any one of claims 1 to 6, wherein the real scene comprises a lecture scene or a speech scene.
8. A video playing method, the method comprising:
sending a video acquisition request to a video distribution platform;
receiving a target video stream sent by the video distribution platform according to the video acquisition request, the target video stream being a video stream generated according to at least a first video stream and a second video stream, wherein the first video stream is obtained by a first camera shooting a real scene, the real scene contains a display device with a content display function, and the second video stream is obtained from the display device; and
playing the target video stream, wherein the image frames of the target video stream are free of the interference fringes caused by shooting the screen of the display device.
9. The method according to claim 8, wherein the target video stream is a video stream generated after the image content of a target region in a first image frame of the first video stream is replaced with a second image frame of the second video stream; wherein the target region is the display region of the screen of the display device in the first image frame.
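Claim 9 amounts to a per-pixel replacement inside the demarcated screen region. A toy sketch, with a boolean mask standing in for the demarcated display region:

```python
import numpy as np

def replace_region(first, warped_second, region_mask):
    # Inside the screen's display region, replace the camera pixels with the
    # (already perspective-aligned) screen-content pixels, removing the
    # interference fringes; outside the region, keep the camera frame.
    out = first.copy()
    out[region_mask] = warped_second[region_mask]
    return out

first = np.full((4, 4), 50, dtype=np.uint8)    # camera frame with fringes
warped = np.full((4, 4), 200, dtype=np.uint8)  # clean screen content
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                          # screen's display area
clean = replace_region(first, warped, mask)
```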
10. A video processing apparatus, the apparatus comprising:
an acquisition module, configured to acquire a first video stream and a second video stream, the first video stream being a video stream obtained by a first camera shooting a real scene, the real scene containing a display device with a content display function, and the second video stream being a video stream generated from the display content of the display device;
a replacement module, configured to, for a first image frame and a second image frame having identical timestamps in the first video stream and the second video stream, replace the image content of a target region in the first image frame with the second image frame to generate a target image frame, wherein the target region is the display region of the screen of the display device in the first image frame; and
a generation module, configured to generate a target video stream according to the target image frame.
11. The apparatus according to claim 10, wherein the replacement module comprises:
a demarcation unit, configured to demarcate the position of the target region in the first image frame; and
a first generation unit, configured to transform the second image frame to the position of the target region through a perspective transform, to generate the target image frame.
12. The apparatus according to claim 10, wherein the replacement module comprises:
a demarcation unit, configured to demarcate the position of the target region in the first image frame;
a transformation unit, configured to transform the second image frame to the position of the target region through a perspective transform, to generate a processed first image frame;
a computation unit, configured to, for a first image frame and a third image frame having identical timestamps in the first video stream and a third video stream, subtract the pixel values at the same positions in the first image frame and the third image frame to obtain a difference image frame, wherein the third video stream is a video stream obtained by a second camera shooting the real scene through a polarizer; and
a second generation unit, configured to use the first image frame as a canvas and composite the processed first image frame onto the canvas with the difference image frame as a mask, to obtain the target image frame.
13. A video playing apparatus, the apparatus comprising:
a sending module, configured to send a video acquisition request to a video distribution platform;
a receiving module, configured to receive a target video stream sent by the video distribution platform according to the video acquisition request, the target video stream being a video stream generated according to at least a first video stream and a second video stream, wherein the first video stream is obtained by a first camera shooting a real scene, the real scene contains a display device with a content display function, and the second video stream is obtained from the display device; and
a playing module, configured to play the target video stream, wherein the image frames of the target video stream are free of the interference fringes caused by shooting the screen of the display device.
14. A computer device, comprising a processor and a memory, the memory storing at least one instruction, at least one program segment, a code set, or an instruction set, which is loaded and executed by the processor to implement the method according to any one of claims 1 to 7, or to implement the method according to any one of claims 8 to 9.
15. A computer-readable storage medium, storing at least one instruction, at least one program segment, a code set, or an instruction set, which is loaded and executed by a processor to implement the method according to any one of claims 1 to 7, or to implement the method according to any one of claims 8 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910005161.2A CN109587556B (en) | 2019-01-03 | 2019-01-03 | Video processing method, video playing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109587556A true CN109587556A (en) | 2019-04-05 |
CN109587556B CN109587556B (en) | 2021-10-15 |
Family
ID=65915963
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910005161.2A Active CN109587556B (en) | 2019-01-03 | 2019-01-03 | Video processing method, video playing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109587556B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101883270A (en) * | 2010-06-10 | 2010-11-10 | 上海海事大学 | Method for inserting related media in independent video streams |
JP2014029566A (en) * | 2012-07-03 | 2014-02-13 | Interactive Communication Design Co Ltd | Image processing apparatus, image processing method, and image processing program |
CN105245784A (en) * | 2014-06-26 | 2016-01-13 | 深圳锐取信息技术股份有限公司 | Shooting processing method and shooting processing device for projection region in multimedia classroom |
CN106572385A (en) * | 2015-10-10 | 2017-04-19 | 北京佳讯飞鸿电气股份有限公司 | Image overlaying method for remote training video presentation |
CN108281052A (en) * | 2018-02-09 | 2018-07-13 | 郑州市第十中学 | A kind of on-line teaching system and online teaching method |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110189271A (en) * | 2019-05-24 | 2019-08-30 | 深圳市子瑜杰恩科技有限公司 | The noise remove method and Related product of reflecting background |
CN112907454A (en) * | 2019-11-19 | 2021-06-04 | 杭州海康威视数字技术股份有限公司 | Method and device for acquiring image, computer equipment and storage medium |
CN112907454B (en) * | 2019-11-19 | 2023-08-08 | 杭州海康威视数字技术股份有限公司 | Method, device, computer equipment and storage medium for acquiring image |
CN113497957A (en) * | 2020-03-18 | 2021-10-12 | 摩托罗拉移动有限责任公司 | Electronic device and method for capturing images from an external display of a remote electronic device |
CN112702641A (en) * | 2020-12-23 | 2021-04-23 | 杭州海康威视数字技术股份有限公司 | Video processing method, camera, recording and playing host, system and storage medium |
CN112492375A (en) * | 2021-01-18 | 2021-03-12 | 新东方教育科技集团有限公司 | Video processing method, storage medium, electronic device and video live broadcast system |
CN112954137A (en) * | 2021-02-08 | 2021-06-11 | 联想(北京)有限公司 | Image processing method and device and image processing equipment |
CN112954137B (en) * | 2021-02-08 | 2023-03-21 | 联想(北京)有限公司 | Image processing method and device and image processing equipment |
CN113766137A (en) * | 2021-09-23 | 2021-12-07 | 联想(北京)有限公司 | Image processing method and device |
CN115546043A (en) * | 2022-03-31 | 2022-12-30 | 荣耀终端有限公司 | Video processing method and related equipment |
CN115546043B (en) * | 2022-03-31 | 2023-08-18 | 荣耀终端有限公司 | Video processing method and related equipment thereof |
Also Published As
Publication number | Publication date |
---|---|
CN109587556B (en) | 2021-10-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109587556A (en) | Method for processing video frequency, video broadcasting method, device, equipment and storage medium | |
Jones et al. | Achieving eye contact in a one-to-many 3D video teleconferencing system | |
US8570319B2 (en) | Perceptually-based compensation of unintended light pollution of images for projection display systems | |
US8854412B2 (en) | Real-time automatic scene relighting in video conference sessions | |
Cossairt et al. | Light field transfer: global illumination between real and synthetic objects | |
CN108074241B (en) | Quality scoring method and device for target image, terminal and storage medium | |
CN104869476A (en) | Video playing method for preventing candid shooting based on psychological vision modulation | |
US10341546B2 (en) | Image processing apparatus and image processing method | |
US20140204083A1 (en) | Systems and methods for real-time distortion processing | |
CN111724310B (en) | Training method of image restoration model, image restoration method and device | |
Zhong et al. | Reproducing reality with a high-dynamic-range multi-focal stereo display | |
EP3268930B1 (en) | Method and device for processing a peripheral image | |
KR20170013704A (en) | Method and system for generation user's vies specific VR space in a Projection Environment | |
KR20190059712A (en) | 360 VR image conversion system and method using multiple images | |
CN110677557B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
Eilertsen | The high dynamic range imaging pipeline | |
US9432620B2 (en) | Determining a synchronization relationship | |
US10529057B2 (en) | Image processing apparatus and image processing method | |
US8878894B2 (en) | Estimating video cross-talk | |
US8692865B2 (en) | Reducing video cross-talk in a visual-collaborative system | |
CN111192305B (en) | Method and apparatus for generating three-dimensional image | |
US20180124321A1 (en) | Image processing apparatus and image processing method | |
James et al. | Colour-Managed LED Walls for Virtual Production | |
Park et al. | Projector compensation framework using differentiable rendering | |
CN112887655B (en) | Information processing method and information processing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||