CN109510975A - Video image extraction method, device and system - Google Patents

Video image extraction method, device and system

Info

Publication number
CN109510975A
Authority
CN
China
Prior art keywords
equipment
image
rendering
data
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910053769.2A
Other languages
Chinese (zh)
Other versions
CN109510975B (en)
Inventor
孟宪民
李小波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hengxin Oriental Culture Co., Ltd.
Original Assignee
Hengxin Oriental Culture Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hengxin Oriental Culture Co., Ltd.
Priority to CN201910053769.2A
Publication of CN109510975A
Application granted
Publication of CN109510975B
Legal status: Active
Anticipated expiration

Abstract

This application discloses a video image extraction method, device and system, relating to the field of image processing. The main technical scheme of this application is: create left and right virtual cameras and obtain viewport data through the left and right virtual cameras; create left and right images from the viewport data obtained by the left and right virtual cameras; merge and render the left and right images to obtain texture image data; and send the texture image data to a VR device. By creating two virtual cameras inside the video extraction device, this application can capture the video image in the video extraction device more quickly, so the image displayed after transfer to the VR device is more accurate; no additional video extraction device is needed to render the left and right eyes of the VR device separately, which reduces cost; synchronized updating of the VR device's left and right eyes can be achieved; and a stereoscopic view can be simulated and presented better.

Description

Video image extraction method, device and system
Technical field
This application relates to the technical field of image processing, and in particular to a video image extraction method, device and system.
Background
It is well known that the real world is three-dimensional, yet the vast majority of existing display devices can only show two-dimensional information and cannot provide a sense of immersion. To give displayed scenes and objects a sense of depth, many approaches have been tried, and research on 3D display technology has developed over more than a decade with considerable success.
Current 3D display technologies mainly fall into the following classes:
(1) Stereo technologies based on optical principles: these mainly use optical lenses such as prisms, polarizers, perspective lenses or gratings. One picture is filtered or polarized by the optics into two different images that are presented to the left and right eyes respectively, forming a stereoscopic image. This technology is limited by the optics and the environment, and cannot present a truly clear, faithful picture to the user.
(2) Virtual-reality stereo projection: the video outputs of two computers are connected to the video inputs of two projectors. A shading box is mounted at the front of each projector, and a polarizer is mounted at the front of each shading box; the polarization axes of the two polarizers are perpendicular to each other. The two projectors correspond to a person's two eyes, and the polarizers introduce a visual difference between the left and right eyes, so a stereoscopic image is formed in the brain. This approach requires two hosts, each rendering to its own associated device; it is costly and cannot be updated synchronously.
Summary of the invention
This application provides a video image extraction method, comprising: creating left and right virtual cameras, and obtaining viewport data through the left and right virtual cameras; creating left and right images from the viewport data obtained by the left and right virtual cameras; merging and rendering the left and right images to obtain texture image data; and sending the texture image data to a VR device.
As above, wherein after the left and right virtual cameras are created, the method further includes initializing the camera spacing of the left and right cameras to the average interpupillary distance of human eyes; and, in response to a latest camera spacing from the VR device, setting the camera spacing of the left and right cameras to the latest camera spacing.
As above, wherein before the left and right virtual cameras are created, the method further includes creating a capture viewport and initializing the viewport data of the capture viewport; initializing the viewport data of the capture viewport specifically includes creating a device, a context, a swap chain, a render target and a viewport, setting the render target through the context to output to the screen, and initializing the viewport data.
As above, wherein obtaining viewport data through the left and right virtual cameras specifically includes the following sub-steps: loading a real-time image capture plug-in for each of the left and right virtual cameras and initializing the capture plug-in; calling the rendering hardware interface in the capture plug-in, obtaining the current page rendering data from the render target in real time through the swap chain by switching contexts, and updating the viewport data with the current page rendering data.
As above, wherein initializing the capture plug-in specifically includes the following sub-steps: obtaining the scene viewport, and obtaining the width and height of the current window and the required interfaces through the scene viewport; creating an application-layer renderer and obtaining the viewport resource data; obtaining the resources of the top-level window through the application-layer renderer; and converting the acquired top-level window resources into a type the rendering hardware interface can recognize.
As above, wherein updating the viewport data with the current page rendering data specifically means: according to the height and width of the current page, obtaining the RGB value of each pixel in the page row by row, feeding the RGB values from a single CPU thread to the GPU, and updating the viewport data with the RGB values.
This application also provides a video extraction device, including the following components: a creation module, for creating left and right virtual cameras and obtaining viewport data through the left and right virtual cameras; a first rendering module, for creating left and right images from the viewport data obtained by the left and right virtual cameras, and merging and rendering the left and right images to obtain texture image data; and a first communication module, for sending the texture image data to a VR device.
As above, wherein the creation module is also used to initialize the camera spacing of the left and right cameras to the average interpupillary distance of human eyes after creating the left and right virtual cameras; the video extraction device further includes a setting module, for setting the camera spacing of the left and right cameras to the latest camera spacing in response to the latest camera spacing from the VR device.
This application also provides a video image extraction system, comprising: the above video extraction device; a server, including a second communication module for forwarding the texture image data of the video extraction device to a VR device; and a VR device, including a third communication module and a second rendering module, where the third communication module receives the texture image data from the server, and the second rendering module splits the received texture image data into left-eye and right-eye scene images and renders them to the device's left and right cameras respectively.
As above, wherein the second rendering module specifically includes: a rendering submodule, for obtaining the left-eye scene image and the right-eye scene image from the received texture image data, rendering the left-eye scene image and the right-eye scene image onto one texture image, and obtaining a target texture image; and an anti-distortion submodule, for determining the screen area visible to the human eye from the parameters of the device screen and the lens, constructing an anti-distortion mesh based on that area, determining the mesh vertices of the anti-distortion mesh, and determining the anti-distorted mesh vertices from the mesh vertices of the anti-distortion mesh and the drawing viewport of the target terminal's screen. The rendering submodule is also used to determine the anti-distorted image from the anti-distorted mesh vertices and the target image, split the anti-distorted image into left and right anti-distorted images, and render them to the left and right screens of the VR device respectively.
The beneficial effects realized by this application are: by creating two virtual cameras in the video extraction device, the video image in the video extraction device can be captured more quickly, so the image displayed after transfer to the VR device is more accurate; no additional video extraction device is needed to render the left and right eyes of the VR device separately, which reduces cost; synchronized updating of the VR device's left and right eyes can be achieved; and a stereoscopic view can be simulated and presented better.
Detailed description of the invention
To explain the technical solutions in the embodiments of this application or the prior art more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments recorded in this application; those of ordinary skill in the art can also obtain other drawings from these drawings.
Fig. 1 is a flow chart of the video image extraction method provided by embodiment one of this application;
Fig. 2 is a detailed flow chart of the initialization performed after the application is opened in the video extraction device of embodiment one of this application;
Fig. 3 is a detailed flow chart of how the application initializes the capture plug-in in the video extraction device of embodiment one of this application;
Fig. 4 is a detailed flow chart of how the VR device in the video extraction system of embodiment one of this application renders the left-eye and right-eye scene images to the device's left and right cameras respectively;
Fig. 5 is a schematic diagram of the video image extraction system provided by embodiment two of this application.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art from the embodiments of the present invention without creative work fall within the protection scope of the present invention.
The video image extraction method provided by this application is suitable for a system composed of a video extraction device (a PC, a mobile device, etc.), a server and a VR device. The video extraction device runs the application that extracts the video image; the server transfers the video image between the video extraction device and the VR device; and the VR device renders the video image received from the video extraction device to the device's left and right eyes to complete the display of the image.
Embodiment one
Referring to Fig. 1, embodiment one of this application provides a video image extraction method, which specifically includes:
Step 110: the application of the video extraction device is opened, a capture viewport is created in the application, and the viewport data of the capture viewport is initialized;
As shown in Fig. 2, the initialization performed after the application is opened specifically includes the following sub-steps:
Step 210: create a device, a context and a swap chain (swapchain);
The device is used to load video resources; the context sets the data passed to the graphics card during rendering; the swap chain describes the output window, the rendering frame rate and the render target. The swap chain provides a front buffer and a back buffer: the front buffer is used for presentation, and the back buffer is used for drawing the latest image data.
Step 220: create a render target;
The render target (render target) is the final destination of all drawing operations, i.e. the screen; when running under the editor, the application obtains page rendering data from the render target.
Step 230: through the context (context), set the render target to output to the screen.
Step 240: create a viewport (viewport) and initialize the viewport data;
The viewport data includes the height and width of the viewport and the RGB information of each pixel position (the RGB color model is an industry color standard that produces a wide range of colors by varying the red (R), green (G) and blue (B) channels and superimposing them on each other). After the application starts, the viewport data is initialized to the configured height and width, and the RGB information of each pixel is set to an initial color, such as white.
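A minimal sketch of steps 210-240 follows, written against Direct3D 11. This is an illustration under assumptions: the patent never names a graphics API (its later helper names are D3DX-era), and the window handle, buffer format and white clear color here are example choices, not the patent's code.

```cpp
// Hedged Direct3D 11 sketch of steps 210-240: device, context, swap chain,
// render target bound through the context, and an initialized viewport.
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

bool InitCapture(HWND hwnd, UINT width, UINT height,
                 ID3D11Device** dev, ID3D11DeviceContext** ctx,
                 IDXGISwapChain** swap, ID3D11RenderTargetView** rtv) {
    // Step 210: device, context and swap chain in one call.
    DXGI_SWAP_CHAIN_DESC sd = {};
    sd.BufferCount = 1;                                // one back buffer
    sd.BufferDesc.Width = width;
    sd.BufferDesc.Height = height;
    sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // RGB(A) pixels, as in step 240
    sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    sd.OutputWindow = hwnd;                            // the output window the swap chain describes
    sd.SampleDesc.Count = 1;
    sd.Windowed = TRUE;
    if (FAILED(D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE,
            nullptr, 0, nullptr, 0, D3D11_SDK_VERSION, &sd, swap, dev, nullptr, ctx)))
        return false;

    // Step 220: create the render target from the swap chain's back buffer.
    ID3D11Texture2D* backBuffer = nullptr;
    (*swap)->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);
    (*dev)->CreateRenderTargetView(backBuffer, nullptr, rtv);
    backBuffer->Release();

    // Step 230: bind the render target through the context so output goes to the screen.
    (*ctx)->OMSetRenderTargets(1, rtv, nullptr);

    // Step 240: create the viewport and initialize its data (height, width, and
    // every pixel set to an initial color such as white).
    D3D11_VIEWPORT vp = {0.0f, 0.0f, (float)width, (float)height, 0.0f, 1.0f};
    (*ctx)->RSSetViewports(1, &vp);
    const float white[4] = {1.0f, 1.0f, 1.0f, 1.0f};
    (*ctx)->ClearRenderTargetView(*rtv, white);
    return true;
}
```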
In this embodiment, after the application of the video extraction device starts, it also checks whether the video extraction device is connected to the server; a successful connection prepares the application to exchange data with the VR device, and the application sends the captured image data to the VR device through the server.
Referring back to Fig. 1, step 120: the application creates left and right virtual cameras, and the left and right virtual cameras obtain viewport data;
The principle by which a VR device presents a stereoscopic effect is to simulate the fact that each human eye sees a completely different view; the brain combines the two into one 3D stereoscopic image, which is stereoscopic vision. Because the scene seen by the left eye differs from the scene seen by the right eye, binocular parallax is formed; this application therefore creates two virtual cameras that simulate the user's left and right eyes and obtain two flat images with different views;
The application creates the two virtual cameras and initializes the spacing between them. The initial spacing is configured according to the human interpupillary distance: the IPD of human eyes ranges from 52 mm to 78 mm, and the inter-camera distance (ICD) of the left and right virtual cameras is preferably initialized to the average value, 60 mm.
In this embodiment, the application obtains viewport data through the left and right virtual cameras, which specifically includes the following sub-steps:
Step 121: the application loads a real-time image capture plug-in for each of the left and right virtual cameras and initializes the capture plug-in;
In this embodiment, a new blank plug-in template is created under the window editor in which the application runs, a project file is then generated, and the image capture plug-in is loaded into the project file.
Referring to Fig. 3, the application initializes the capture plug-in, which specifically includes:
Step 310: obtain the scene viewport (SceneViewport), and obtain the width and height of the current window and the required interfaces through the scene viewport;
The window type includes an editor mode and a runtime mode; in runtime mode, the scene viewport data is obtained and the rendering hardware interface of runtime mode is handled;
Step 320: call the interface function (FSlateRenderer) to create an application-layer renderer, and obtain the viewport resource data;
Step 330: obtain the resources of the top-level window through the application-layer renderer;
Specifically, the application-layer renderer obtains the viewport widget through the scene viewport and converts the widget's node to the window class, thereby obtaining the resources of the top-level window.
Step 340: convert the acquired top-level window resources into a type the rendering hardware interface can recognize;
In this embodiment, the window resources must be converted into a type the rendering hardware interface (RHI) can recognize before the rendering hardware interface can be called to obtain rendering data.
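The names SceneViewport, FSlateRenderer and RHI suggest the capture plug-in is built on Unreal Engine, though the patent does not say so explicitly. A hedged sketch of steps 310-340 in UE4-style C++ follows; the headers and signatures vary between engine versions, so treat every call here as an assumption-laden illustration rather than the patent's actual plug-in code.

```cpp
// Hedged Unreal Engine-style sketch of steps 310-340 (an assumption, not
// confirmed by the patent; signatures follow UE4 and differ across versions).
#include "Engine/Engine.h"
#include "Engine/GameViewportClient.h"
#include "Framework/Application/SlateApplication.h"
#include "Widgets/SViewport.h"
#include "Widgets/SWindow.h"

void InitCapturePlugin()
{
    // Step 310: scene viewport -> width and height of the current window.
    FViewport* SceneViewport = GEngine->GameViewport->Viewport;
    const FIntPoint Size = SceneViewport->GetSizeXY();

    // Step 320: the application-layer (Slate) renderer.
    FSlateRenderer* SlateRenderer = FSlateApplication::Get().GetRenderer();

    // Step 330: resources of the top-level window, reached from the viewport widget.
    TSharedPtr<SViewport> ViewportWidget = GEngine->GameViewport->GetGameViewportWidget();
    TSharedPtr<SWindow> TopWindow =
        FSlateApplication::Get().FindWidgetWindow(ViewportWidget.ToSharedRef());

    // Step 340: the viewport's render-target texture is exposed as an RHI type
    // that the rendering hardware interface can read rendering data from.
    FTexture2DRHIRef RHITexture = SceneViewport->GetRenderTargetTexture();

    (void)Size; (void)SlateRenderer; (void)TopWindow; (void)RHITexture;
}
```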
Further, after the capture plug-in is initialized successfully, the current viewport, the resolution and the rendering command list interface are also obtained.
Step 122: call the rendering hardware interface in the capture plug-in, obtain the current page rendering data from the render target in real time through the swap chain by switching contexts, and update the viewport data with the current page rendering data;
The current page rendering data includes the height and width of the current page and the RGB information of each pixel position. According to the height and width of the current page, the RGB value of each pixel in the page is obtained row by row; a single CPU thread feeds the RGB values to the GPU, and the viewport data is updated with these values;
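A sketch of this row-by-row update follows, again assuming Direct3D 11; the staging-texture copy, the RGBA byte layout and the single UpdateSubresource upload are illustrative choices, not mandated by the patent.

```cpp
// Hedged Direct3D 11 sketch of step 122: copy the render target to a CPU-readable
// staging texture, read the RGB values of each pixel row by row, then upload them
// back to a GPU-side viewport texture from a single CPU thread.
#include <d3d11.h>
#include <cstdint>
#include <cstring>
#include <vector>

void UpdateViewportData(ID3D11Device* dev, ID3D11DeviceContext* ctx,
                        ID3D11Texture2D* renderTarget, ID3D11Texture2D* viewportTex,
                        UINT width, UINT height) {
    // Staging copy so the CPU can map the current page's pixels.
    D3D11_TEXTURE2D_DESC desc = {};
    renderTarget->GetDesc(&desc);
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.MiscFlags = 0;
    ID3D11Texture2D* staging = nullptr;
    if (FAILED(dev->CreateTexture2D(&desc, nullptr, &staging))) return;
    ctx->CopyResource(staging, renderTarget);

    // Walk the page row by row, as the patent describes, collecting RGB values.
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (FAILED(ctx->Map(staging, 0, D3D11_MAP_READ, 0, &mapped))) { staging->Release(); return; }
    std::vector<uint8_t> rgba((size_t)width * height * 4);
    for (UINT y = 0; y < height; ++y) {
        const uint8_t* src = (const uint8_t*)mapped.pData + (size_t)y * mapped.RowPitch;
        memcpy(&rgba[(size_t)y * width * 4], src, (size_t)width * 4);
    }
    ctx->Unmap(staging, 0);
    staging->Release();

    // Single-threaded upload of the RGB values to the GPU viewport texture.
    ctx->UpdateSubresource(viewportTex, 0, nullptr, rgba.data(), width * 4, 0);
}
```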
UI rendering (user interface rendering) is normally performed by the application's main thread, and is therefore constrained by the main thread's processing capacity and the CPU's performance: when the main thread's workload is heavy or CPU performance is low, UI rendering may stutter. Therefore, when rendering task 1 or rendering task 2 includes UI rendering and the main thread executes rendering operations on the CPU, this application processes them asynchronously: the buffered data used for the rendering operation is handed over to a sub-thread, the rendering thread, as standby; the main thread obtains the standby buffered data from the rendering thread by calling the rendering hardware interface RHI and continues executing the rendering operations on the main thread. This lightens the main thread's load and reduces the UI stutter caused by the main thread being unable to handle the rendering task in time when its workload is heavy;
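The paragraph above is dense, so here is a generic sketch of the main-thread / rendering-thread handoff it describes, using standard C++ threading; the FrameData type and the locking scheme are assumptions for illustration, not the patent's mechanism (which goes through the RHI).

```cpp
// Generic sketch (an assumption) of the asynchronous split described above: the
// rendering thread prepares standby buffered data, and the main thread fetches
// it each frame and continues executing the rendering operations.
#include <atomic>
#include <cstdint>
#include <mutex>
#include <thread>
#include <vector>

struct FrameData { std::vector<uint8_t> pixels; };

std::mutex gMutex;
FrameData gStandby;                 // standby buffered data, filled by the render thread
std::atomic<bool> gRunning{true};

void RenderThread() {
    while (gRunning) {
        FrameData next;
        // ... produce the buffered data for the rendering operation ...
        std::lock_guard<std::mutex> lock(gMutex);
        gStandby = std::move(next); // hand the standby buffer to the main thread
    }
}

void MainThreadFrame(FrameData& current) {
    {   // fetch the standby buffered data without blocking on its production
        std::lock_guard<std::mutex> lock(gMutex);
        current = std::move(gStandby);
    }
    // ... continue executing the rendering operations with 'current' ...
}
```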
Referring back to Fig. 1, step 130: the application creates left and right images from the viewport data obtained by the left and right virtual cameras, merges and renders the left and right images to obtain texture image data, and sends the texture image data to the VR device through the server;
Specifically, two images are created from the viewport data obtained by the two virtual cameras, the two images are merged into one texture map, and the texture image data is sent to the VR device through the server;
Merging and rendering the left and right images of the left and right virtual cameras into texture image data specifically means: first create the texture and obtain its surface, then render the scene to the texture's surface, and finally render the texture itself.
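The phrase "create the texture and obtain its surface, render the scene to the surface" matches the Direct3D 9 render-to-texture idiom, so the sketch below uses that API as an assumption; rendering the two cameras into the left and right halves of one texture is likewise one plausible reading of how the images are "merged".

```cpp
// Hedged Direct3D 9 render-to-texture sketch of step 130. g_pDevice is a
// presumed pre-existing IDirect3DDevice9*; formats and sizes are illustrative.
#include <d3d9.h>

IDirect3DTexture9* RenderLeftRightToTexture(IDirect3DDevice9* g_pDevice,
                                            UINT width, UINT height) {
    // First create the texture and obtain its surface.
    IDirect3DTexture9* tex = nullptr;
    g_pDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                             D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tex, nullptr);
    IDirect3DSurface9* surf = nullptr;
    tex->GetSurfaceLevel(0, &surf);

    // Then render the scene to the texture's surface: the left camera's image
    // into the left half, the right camera's into the right half.
    IDirect3DSurface9* backBuffer = nullptr;
    g_pDevice->GetRenderTarget(0, &backBuffer);
    g_pDevice->SetRenderTarget(0, surf);
    D3DVIEWPORT9 left  = {0, 0, width / 2, height, 0.0f, 1.0f};
    D3DVIEWPORT9 right = {width / 2, 0, width / 2, height, 0.0f, 1.0f};
    g_pDevice->SetViewport(&left);   /* ... draw the left-camera image ...  */
    g_pDevice->SetViewport(&right);  /* ... draw the right-camera image ... */

    // Finally restore the back buffer; the merged texture itself can now be
    // rendered or read out as the texture image data.
    g_pDevice->SetRenderTarget(0, backBuffer);
    backBuffer->Release();
    surf->Release();
    return tex;
}
```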
Step 140: the VR device splits the received texture image data into left-eye and right-eye scene images and renders them to the device's left and right cameras respectively;
In this embodiment, left and right cameras are also created in the VR device; the texture image data is split into a left-eye scene image and a right-eye scene image, the left-eye scene image is rendered to the left-eye camera, and the right-eye scene image is rendered to the right-eye camera. Referring to Fig. 4, this operation specifically includes the following sub-steps:
Step 410: obtain the left-eye scene image and the right-eye scene image from the received texture image data.
Specifically, after receiving the texture image data sent by the video extraction device, the VR device splits it in the reverse of the way the video extraction device merged the texture image, obtaining the left-eye scene image and the right-eye scene image.
Step 420: render the left-eye scene image and the right-eye scene image onto one texture image to obtain a target texture image;
In this embodiment, to avoid the time cost of transferring a texture image twice, the left-eye and right-eye scene images are rendered into one texture image, so only one texture image transfer is performed. Specifically, the left-eye and right-eye scene images are rendered into two non-overlapping regions of the texture image.
Step 430: determine the screen area visible to the human eye from the parameters of the device screen and the lens;
The screen parameters of the VR device may include the width and height of the screen, the size of the drawing viewport, etc.; the lens parameters of the VR device include the field of view, refractive index, etc. of the lens. Specifically, the width and height of the screen can be determined from the screen's DPI (pixels per inch); further, the DPI of the screen can be obtained from the target terminal through a system interface.
Step 440: construct an anti-distortion mesh based on the screen area visible to the human eye, and determine the mesh vertices of the anti-distortion mesh;
To give the user a truly immersive visual experience, the VR device must cover as much of the human visual range as possible, so a spherically curved lens is installed in the VR device. However, an image projected into the eye through a spherically curved lens is distorted, so the eye cannot accurately locate itself in the virtual space. Therefore, for the spherically curved lens of the VR device, anti-distortion must be applied before the left-eye and right-eye images are rendered, so that the user is presented with a visually flat image.
The anti-distortion mesh is the mesh used to counteract the distortion, and its mesh vertices are the position coordinates of each vertex in the anti-distortion mesh.
Step 450: determine the anti-distorted mesh vertices from the mesh vertices of the anti-distortion mesh and the drawing viewport of the target terminal's screen;
Specifically, the distance between each mesh vertex of the anti-distortion mesh and the center of the drawing viewport of the target terminal's screen is calculated, and the anti-distorted mesh vertices are determined from the calculated distances.
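The patent states only that each mesh vertex is adjusted according to its distance from the viewport center. A common concrete model, used here purely as an assumption, is a radial polynomial barrel-distortion inverse with lens coefficients k1 and k2.

```cpp
// Hedged sketch of step 450: pre-distort one mesh vertex by its distance from
// the drawing-viewport center. k1 and k2 are hypothetical lens coefficients;
// the radial polynomial is an assumed model, not one the patent specifies.
struct Vec2 { float x, y; };

Vec2 PreDistortVertex(Vec2 v, Vec2 center, float k1, float k2) {
    float dx = v.x - center.x, dy = v.y - center.y;
    float r2 = dx * dx + dy * dy;                 // squared distance from the center
    float scale = 1.0f + k1 * r2 + k2 * r2 * r2;  // radial distortion factor
    // Pre-distort opposite to the lens so the lens's own distortion cancels it.
    return {center.x + dx / scale, center.y + dy / scale};
}
```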
Step 460: determine the anti-distorted image from the anti-distorted mesh vertices and the target image, split the anti-distorted image into left and right anti-distorted images, and render them to the left and right screens of the VR device respectively;
In this embodiment, rendering the left and right anti-distorted images to the left and right screens of the VR device specifically includes the following stages:
Local space: the modeling space, where triangles are organized locally;
World space: objects are converted from local space into world space through translation (the D3DXMatrixTranslation function), rotation (the D3DXMatrixRotationX/Y/Z/Axis functions) and scaling (the D3DXMatrixScaling function), organizing the scene;
View space: the camera is moved to the origin of world space and rotated so that its forward direction coincides with the Z direction of world space; when the camera moves or rotates, the geometry of world space changes with it, yielding the camera view matrix (the D3DXMatrixLookAtLH function);
Back-face culling: useless back-facing polygons are rejected through back-face culling (g_pDevice->SetRenderState(D3DRS_CULLMODE, Value));
Lighting and clipping: lighting is applied in world space, and the parts of the geometry outside the view frustum are clipped;
Projection: the 3D scene is converted into a 2D image through the projection matrix (D3DXMatrixPerspectiveFovLH) and mapped onto the projection window;
Viewport transform: the projection window is transformed into a rectangular area of the screen (g_pDevice->SetViewport(D3DVIEWPORT9));
Rasterization: the pixel value of each point to be displayed in each triangle is computed, and the image after the viewport transform is displayed on the left and right screens of the VR device.
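Collecting the D3DX helpers named in the stages above, a hedged fixed-function Direct3D 9 setup might look as follows; g_pDevice is the presumed pre-existing device, and all numeric parameters are illustrative.

```cpp
// Sketch of the world/view/projection setup using the D3DX helpers the patent
// cites (Direct3D 9 fixed-function style; values are example choices only).
#include <d3dx9.h>

void SetupPipeline(IDirect3DDevice9* g_pDevice, float aspect) {
    // World space: translate, rotate and scale local-space objects into the scene.
    D3DXMATRIX T, R, S, world;
    D3DXMatrixTranslation(&T, 0.0f, 0.0f, 5.0f);
    D3DXMatrixRotationY(&R, D3DX_PI / 4.0f);
    D3DXMatrixScaling(&S, 1.0f, 1.0f, 1.0f);
    world = S * R * T;
    g_pDevice->SetTransform(D3DTS_WORLD, &world);

    // View space: camera at the world-space origin, looking down +Z.
    D3DXMATRIX view;
    D3DXVECTOR3 eye(0.0f, 0.0f, 0.0f), at(0.0f, 0.0f, 1.0f), up(0.0f, 1.0f, 0.0f);
    D3DXMatrixLookAtLH(&view, &eye, &at, &up);
    g_pDevice->SetTransform(D3DTS_VIEW, &view);

    // Back-face culling of useless back-facing polygons.
    g_pDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);

    // Projection: the 3D scene onto the 2D projection window.
    D3DXMATRIX proj;
    D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI / 2.0f, aspect, 0.1f, 100.0f);
    g_pDevice->SetTransform(D3DTS_PROJECTION, &proj);

    // Viewport transform: projection window -> rectangular screen region.
    D3DVIEWPORT9 vp = {0, 0, 1280, 720, 0.0f, 1.0f};
    g_pDevice->SetViewport(&vp);
}
```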
After the VR device displays the video image, the user can also adjust the spacing of the two virtual cameras in the VR device with an adjusting knob on the device; the VR device then sends the adjusted camera spacing to the video extraction device through the server, and the video extraction device adjusts the spacing of its two internal virtual cameras at the same time, captures the video image with this latest camera spacing, and sends it to the VR device for display. The user can thus adjust the camera spacing in real time so that the image content is reflected more faithfully, improving the user experience.
Embodiment two
As shown in Fig. 5, embodiment two of this application provides a video image extraction system, where the video image extraction system 5 includes a video extraction device 510, a server 520 and a VR device 530;
The video extraction device 510 includes the following components:
A creation module 511, for creating left and right virtual cameras and obtaining viewport data through the left and right virtual cameras;
A first rendering module 512, for merging and rendering the viewport data obtained by the left and right virtual cameras into a texture, obtaining texture image data;
A first communication module 513, for sending the texture image data to the VR device.
Specifically, the creation module 511 is also used to initialize the camera spacing of the left and right cameras to the average interpupillary distance of human eyes after creating the left and right virtual cameras;
Further, the video extraction device 510 also includes a setting module 514, for setting the camera spacing of the left and right cameras to the latest camera spacing in response to the latest camera spacing from the VR device.
The server 520 includes a second communication module 521, for forwarding the texture image data of the video extraction device to the VR device 530.
The VR device 530 includes a third communication module 531 and a second rendering module 532; the third communication module 531 receives the texture image data from the server, and the second rendering module 532 splits the received texture image data into left-eye and right-eye scene images and renders them to the device's left and right cameras respectively;
Specifically, the second rendering module 532 includes:
A rendering submodule 5321: for obtaining the left-eye scene image and the right-eye scene image from the received texture image data, rendering them onto one texture image, and obtaining the target texture image;
An anti-distortion submodule 5322: for determining the screen area visible to the human eye from the parameters of the device screen and the lens, constructing an anti-distortion mesh based on that area, determining the mesh vertices of the anti-distortion mesh, and determining the anti-distorted mesh vertices from the mesh vertices of the anti-distortion mesh and the drawing viewport of the target terminal's screen;
Further, the rendering submodule 5321 is also used to determine the anti-distorted image from the anti-distorted mesh vertices and the target image, split the anti-distorted image into left and right anti-distorted images, and render them to the left and right screens of the VR device 530 respectively.
The beneficial effects realized by this application are:
(1) By creating two virtual cameras in the video extraction device, this application can capture the video image in the video extraction device more quickly, so the image displayed after transfer to the VR device is more accurate;
(2) No additional video extraction device is needed to render the left and right eyes of the VR device separately, which reduces cost, and synchronized updating of the VR device's left and right eyes can be achieved;
(3) The camera spacing of the two pairs of virtual cameras created in the video extraction device and the VR device can be adjusted in real time, enabling better human-computer interaction and better simulation and presentation of the stereoscopic view.
Although the preferred embodiments of this application have been described, those skilled in the art can make additional changes and modifications to these embodiments once they know the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of this application. Obviously, those skilled in the art can make various modifications and variations to this application without departing from its spirit and scope. If these modifications and variations of this application fall within the scope of the claims of this application and their technical equivalents, this application is also intended to encompass them.

Claims (10)

1. A video image extraction method, characterized by comprising:
creating left and right virtual cameras, and obtaining viewport data through the left and right virtual cameras;
creating left and right images from the viewport data obtained by the left and right virtual cameras;
merging and rendering the left and right images to obtain texture image data;
sending the texture image data to a VR device.
2. The video image extraction method of claim 1, characterized in that after the left and right virtual cameras are created, the method further comprises initializing the camera spacing of the left and right cameras to the average interpupillary distance of human eyes; and, in response to a latest camera spacing from the VR device, setting the camera spacing of the left and right cameras to the latest camera spacing.
3. The video image extraction method of claim 1, characterized in that before the left and right virtual cameras are created, the method further comprises creating a capture viewport and initializing the viewport data of the capture viewport; wherein initializing the viewport data of the capture viewport specifically comprises creating a device, a context, a swap chain, a render target and a viewport, setting the render target through the context to output to the screen, and initializing the viewport data.
4. The video image extraction method of claim 3, characterized in that obtaining viewport data through the left and right virtual cameras specifically comprises the following sub-steps:
loading a real-time image capture plug-in for each of the left and right virtual cameras and initializing the capture plug-in;
calling the rendering hardware interface in the capture plug-in, obtaining the current page rendering data from the render target in real time through the swap chain by switching contexts, and updating the viewport data with the current page rendering data.
5. The video image extraction method of claim 4, characterized in that initializing the capture plug-in specifically comprises the following sub-steps:
obtaining the scene viewport, and obtaining the width and height of the current window and the required interfaces through the scene viewport;
creating an application-layer renderer, and obtaining the viewport resource data;
obtaining the resources of the top-level window through the application-layer renderer;
converting the acquired top-level window resources into a type the rendering hardware interface can recognize.
6. The video image extraction method of claim 4, characterized in that updating the viewport data with the current page rendering data specifically means: according to the height and width of the current page, obtaining the RGB value of each pixel in the page row by row, feeding the RGB values from a single CPU thread to the GPU, and updating the viewport data with the RGB values.
7. A video extraction device, characterized by comprising the following components:
a creation module, for creating left and right virtual cameras and obtaining viewport data through the left and right virtual cameras;
a first rendering module, for creating left and right images from the viewport data obtained by the left and right virtual cameras, and merging and rendering the left and right images to obtain texture image data;
a first communication module, for sending the texture image data to a VR device.
8. The video extraction device of claim 7, characterized in that the creation module is also used to initialize the camera spacing of the left and right cameras to the average interpupillary distance of human eyes after creating the left and right virtual cameras;
the video extraction device further comprises a setting module, for setting the camera spacing of the left and right cameras to the latest camera spacing in response to the latest camera spacing from the VR device.
9. A video image extraction system, characterized by comprising:
the video extraction device of claim 7 or 8;
a server, including a second communication module for forwarding the texture image data of the video extraction device to a VR device;
a VR device, including a third communication module and a second rendering module; the third communication module is used to receive the texture image data from the server, and the second rendering module is used to split the received texture image data into left-eye and right-eye scene images and render them to the device's left and right cameras respectively.
10. The video image extraction system of claim 9, characterized in that the second rendering module specifically comprises:
a rendering submodule: for obtaining the left-eye scene image and the right-eye scene image from the received texture image data, rendering the left-eye scene image and the right-eye scene image onto one texture image, and obtaining a target texture image;
an anti-distortion submodule: for determining the screen area visible to the human eye from the parameters of the device screen and the lens, constructing an anti-distortion mesh based on that area, determining the mesh vertices of the anti-distortion mesh, and determining the anti-distorted mesh vertices from the mesh vertices of the anti-distortion mesh and the drawing viewport of the target terminal's screen;
the rendering submodule is also used to determine the anti-distorted image from the anti-distorted mesh vertices and the target image, split the anti-distorted image into left and right anti-distorted images, and render them to the left and right screens of the VR device respectively.
CN201910053769.2A 2019-01-21 2019-01-21 Video image extraction method, device and system Active CN109510975B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910053769.2A CN109510975B (en) 2019-01-21 2019-01-21 Video image extraction method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910053769.2A CN109510975B (en) 2019-01-21 2019-01-21 Video image extraction method, device and system

Publications (2)

Publication Number Publication Date
CN109510975A true CN109510975A (en) 2019-03-22
CN109510975B CN109510975B (en) 2021-01-05

Family

ID=65758239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910053769.2A Active CN109510975B (en) 2019-01-21 2019-01-21 Video image extraction method, device and system

Country Status (1)

Country Link
CN (1) CN109510975B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101742348A (en) * 2010-01-04 2010-06-16 中国电信股份有限公司 Rendering method and system
CN104199723A (en) * 2014-09-09 2014-12-10 福建升腾资讯有限公司 Camera mapping method based on virtual equipment
US20160283081A1 (en) * 2015-03-27 2016-09-29 Lucasfilm Entertainment Company Ltd. Facilitate user manipulation of a virtual reality environment view using a computing device with touch sensitive surface
CN106126021A (en) * 2016-06-21 2016-11-16 上海乐相科技有限公司 A kind of interface display method and device
CN106341603A (en) * 2016-09-29 2017-01-18 网易(杭州)网络有限公司 View finding method for virtual reality environment, device and virtual reality device
CN108282648A (en) * 2018-02-05 2018-07-13 北京搜狐新媒体信息技术有限公司 A kind of VR rendering intents, device, Wearable and readable storage medium storing program for executing

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112235562A (en) * 2020-10-12 2021-01-15 聚好看科技股份有限公司 3D display terminal, controller and image processing method
CN112235562B (en) * 2020-10-12 2023-09-15 聚好看科技股份有限公司 3D display terminal, controller and image processing method
CN113064739A (en) * 2021-03-31 2021-07-02 北京达佳互联信息技术有限公司 Inter-thread communication method and device, electronic equipment and storage medium
CN113473105A (en) * 2021-06-01 2021-10-01 青岛小鸟看看科技有限公司 Image synchronization method, image display and processing device and image synchronization system
CN114095655A (en) * 2021-11-17 2022-02-25 海信视像科技股份有限公司 Method and device for displaying streaming data
CN115103175A (en) * 2022-07-11 2022-09-23 北京字跳网络技术有限公司 Image transmission method, device, equipment and medium
CN115103175B (en) * 2022-07-11 2024-03-01 北京字跳网络技术有限公司 Image transmission method, device, equipment and medium

Also Published As

Publication number Publication date
CN109510975B (en) 2021-01-05

Similar Documents

Publication Publication Date Title
CN109510975A (en) A kind of extracting method of video image, equipment and system
US8270704B2 (en) Method and apparatus for reconstructing 3D shape model of object by using multi-view image information
US8217990B2 (en) Stereoscopic picture generating apparatus
CN108513123B (en) Image array generation method for integrated imaging light field display
US20050117215A1 (en) Stereoscopic imaging
CN111325693B (en) Large-scale panoramic viewpoint synthesis method based on single viewpoint RGB-D image
CN109660783A (en) Virtual reality parallax correction
CN109147027B (en) Monocular image three-dimensional rebuilding method, system and device based on reference planes
CN107578435A (en) A kind of picture depth Forecasting Methodology and device
CN109769109A (en) Method and system based on virtual view synthesis drawing three-dimensional object
CN104599317A (en) Mobile terminal and method for achieving 3D (three-dimensional) scanning modeling function
CN107005689B (en) Digital video rendering
JP7344988B2 (en) Methods, apparatus, and computer program products for volumetric video encoding and decoding
JP4996922B2 (en) 3D visualization
WO2021081568A2 (en) Advanced stereoscopic rendering
CN107071381B (en) Signalling uses the deformation pattern of the high efficiency video coding extension of 3D Video coding
CN106169179A (en) Image denoising method and image noise reduction apparatus
CN109821236A (en) A kind of extracting method of realtime graphic
CN111327886B (en) 3D light field rendering method and device
JP6898264B2 (en) Synthesizers, methods and programs
WO2022156451A1 (en) Rendering method and apparatus
Sun et al. Seamless view synthesis through texture optimization
CN113989434A (en) Human body three-dimensional reconstruction method and device
JPH03296176A (en) High-speed picture generating/displaying method
CN111243099A (en) Method and device for processing image and method and device for displaying image in AR (augmented reality) device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant