CN111726594A - Implementation method for efficient optimization rendering and pose anti-distortion fusion - Google Patents


Info

Publication number
CN111726594A
CN111726594A (application CN201910218901.0A)
Authority
CN
China
Prior art keywords
rendering
pose
gpu
fov
panoramic
Prior art date
Legal status
Pending
Application number
CN201910218901.0A
Other languages
Chinese (zh)
Inventor
周正华
周益安
Current Assignee
Shanghai Taojinglihua Information Technology Co ltd
Original Assignee
Shanghai Flying Ape Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Flying Ape Information Technology Co ltd filed Critical Shanghai Flying Ape Information Technology Co ltd
Priority to CN201910218901.0A priority Critical patent/CN111726594A/en
Publication of CN111726594A publication Critical patent/CN111726594A/en
Pending legal-status Critical Current

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 — Details of colour television systems
    • H04N 9/12 — Picture reproducers
    • H04N 9/31 — Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3179 — Video signal processing therefor
    • H04N 9/3141 — Constructional details thereof

Abstract

The invention provides a method for realizing efficient optimization rendering and pose anti-distortion fusion, which relates to the embedded field and comprises the following steps. S1: acquiring external input data with the CPU, the data comprising four types, namely a panoramic or 3D data source, pose information, an FOV (field of view) and a projection mode; S2: performing hardware decoding with the VPU and transmitting the decoded output to the GPU; S3: performing map rendering with the GPU, the map rendering comprising projection modeling, chroma space conversion, FOV initialization and pose fusion; S4: performing loop iteration according to the requirements of the video or image. The invention makes full use of hardware acceleration and provides a comprehensive real-time video pipeline rendering method that offers multiple projection modes and supports the FOV and pose information, using the combined performance of the VPU (video processing unit), CPU (central processing unit) and GPU (graphics processing unit) to solve the VR (virtual reality) rendering of high-resolution panoramic videos, panoramic images and 3D video images.

Description

Implementation method for efficient optimization rendering and pose anti-distortion fusion
Technical Field
The invention relates to the embedded field, and in particular to a method for realizing efficient optimization rendering and pose anti-distortion fusion.
Background
With the rise of VR (virtual reality), how to make the most common handheld devices support the generation and output of VR panorama becomes a popular research issue.
The most common ISV (independent software vendor) solutions are based on CPU instruction acceleration, for example decoding accelerated through hand-written assembly. For panoramic rendering, however, instruction-based acceleration is basically infeasible: too many matrix operations are involved, and basic instruction acceleration is far from sufficient. Hardware-based acceleration does exist, but it is essentially limited to video decoding, and for matrix-heavy application scenes such as panorama and anti-distortion the traditional CPU is very inefficient. With the rise of VR, rendering is performed on the GPU, but without combining the VR-related attributes of the GPU (graphics processing unit), CPU (central processing unit) and VPU (video processing unit), it is difficult to optimize overall hardware performance and VR rendering together.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present invention is to provide a method for implementing efficient optimization rendering and pose anti-distortion fusion that offers multiple projection modes and uses the combined performance of the VPU, CPU and GPU to optimize overall hardware performance and the VR rendering of high-resolution panoramic video.
The invention provides a method for realizing efficient optimization rendering and pose anti-distortion fusion, which comprises the following steps:
S1: acquiring external input data with the CPU, the data comprising four types: a panoramic or 3D data source, pose information, an FOV (field of view) and a projection mode;
S2: performing hardware decoding with the VPU, and transmitting the decoded output to the GPU;
S3: performing map rendering with the GPU, the map rendering comprising projection modeling, chroma space conversion, FOV initialization and pose fusion;
S4: performing loop iteration according to the requirements of the video or image.
Further, the panoramic or 3D data source is a video or an image, in a panoramic format expanded in equal latitude and longitude (equirectangular) or in a 3D format; the pose information is the output data of a device capable of providing three-dimensional pose information; the FOV is the display field of view.
Further, the projection mode includes a plane projection mode, a spherical projection mode, and a cubic projection mode.
Further, anti-distortion is also treated as a special projection mode.
Further, the map rendering comprises the following steps (a control-flow sketch of the overall loop follows this list):
S3.1: performing projection modeling based on the projection mode;
S3.2: initializing the GPU according to the external input data;
S3.3: splicing and fusing through a customized vertex shader;
S3.4: performing chroma space conversion through a customized fragment shader;
S3.5: performing pose fusion based on the pose information and the FOV;
S3.6: performing real-time rendering with the GPU pipeline, and controlling output display through a ping-pong buffer mechanism.
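Taken together, steps S1–S4 form a per-frame loop: the CPU gathers the inputs, the VPU decodes, the GPU renders, and the cycle repeats for video. A minimal control-flow sketch follows; every type and helper in it is a hypothetical stand-in for a device-specific API, not part of the claimed method:

```cpp
// Hedged sketch of the S1-S4 loop; all names below are illustrative.
struct ExternalInput { int projectionMode = 0; float fovDegrees = 90.0f; };
struct DecodedFrame  { int width = 0, height = 0; /* texture handle, etc. */ };

ExternalInput acquireInputOnCpu() { return {}; }                // S1: CPU input
DecodedFrame  decodeOnVpu(const ExternalInput&) { return {}; }  // S2: VPU decode
void renderOnGpu(const DecodedFrame&, const ExternalInput&) {}  // S3: GPU render

void runPipeline(bool isVideo) {
    do {
        ExternalInput in    = acquireInputOnCpu();
        DecodedFrame  frame = decodeOnVpu(in);
        renderOnGpu(frame, in);   // projection modeling, chroma conversion,
                                  // FOV initialization, pose fusion
    } while (isVideo);            // S4: iterate per frame for video, once for an image
}
```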
As described above, the implementation method for efficient optimization rendering and pose anti-distortion fusion has the following beneficial effects. The invention renders panoramic videos and pictures by making full use of the VR-related attributes of the handheld device, namely its CPU, VPU and GPU. Combining the demand points of panoramic video and pictures, such as large resolution, different projection modes, pose information and FOV, it uses hardware acceleration effectively to fulfill the rendering demand. This provides effective technical support for a general-purpose embedded system to complete panoramic rendering efficiently, makes full use of the hardware capability of the general-purpose system, greatly reduces the hardware requirements of panoramic rendering, strongly promotes the practical adoption of panoramic applications, and covers the decoding of panoramas at 2K, 4K, 6K and higher future resolutions.
Drawings
FIG. 1 is a general flow chart of the implementation method disclosed in the embodiment of the invention;
FIG. 2 is a diagram showing the relationship among the CPU, GPU and VPU disclosed in the embodiment of the present invention;
FIG. 3 is a flow chart illustrating the splicing and fusing steps disclosed in the embodiments of the present invention;
FIG. 4 is a flowchart illustrating the chroma space conversion step disclosed in the embodiments of the present invention;
FIG. 5 is a flowchart illustrating the pose fusion step disclosed in the embodiments of the present invention.
Detailed Description
The embodiments of the present invention are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present invention from the disclosure of this specification. The invention may also be implemented or applied through other different embodiments, and the details of this specification may be modified or changed in various respects without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments only illustrate the basic idea of the invention schematically; they show only the components related to the invention rather than the actual number, shape and size of components in implementation, where the type, quantity and proportion of each component may be changed freely and the component layout may be more complicated.
As shown in fig. 1 and fig. 2, the present invention provides a method for implementing efficient optimization rendering and pose anti-distortion fusion, the method comprising the following steps:
S1: acquiring external input data with the CPU, the data comprising four types: a panoramic or 3D data source, pose information, an FOV (field of view) and a projection mode;
the panoramic or 3D data source can be a video or an image, the video and the image are in a panoramic format developed by equal latitude and longitude, the general aspect ratio is 2:1, if the panoramic or 3D data source is a stereo, the left-right ratio is 4:1, and the up-down ratio is 1: 1; the pose information is output data capable of providing a three-dimensional pose information device such as: the thetas/Phi/Gamma (rotation angle of three-dimensional space along XYZ direction) is generally the output data of the gyroscope, but is not limited to the gyroscope; the FOV is generally a display field angle, and commonly used angles include 90 °, 110 °, 130 °, and the like; the projection mode is commonly used as a plane projection mode, a spherical projection mode, a cubic projection mode and the like;
In addition, anti-distortion is also a special projection mode.
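For illustration only, the four kinds of external input might be grouped as follows; the C++ names are hypothetical and do not come from the patent:

```cpp
#include <cstdint>

// Hypothetical grouping of the S1 inputs described above.
enum class Projection : std::uint8_t {
    Planar,         // plane projection mode
    Spherical,      // spherical projection mode
    Cubic,          // cubic projection mode
    AntiDistortion  // anti-distortion, treated as a special projection mode
};

struct Pose { float theta, phi, gamma; };  // rotations about X/Y/Z, e.g. gyroscope output

struct ExternalInput {
    const char* source;      // panoramic/3D video or image (equirectangular; 2:1 mono,
                             // 4:1 side-by-side stereo, 1:1 top-bottom stereo)
    Pose        pose;        // three-dimensional pose information
    float       fovDegrees;  // display field of view, commonly 90/110/130
    Projection  projection;  // selected projection mode
};
```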
S2: performing hardware decoding with the VPU, and transmitting the decoded output to the GPU;
In traditional video rendering, video decoding is the dominant cost. For panoramic rendering it is no longer dominant, but it remains an important link: the VPU performs hardware decoding to obtain the texture input, which is then transmitted to the GPU for high-performance computation.
Besides the common decoded video data, the texture input also includes watermark or logo data.
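On GLES2-class embedded hardware, one portable way to hand a decoded frame to the GPU is to upload its NV12 planes as two textures and leave the chroma conversion to the fragment shader of S3.4. A hedged sketch, assuming a current GLES context and texture names already generated and parameterized elsewhere with glGenTextures/glTexParameteri:

```cpp
#include <GLES2/gl2.h>
#include <cstdint>

// One NV12 frame as many hardware decoders emit it: a full-resolution luma
// plane plus a half-resolution interleaved chroma plane.
struct Nv12Frame { const std::uint8_t* y; const std::uint8_t* uv; int width, height; };

// Upload both planes so the S3.4 fragment shader can sample Y and UV separately.
void uploadNv12(const Nv12Frame& f, GLuint yTex, GLuint uvTex) {
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // planes may be tightly packed
    glBindTexture(GL_TEXTURE_2D, yTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, f.width, f.height, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, f.y);
    glBindTexture(GL_TEXTURE_2D, uvTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, f.width / 2, f.height / 2, 0,
                 GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, f.uv);
}
```

In production a vendor zero-copy path (for example an EGLImage bound to the decoder output) would normally replace this upload; the copy above is only the portable fallback.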
S3: performing map rendering with the GPU, the map rendering comprising projection modeling, chroma space conversion, FOV initialization and pose fusion;
the map rendering comprises the following steps:
s3.1: performing projection modeling based on the projection mode;
s3.2: initializing a GPU according to external input data;
S3.3: splicing and fusing through a customized vertex shader. Taking spherical projection as an example, the customized vertex shader computes the spherical XYZ coordinates, UV (texture) coordinates, weights and vertex-order coordinates of the panoramic or 3D data source, which is then spliced and fused accordingly. As shown in fig. 3, the number of cells per row and column is obtained from the original image of the panoramic or 3D data source through a LUT (look-up table), and the quality and efficiency of rendering can be balanced by configuring the numbers of rows and columns.
The LUT can be the output data of self-calibration, or be generated by a third-party tool such as PTGui; it is a lookup table for unfolding specific positions obtained after feature matching.
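A minimal sketch of the geometry such a vertex-shader stage consumes for spherical projection: each grid cell maps a latitude/longitude step to spherical XYZ and equirectangular UV, and the row/column counts (plain parameters here, standing in for the LUT output) trade rendering quality against vertex count:

```cpp
#include <cmath>
#include <vector>

struct Vertex { float x, y, z, u, v; };

// Build a unit-sphere grid for equirectangular mapping.  rows/cols play the
// role of the per-row/per-column cell counts taken from the LUT; raising them
// improves quality, lowering them improves efficiency.  The index buffer and
// the per-vertex blending weights are omitted for brevity.
std::vector<Vertex> buildSphereGrid(int rows, int cols) {
    const float kPi = 3.14159265358979f;
    std::vector<Vertex> verts;
    verts.reserve((rows + 1) * (cols + 1));
    for (int r = 0; r <= rows; ++r) {
        float v   = static_cast<float>(r) / rows;  // 0 at the top row, 1 at the bottom
        float lat = (0.5f - v) * kPi;              // +pi/2 .. -pi/2
        for (int c = 0; c <= cols; ++c) {
            float u   = static_cast<float>(c) / cols;  // 0..1 around the sphere
            float lon = (u - 0.5f) * 2.0f * kPi;       // -pi .. pi
            verts.push_back({ std::cos(lat) * std::sin(lon),   // X
                              std::sin(lat),                   // Y
                              std::cos(lat) * std::cos(lon),   // Z
                              u, v });
        }
    }
    return verts;
}
```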
S3.4: performing chroma space conversion through the customized fragment shader. As shown in fig. 4, a conversion matrix from the YUV space to the RGB space is configured in the customized fragment shader, converting the YUV data of each frame into RGB information suitable for LCD display.
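A GLES2 fragment shader along these lines, using the standard BT.601 full-range YUV-to-RGB coefficients and sampling the two planes uploaded in S2; the sampler names are illustrative:

```cpp
// Customized fragment shader of S3.4 as a C string.  With a GL_LUMINANCE_ALPHA
// chroma texture, U arrives in .r and V in .a.
static const char* kYuvToRgbFragmentShader = R"(
precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D uTexY;   // luma plane
uniform sampler2D uTexUV;  // interleaved chroma plane

void main() {
    float y = texture2D(uTexY,  vTexCoord).r;
    float u = texture2D(uTexUV, vTexCoord).r - 0.5;
    float v = texture2D(uTexUV, vTexCoord).a - 0.5;
    gl_FragColor = vec4(y + 1.402 * v,                    // R
                        y - 0.344136 * u - 0.714136 * v,  // G
                        y + 1.772 * u,                    // B
                        1.0);
}
)";
```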
S3.5: performing pose fusion based on the pose information and the FOV. As shown in FIG. 5, a view projection is first obtained from the FOV input, fusion with the pose matrix is then performed, and the final projection is obtained by applying the initialized magnification attribute.
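Read as matrix algebra, this step composes projection(FOV) × pose × magnification. A self-contained sketch with column-major 4×4 matrices in the OpenGL convention (deriving the pose rotation from Theta/Phi/Gamma is device-specific and left as an input here):

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<float, 16>;  // column-major, OpenGL convention

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 c{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                c[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
    return c;
}

// View projection derived from the FOV input: a standard perspective matrix.
Mat4 perspectiveFromFov(float fovYDegrees, float aspect, float zNear, float zFar) {
    float f = 1.0f / std::tan(fovYDegrees * 3.14159265f / 360.0f);
    Mat4 m{};
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (zFar + zNear) / (zNear - zFar);
    m[11] = -1.0f;
    m[14] = 2.0f * zFar * zNear / (zNear - zFar);
    return m;
}

// Uniform scale standing in for the initialized magnification attribute.
Mat4 magnification(float zoom) {
    Mat4 m{};
    m[0] = m[5] = m[10] = zoom;
    m[15] = 1.0f;
    return m;
}

// S3.5: fuse the FOV-based view projection with the pose matrix, then apply
// the magnification to obtain the final projection.
Mat4 fusePose(const Mat4& poseRotation, float fovDeg, float aspect, float zoom) {
    return multiply(multiply(perspectiveFromFov(fovDeg, aspect, 0.1f, 100.0f),
                             poseRotation),
                    magnification(zoom));
}
```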
S3.6: performing real-time rendering with the GPU (graphics processing unit) pipeline, and controlling the output display through a ping-pong buffer mechanism.
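The ping-pong control itself can be as small as toggling between two framebuffers, rendering into one while the other is displayed; a hedged sketch over pre-created GLES framebuffer objects:

```cpp
#include <GLES2/gl2.h>

// Two framebuffer objects (created elsewhere with glGenFramebuffers and given
// color attachments) alternate roles each frame.
struct PingPong {
    GLuint fbo[2];
    int    writeIndex = 0;

    GLuint drawTarget()    const { return fbo[writeIndex]; }      // being rendered
    GLuint displaySource() const { return fbo[1 - writeIndex]; }  // being shown

    void beginFrame() { glBindFramebuffer(GL_FRAMEBUFFER, drawTarget()); }
    void endFrame()   { writeIndex = 1 - writeIndex; }  // swap roles next frame
};
```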
S4: performing loop iteration according to the requirements of videos or images;
the anti-distortion, Logo and watermark can be regarded as a special form of common rendering, and are the same logic, and the anti-distortion processing is performed while the common rendering is performed through the customization of a vertex shader and a fragment shader, and the rendering output is performed on the watermark and the Logo.
In conclusion, the invention makes full use of hardware acceleration and provides a comprehensive real-time video pipeline rendering method that offers multiple projection modes and supports the FOV (field of view), pose information and more, using the performance of the VPU, CPU and GPU to solve the VR rendering of high-resolution panoramic video. The invention therefore effectively overcomes various defects in the prior art and has high industrial utilization value.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit it. Any person skilled in the art can modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes that can be made by those skilled in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (5)

1. An implementation method for efficient optimization rendering and pose anti-distortion fusion is characterized by comprising the following steps:
S1: acquiring external input data with the CPU, the data comprising four types: a panoramic or 3D data source, pose information, an FOV (field of view) and a projection mode;
S2: performing hardware decoding with the VPU, and transmitting the decoded output to the GPU;
S3: performing map rendering with the GPU, the map rendering comprising projection modeling, chroma space conversion, FOV initialization and pose fusion;
S4: performing loop iteration according to the requirements of the video or image.
2. The method for realizing efficient optimization rendering and pose anti-distortion fusion according to claim 1, characterized in that: the panoramic or 3D data source is a video or an image, in an equirectangular panoramic format or a 3D format; the pose information is the output data of a device capable of providing three-dimensional pose information; the FOV is the display field of view.
3. The method for realizing efficient optimization rendering and pose anti-distortion fusion according to claim 1, characterized in that: the projection mode comprises a plane projection mode, a spherical projection mode and a cubic projection mode.
4. The method for realizing efficient optimization rendering and pose anti-distortion fusion according to claim 1, characterized in that: anti-distortion is also a special projection mode.
5. The method for realizing efficient optimization rendering and pose anti-distortion fusion according to claim 1, wherein the map rendering comprises the following steps:
S3.1: performing projection modeling based on the projection mode;
S3.2: initializing the GPU according to the external input data;
S3.3: splicing and fusing through a customized vertex shader;
S3.4: performing chroma space conversion through a customized fragment shader;
S3.5: performing pose fusion based on the pose information and the FOV;
S3.6: performing real-time rendering with the GPU pipeline, and controlling output display through a ping-pong buffer mechanism.
CN201910218901.0A 2019-03-21 2019-03-21 Implementation method for efficient optimization rendering and pose anti-distortion fusion Pending CN111726594A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910218901.0A CN111726594A (en) 2019-03-21 2019-03-21 Implementation method for efficient optimization rendering and pose anti-distortion fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910218901.0A CN111726594A (en) 2019-03-21 2019-03-21 Implementation method for efficient optimization rendering and pose anti-distortion fusion

Publications (1)

Publication Number Publication Date
CN111726594A 2020-09-29

Family

ID=72562771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910218901.0A Pending CN111726594A (en) 2019-03-21 2019-03-21 Implementation method for efficient optimization rendering and pose anti-distortion fusion

Country Status (1)

Country Link
CN (1) CN111726594A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016095057A1 (en) * 2014-12-19 2016-06-23 Sulon Technologies Inc. Peripheral tracking for an augmented reality head mounted device
US20180176483A1 * 2014-12-29 2018-06-21 Metaio Gmbh Method and system for generating at least one image of a real environment
US20170339391A1 (en) * 2016-05-19 2017-11-23 Avago Technologies General Ip (Singapore) Pte. Ltd. 360 degree video system with coordinate compression
CN107844190A (en) * 2016-09-20 2018-03-27 腾讯科技(深圳)有限公司 Image presentation method and device based on Virtual Reality equipment
US20180174619A1 (en) * 2016-12-19 2018-06-21 Microsoft Technology Licensing, Llc Interface for application-specified playback of panoramic video
CN108616731A (en) * 2016-12-30 2018-10-02 艾迪普(北京)文化科技股份有限公司 360 degree of VR panoramic images images of one kind and video Real-time Generation

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112164378A (en) * 2020-10-28 2021-01-01 上海盈赞通信科技有限公司 VR glasses all-in-one machine anti-distortion method and device
CN112437287A (en) * 2020-11-23 2021-03-02 成都易瞳科技有限公司 Panoramic image scanning and splicing method
CN113205599A (en) * 2021-04-25 2021-08-03 武汉大学 GPU accelerated video texture updating method in video three-dimensional fusion

Similar Documents

Publication Publication Date Title
TWI570665B (en) Computer graphics system and a graphics processing method
TWI578266B (en) Varying effective resolution by screen location in graphics processing by approximating projection of vertices onto curved viewport
US8692848B2 (en) Method and system for tile mode renderer with coordinate shader
TWI654874B (en) Method and apparatus for processing a projection frame having at least one non-uniform mapping generated projection surface
US10776997B2 (en) Rendering an image from computer graphics using two rendering computing devices
US6763175B1 (en) Flexible video editing architecture with software video effect filter components
US6885378B1 (en) Method and apparatus for the implementation of full-scene anti-aliasing supersampling
EP3121786B1 (en) Graphics pipeline method and apparatus
CN107924556B (en) Image generation device and image display control device
CN106558017B (en) Spherical display image processing method and system
US20110221743A1 (en) Method And System For Controlling A 3D Processor Using A Control List In Memory
TW201541403A (en) Gradient adjustment for texture mapping to non-orthonormal grid
US10325391B2 (en) Oriented image stitching for spherical image content
JP2011170881A (en) Method and apparatus for using general three-dimensional (3d) graphics pipeline for cost effective digital image and video editing
CN111726594A (en) Implementation method for efficient optimization rendering and pose anti-distortion fusion
JP2003141562A (en) Image processing apparatus and method for nonplanar image, storage medium, and computer program
KR20200052846A (en) Data processing systems
JP2005339313A (en) Method and apparatus for presenting image
WO2014043814A1 (en) Methods and apparatus for displaying and manipulating a panoramic image by tiles
US20080024510A1 (en) Texture engine, graphics processing unit and video processing method thereof
CN109388455B (en) Fish-eye image unfolding monitoring method based on Opengles supporting multiple platforms
US7756391B1 (en) Real-time video editing architecture
WO2023207001A1 (en) Image rendering method and apparatus, and electronic device and storage medium
US20150103252A1 (en) System, method, and computer program product for gamma correction in a video or image processing engine
KR20100103703A (en) Multi-format support for surface creation in a graphics processing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230412

Address after: 200136 Room 2903, 29th Floor, No. 28 Xinjinqiao Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant after: Shanghai taojinglihua Information Technology Co.,Ltd.

Address before: 200126 building 13, 728 Lingyan South Road, Pudong New Area, Shanghai

Applicant before: Shanghai flying ape Information Technology Co.,Ltd.
