WO2017101303A1 - Method and apparatus for drawing a video picture - Google Patents

Method and apparatus for drawing a video picture Download PDF

Info

Publication number
WO2017101303A1
WO2017101303A1 (PCT/CN2016/088195)
Authority
WO
WIPO (PCT)
Prior art keywords
picture
video
renderer
rendering
rendering parameter
Prior art date
Application number
PCT/CN2016/088195
Other languages
English (en)
French (fr)
Inventor
成宁
李英杰
于水龙
徐珣
Original Assignee
乐视控股(北京)有限公司
乐视致新电子科技(天津)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视致新电子科技(天津)有限公司
Publication of WO2017101303A1 publication Critical patent/WO2017101303A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788: Supplemental services, e.g. displaying phone caller identification, shopping application, communicating with other users, e.g. chatting
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/266: Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N 21/2662: Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312: Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N 21/440263: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/14: Systems for two-way working

Definitions

  • The present application relates to the field of image processing technologies, and in particular to a method and an apparatus for drawing a video picture.
  • In computer graphics, rendering refers to the process of generating an image from a model by software. Rendering is the last important step of a graphics display operation; it produces the final displayed appearance of the model and the animation. Rendering technology is widely used in practical applications such as computer and video games, simulation, film or TV effects, and visual design. According to how the result is rendered and displayed, rendering can be roughly divided into two categories: pre-rendering (offline rendering) and real-time rendering (online rendering). In pre-rendering, the developer places the content to be rendered on a server in advance for rendering.
  • Pre-rendering is very computationally intensive and is usually used for complex scene processing, such as the production of visually striking 3D movies.
  • Real-time rendering requires a real-time experience, is often used in scenarios such as 3D games, and usually relies on hardware accelerators to complete the process.
  • In local rendering, the hardware of the user equipment (UE), such as the central processing unit (CPU) and the graphics processing unit (GPU), renders the model; after rendering finishes, the display device fetches the rendering result and displays it.
  • In cloud rendering, the operations of the user equipment are moved to the cloud, and the final result is then transmitted to the user equipment as pictures for display.
  • A video call usually involves the initiator of the video call and the receiver of the video call.
  • In the prior art, the picture of a video call can be drawn as follows: a GLSurfaceView is created for the initiator and another for the receiver of the video call, and the images of the initiator and the receiver are then drawn through these two GLSurfaceViews respectively, so that the video call can be completed. However, this approach has drawbacks: creating two GLSurfaceViews consumes more memory and CPU, wasting mobile phone resources, and the positions of pictures drawn through two GLSurfaceViews are usually inconvenient to adjust.
  • The embodiments of the present application provide a method and an apparatus for drawing a video picture, which can reduce the occupation of mobile phone resources so as to guarantee the quality of the video call.
  • An embodiment of the present application provides a method for drawing a video picture, including: creating a GLSurfaceView and a first renderer and a second renderer corresponding to the GLSurfaceView; configuring a first rendering parameter for the first renderer, the first rendering parameter including at least the picture position and picture size of the video initiator; configuring a second rendering parameter for the second renderer, the second rendering parameter including at least the picture position and picture size of the video receiver; and, when the video call is established, drawing each received frame with the first renderer and the second renderer according to the first rendering parameter and the second rendering parameter respectively.
  • An embodiment of the present application provides an apparatus for drawing a video picture, including: a renderer creation module configured to create a GLSurfaceView and a first renderer and a second renderer corresponding to the GLSurfaceView; a first rendering parameter configuration module configured to configure a first rendering parameter for the first renderer, the first rendering parameter including at least the picture position and picture size of the video initiator; a second rendering parameter configuration module configured to configure a second rendering parameter for the second renderer, the second rendering parameter including at least the picture position and picture size of the video receiver; and a drawing module configured to, when the video call is established, cause the first renderer and the second renderer to draw each received frame according to the first rendering parameter and the second rendering parameter respectively.
  • The embodiment of the present application also provides a computer-readable recording medium on which a program for executing the above method is recorded.
  • In the method and apparatus for drawing a video picture provided by the embodiments of the present application, only one GLSurfaceView is created, and two corresponding renderers are generated under that GLSurfaceView.
  • One of the renderers is used to render the video call initiator's picture, and the other renderer is used to render the video call receiver's picture.
  • The position and size of the rendered pictures can be constrained by the preset picture position and picture size of the video call initiator and the preset picture position and picture size of the video receiver.
  • In this way, a single GLSurfaceView can draw the pictures of both parties of the video call, saving the resources of the mobile phone. Furthermore, by monitoring the network state during the video call, the resolution or the frame rate can be adjusted according to the actual network state to keep the video call smooth. In addition, by monitoring touch commands on the touch screen of the mobile phone, the picture positions of both parties can be adjusted according to the touch commands, making the video call convenient to use.
  • FIG. 1 is a flowchart of a method for drawing a video picture according to an embodiment of the present application
  • FIG. 2 is a functional block diagram of a video picture drawing device according to an embodiment of the present disclosure.
  • FIG. 1 is a flowchart of a method for drawing a video picture according to an embodiment of the present application.
  • Although the process described below includes multiple operations occurring in a particular order, it should be clearly understood that the process may include more or fewer operations, which may be performed sequentially or in parallel (e.g., using a parallel processor or a multi-threaded environment).
  • As shown in FIG. 1, the method may include:
  • S1: Create a GLSurfaceView and a first renderer and a second renderer corresponding to the GLSurfaceView.
  • The GLSurfaceView is a view whose embedded surface is responsible for OpenGL rendering.
  • The GLSurfaceView typically provides the following features: it manages a surface, which may be a block of memory that can be composited directly into the Android view hierarchy; it manages an EGL display that renders content onto that surface; it supports user-defined renderers; it runs the renderer on a dedicated thread, decoupled from the UI thread; and it supports both on-demand and continuous rendering.
  • After the GLSurfaceView has been created, it can be initialized. Because a GLSurfaceView is created with a number of default configurations that usually need not be modified, initialization mainly consists of setting a preset number of renderers in the GLSurfaceView so as to render the pictures of the initiator and the receiver of the video call respectively. Specifically, in the embodiment of the present application a renderer can be set with the setRenderer(Renderer) call, as sketched below.
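  • The following minimal sketch shows one way the single-GLSurfaceView arrangement could be realized in Android Java. Because GLSurfaceView accepts only one Renderer through setRenderer(), the two renderers of the embodiment are modeled here as sub-renderers delegated to by one composite Renderer; the CallRenderer and SubRenderer names are illustrative assumptions, not identifiers from the patent.

```java
import android.opengl.GLSurfaceView;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

// Each party's picture is drawn by its own sub-renderer.
interface SubRenderer extends GLSurfaceView.Renderer {}

public class CallRenderer implements GLSurfaceView.Renderer {
    private final SubRenderer local;   // first renderer: the video initiator's picture
    private final SubRenderer remote;  // second renderer: the video receiver's picture

    public CallRenderer(SubRenderer local, SubRenderer remote) {
        this.local = local;
        this.remote = remote;
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        remote.onSurfaceCreated(gl, config);
        local.onSurfaceCreated(gl, config);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        remote.onSurfaceChanged(gl, width, height);
        local.onSurfaceChanged(gl, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        remote.onDrawFrame(gl);  // background: the receiver's picture is drawn first
        local.onDrawFrame(gl);   // overlay: the initiator's picture is drawn on top
    }
}

// Usage (implementations of SubRenderer are assumed to exist):
// GLSurfaceView view = new GLSurfaceView(context);
// view.setRenderer(new CallRenderer(localImpl, remoteImpl));
```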
  • The GLSurfaceView will create a surface with the pixel format PixelFormat.RGB_565 by default.
  • The user can change this pixel format according to actual needs.
  • For example, the transparency effect can be changed by calling getHolder().setFormat(PixelFormat.TRANSLUCENT).
  • A transparent surface uses a 32-bit pixel format with 8 bits of depth per color channel, which means the pixel format may be ARGB or RGBA.
  • Android devices usually support multiple EGL configurations.
  • A different number of channels may be used, and each channel may be given a different bit depth. Therefore, the EGL configuration should be specified before the renderer starts working.
  • The GLSurfaceView's default EGL configuration has an RGB_565 pixel format and a 16-bit depth buffer, and the stencil buffer is not enabled by default. If a different EGL configuration is needed, it can be selected by calling setEGLConfigChooser. A hedged configuration sketch follows.
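  • The sketch below shows the two configuration steps mentioned in the passage, using standard GLSurfaceView calls; the exact channel depths chosen (8/8/8/8 with a 16-bit depth buffer and no stencil) are an assumption consistent with the translucent-surface discussion, not values fixed by the patent.

```java
import android.content.Context;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;

// Builds a GLSurfaceView with a translucent 32-bit surface and an explicit EGL
// configuration; both calls must happen before setRenderer() is invoked.
public final class SurfaceConfig {
    private SurfaceConfig() {}

    public static GLSurfaceView createTranslucentView(Context context) {
        GLSurfaceView view = new GLSurfaceView(context);
        // 8 bits per R/G/B/A channel, 16-bit depth buffer, stencil buffer disabled.
        view.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
        // Replace the default RGB_565 surface format with a translucent one (ARGB/RGBA).
        view.getHolder().setFormat(PixelFormat.TRANSLUCENT);
        return view;
    }
}
```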
  • After the configuration parameters of the GLSurfaceView have been modified and the preset number of renderers has been set, the rendering mode of the renderer can be specified.
  • Because the video call picture needs to be rendered in real time, the renderer's rendering mode can be set to continuous rendering, as in the sketch below.
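  • A small sketch of that step, under the assumption that the composite renderer from the earlier example is used; continuous rendering keeps redrawing frames, whereas RENDERMODE_WHEN_DIRTY would redraw only when requestRender() is called.

```java
import android.opengl.GLSurfaceView;

// Continuous rendering suits a live call picture that changes every frame.
final class RenderModeSetup {
    private RenderModeSetup() {}

    static void enableContinuousRendering(GLSurfaceView view, GLSurfaceView.Renderer renderer) {
        view.setRenderer(renderer);  // the renderer must be installed first
        view.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
    }
}
```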
  • When the GLSurfaceView is used to draw the picture of a video call, the picture can be drawn interactively or non-interactively. A video call picture obtained by the non-interactive drawing method cannot interact with the user; for example, it cannot be adjusted in response to the user's touch commands.
  • In order to adjust the position of the video picture according to the user's touch commands, the interactive drawing method may be used to draw the picture of the video call.
  • Because the rendered objects live on a separate rendering thread, a cross-thread mechanism is needed to handle interaction events. Specifically, the queueEvent(Runnable) call can be used in the embodiment of the present application. In this way, the video picture drawn by the GLSurfaceView can respond to the user's touch commands and interact with the user; a sketch follows.
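  • The sketch below illustrates the hand-off: touch events arrive on the UI thread, and the position update is posted to the GL thread with queueEvent(Runnable). The Movable interface and its setPicturePosition() method are assumptions used only to make the hand-off concrete.

```java
import android.opengl.GLSurfaceView;
import android.view.MotionEvent;
import android.view.View;

final class TouchForwarder {
    private TouchForwarder() {}

    // Hypothetical target for the position update, e.g. the local sub-renderer.
    interface Movable {
        void setPicturePosition(float x, float y);
    }

    static void attach(GLSurfaceView view, Movable localPicture) {
        view.setOnTouchListener((View v, MotionEvent event) -> {
            final float x = event.getX();
            final float y = event.getY();
            // Runs later on the rendering thread, not on the UI thread.
            view.queueEvent(() -> localPicture.setPicturePosition(x, y));
            return true;
        });
    }
}
```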
  • In the embodiment of the present application, a first renderer and a second renderer may be set in the GLSurfaceView, where the first renderer may be a local renderer configured to render locally captured video information, and the second renderer may be a remote renderer configured to render the video information sent remotely by the receiver of the video call.
  • S2: Configure a first rendering parameter for the first renderer, where the first rendering parameter includes at least the picture position and picture size of the video initiator.
  • After the first renderer has been set, a first rendering parameter can be configured for it.
  • The first rendering parameter is a rule for rendering video information, such as where the rendered picture is located and how large it is.
  • The first rendering parameter may include at least the picture position and picture size of the video initiator.
  • The video information of the video initiator may be captured by the local camera, fed into the first renderer, and rendered by the first renderer into the local video picture.
  • The position and size of the local video picture may then be defined by the first rendering parameter. For example, the local video picture can be located in the lower-left corner of the overall picture, with a size of 1/8 of the overall picture.
  • After the local camera has captured video information, it sends the captured video information to the first renderer at a certain frame rate.
  • The first renderer renders and images every received frame of video information and presents the rendered pictures to the user at the same frame rate, thereby providing the user with the video call picture. A sketch of a possible rendering-parameter representation follows.
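  • The patent does not fix a concrete data layout for a rendering parameter; the sketch below is one assumption, using normalized position/size values mapped to an OpenGL viewport. The "lower-left corner, 1/8 of the overall picture" example is interpreted here as half the width by a quarter of the height.

```java
import android.opengl.GLES20;

// Hypothetical holder for a picture position and size expressed as fractions
// of the drawing surface.
final class RenderParams {
    final float x;      // left edge, fraction of the surface width
    final float y;      // bottom edge, fraction of the surface height
    final float width;  // picture width, fraction of the surface width
    final float height; // picture height, fraction of the surface height

    RenderParams(float x, float y, float width, float height) {
        this.x = x;
        this.y = y;
        this.width = width;
        this.height = height;
    }

    // Restrict subsequent draw calls to the configured picture rectangle.
    void applyViewport(int surfaceWidth, int surfaceHeight) {
        GLES20.glViewport(
                (int) (x * surfaceWidth),
                (int) (y * surfaceHeight),
                (int) (width * surfaceWidth),
                (int) (height * surfaceHeight));
    }
}

// Example from the description: the initiator's picture in the lower-left corner,
// about 1/8 of the overall picture area.
// RenderParams initiatorParams = new RenderParams(0f, 0f, 0.5f, 0.25f);
```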
  • S3: Configure a second rendering parameter for the second renderer, where the second rendering parameter includes at least the picture position and picture size of the video receiver.
  • Likewise, a second rendering parameter can be configured for the second renderer.
  • The second rendering parameter is also a rule for rendering video information, such as where the rendered picture is located and how large it is.
  • The second rendering parameter may include at least the picture position and picture size of the video receiver.
  • The video information of the video receiver can be captured by the receiver's local camera and sent over the network to the communication address of the video initiator. When the video information of the video receiver reaches the video initiator, the second renderer renders it, and the rendered picture can be presented to the initiator of the video call for viewing.
  • The picture position and size of the video call receiver may be defined by the second rendering parameter. For example, the video picture of the video call receiver can be in the middle of the overall picture, with a size filling the whole picture.
  • The video information of the receiver of the video call can likewise be sent to the initiator of the video call at a certain frame rate; after being rendered by the second renderer, it can be presented to the initiator at the same frame rate, thereby forming the picture of the video call.
  • S4: When the video call is established, the first renderer and the second renderer draw each received frame according to the first rendering parameter and the second rendering parameter respectively.
  • After the receiver of the video call accepts the video call request, a video call is established between the video call initiator and the video call receiver.
  • At this point, each frame received by the second renderer may first be drawn according to the video receiver's picture position and picture size to obtain a second picture stream.
  • The pictures in the second picture stream are transmitted at a preset frame rate, which may be, for example, 24 frames/second or 30 frames/second.
  • Next, each frame received by the first renderer may be drawn according to the picture position and picture size of the video initiator to obtain a first picture stream.
  • The pictures in the first picture stream are also transmitted at a preset frame rate, which may be, for example, 24 frames/second or 30 frames/second.
  • During the video call, the second picture stream may fill the video call window as the background, and the first picture stream may float above the second picture stream, so that the second picture stream does not obscure the first picture stream. That is, after the second picture stream and the first picture stream are obtained, the first picture stream may be loaded onto the second picture stream to form the video picture.
  • As described above, the second picture stream can fill the video call window as the background, while the first picture stream can be located in the lower-left corner of the video call window and occupy 1/8 of the window. A composition sketch follows.
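  • A minimal sketch of that composition step: the receiver's stream fills the window as the background and the initiator's stream is overlaid in the lower-left corner afterwards so it is not obscured. drawRemoteFrame() and drawLocalFrame() stand in for the actual texture draws and are placeholders, not methods from the patent.

```java
import android.opengl.GLES20;

final class FrameComposer {
    private int surfaceWidth;
    private int surfaceHeight;

    void onSurfaceChanged(int width, int height) {
        surfaceWidth = width;
        surfaceHeight = height;
    }

    void onDrawFrame() {
        // Second picture stream: background, full window.
        GLES20.glViewport(0, 0, surfaceWidth, surfaceHeight);
        drawRemoteFrame();

        // First picture stream: overlay, lower-left corner, roughly 1/8 of the window area.
        GLES20.glViewport(0, 0, surfaceWidth / 2, surfaceHeight / 4);
        drawLocalFrame();
    }

    private void drawRemoteFrame() { /* draw the latest frame received from the video receiver */ }
    private void drawLocalFrame()  { /* draw the latest frame captured by the local camera */ }
}
```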
  • To keep the video call smooth, the video receiver's picture may be adjusted according to the network state.
  • Specifically, the embodiment of the present application can monitor the network state of the video initiator and, when the network state meets a preset condition, adjust the picture resolution of the video receiver to a preset resolution.
  • For example, when the network state is good and the network delay is below a preset threshold, the resolution of the current picture can be increased to improve its clarity.
  • Conversely, when the network state is poor and the network delay is above the preset threshold, the resolution of the current picture can be lowered to keep the picture fluent.
  • The embodiment of the present application may also monitor the network state of the video initiator and, when the network state meets the preset condition, adjust the picture rendering frame rate of the video receiver according to a preset rule. For example, when the network state is good and the network delay is lower than a preset threshold, the frame rate of the current picture may be increased to improve its smoothness. Conversely, when the network state is poor and the network delay is higher than the preset threshold, the frame rate of the current picture can be lowered to ensure that the picture is not interrupted.
  • Furthermore, the frame rate used for picture rendering can be restricted to lie between a minimum frame rate and a maximum frame rate, where the maximum frame rate is the largest frame rate needed to run the moving image perfectly and the minimum frame rate is the smallest frame rate that can be tolerated when running the moving image. That is, the frame rate used for picture rendering is compared with the minimum and maximum frame rates: if it is less than the minimum frame rate, it is set to the minimum frame rate; if it is greater than the maximum frame rate, it is set to the maximum frame rate.
  • The minimum frame rate may be, for example, 20 frames/second, and the maximum frame rate may be, for example, 60 frames/second.
  • The picture rendering frame rate of the video receiver may be adjusted according to the following formula:
  • δ = INT(1000*N/T)*k
  • where INT is the rounding function, δ is the adjusted picture rendering frame rate, N is the number of frames rendered each time, T is the time required to render the N frames, and k is the adjustment coefficient.
  • The adjustment coefficient can be adjusted according to the network state, and its range is between 0.1 and 1.
  • The embodiment of the present application may establish a relationship between the network delay and the adjustment coefficient, which may be expressed as an inverse proportional function: the higher the network delay, the smaller the corresponding adjustment coefficient; the lower the network delay, the larger the corresponding adjustment coefficient. A frame rate controller sketch follows.
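  • The sketch below transcribes the formula δ = INT(1000*N/T)*k together with the 20/60 frames-per-second bounds given earlier as examples. It assumes T is measured in milliseconds so that 1000*N/T is a rate in frames per second; how k is derived from the measured network delay is only described qualitatively (inverse proportional), so k is passed in.

```java
final class FrameRateController {
    private static final double MIN_FPS = 20.0;  // example minimum frame rate from the text
    private static final double MAX_FPS = 60.0;  // example maximum frame rate from the text

    private FrameRateController() {}

    /**
     * @param n       number of frames rendered in the last measurement window (N)
     * @param tMillis time in milliseconds taken to render those frames (T)
     * @param k       adjustment coefficient in [0.1, 1], smaller when the network delay is higher
     */
    static double adjustedFrameRate(int n, long tMillis, double k) {
        double delta = Math.floor(1000.0 * n / tMillis) * k;  // δ = INT(1000*N/T)*k, INT taken as floor
        // Keep the rendering frame rate between the tolerable minimum and the required maximum.
        return Math.max(MIN_FPS, Math.min(MAX_FPS, delta));
    }
}
```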
  • FIG. 2 is a functional block diagram of a video picture drawing device according to an embodiment of the present disclosure.
  • the apparatus may include:
  • a renderer creation module 100 configured to create a GLSurfaceView and a first renderer and a second renderer corresponding to the GLSurfaceView;
  • the first rendering parameter configuration module 200 is configured to configure a first rendering parameter for the first renderer, where the first rendering parameter includes at least a screen position and a screen size of the video initiator;
  • the second rendering parameter configuration module 300 is configured to configure a second rendering parameter for the second renderer, where the second rendering parameter includes at least a screen position and a screen size of the video receiver;
  • the drawing module 400 is configured to: when the video call is established, the first renderer and the second renderer respectively draw each received frame image according to the first rendering parameter and the second rendering parameter .
  • the drawing module 400 specifically includes:
  • a second picture stream obtaining module configured to: when a video call is established, draw a picture of each frame received by the second renderer according to a picture position and a picture size of the video receiver, to obtain a second picture stream;
  • a first picture stream obtaining module configured to draw, according to a picture position and a picture size of the video initiator, each frame of the picture received by the first renderer to obtain a first picture stream;
  • a loading module configured to load the first picture stream onto the second picture stream to form a video picture. After the loading module, the device further includes:
  • the touch command monitoring module is configured to monitor a touch command on the video screen, and move the position of the first picture stream or the second picture stream in response to the monitored touch command.
  • the device may further include:
  • the resolution adjustment module is configured to monitor a network status of the video initiator, and adjust a screen resolution of the video receiver to a preset resolution when the network status satisfies a preset condition.
  • the device may further include:
  • the frame rate adjustment module is configured to monitor a network state of the video initiator, and when the network state meets a preset condition, adjust a frame rendering frame rate of the video receiver according to a preset rule.
  • The picture rendering frame rate of the video receiver may be adjusted according to the following formula:
  • δ = INT(1000*N/T)*k
  • where INT is the rounding function, δ is the adjusted picture rendering frame rate, N is the number of frames rendered each time, T is the time required to render the N frames, and k is the adjustment coefficient.
  • The apparatus for drawing a video picture provided by the embodiment of the present application creates only one GLSurfaceView and generates two corresponding renderers under that GLSurfaceView.
  • One of the renderers is used to render the video call initiator's picture, and the other renderer is used to render the video call receiver's picture.
  • The position and size of the rendered pictures can be constrained by the preset picture position and picture size of the video call initiator and of the video receiver.
  • In this way, a single GLSurfaceView can draw the pictures of both parties of the video call, saving the resources of the mobile phone. Furthermore, by monitoring the network state during the video call, the resolution or the frame rate can be adjusted according to the actual network state to keep the video call smooth. In addition, by monitoring touch commands on the touch screen of the mobile phone, the picture positions of both parties can be adjusted according to the touch commands, making the video call convenient to use.
  • References to an element, component, or step should not be construed as being limited to only one of the element, component, or step, but may refer to one or more of the elements, components, or steps.
  • This application can be used in a variety of general-purpose or special-purpose computer system environments or configurations.
  • The technical solutions described above, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable recording medium, which includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
  • Such media, e.g. ROM/RAM, magnetic disks, and optical discs, include a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments or in portions of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and apparatus for drawing a video picture, wherein the method includes: creating a GLSurfaceView and a first renderer and a second renderer corresponding to the GLSurfaceView; configuring a first rendering parameter for the first renderer, the first rendering parameter including at least the picture position and picture size of the video initiator; configuring a second rendering parameter for the second renderer, the second rendering parameter including at least the picture position and picture size of the video receiver; and, when a video call is established, drawing each received frame with the first renderer and the second renderer according to the first rendering parameter and the second rendering parameter respectively. The embodiments of the present application provide a method and apparatus for drawing a video picture that can reduce the occupation of mobile phone resources so as to guarantee the quality of the video call.

Description

Method and apparatus for drawing a video picture
This application claims priority to Chinese Patent Application No. 201510934280.8, entitled "Method and apparatus for drawing a video picture" and filed with the Chinese Patent Office on December 15, 2015, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of image processing technologies, and in particular to a method and an apparatus for drawing a video picture.
Background Art
Since the birth of the computer, faithfully simulating the real world has been the ultimate goal of computer graphics. In computer graphics, rendering refers to the process of generating an image from a model by software. Rendering is the last important step of a graphics display operation; it produces the final displayed appearance of the model and the animation. Rendering technology is widely used in practical applications such as computer and video games, simulation, film and television effects, and visual design. According to how the result is rendered and displayed, rendering can be roughly divided into two categories: pre-rendering (offline rendering) and real-time rendering (online rendering). In pre-rendering, the developer places the content to be rendered on a server in advance for rendering; pre-rendering is computationally intensive and is usually used for complex scene processing, such as the production of visually striking 3D films. Real-time rendering requires a real-time experience, is often used in scenarios such as 3D games, and usually relies on hardware acceleration to complete the process.
At present, a picture is usually drawn with either local rendering or cloud rendering. In local rendering, the hardware of the user equipment (UE), such as the central processing unit (CPU) and the graphics processing unit (GPU), renders the model; after rendering finishes, the display device fetches the rendering result and displays it. In cloud rendering, the operations of the user equipment are moved to the cloud, and the final result is transmitted to the user equipment as pictures for display.
With the continuous development of communication technologies, people have become increasingly accustomed to making video calls on mobile phones, and the image rendering techniques described above are also involved in a video call. A video call usually involves the initiator of the video call and the receiver of the video call. In the prior art, the picture of a video call may be drawn as follows:
First, separate GLSurfaceViews are created for the initiator and the receiver of the video call, and the images of the initiator and the receiver are then drawn through these two GLSurfaceViews respectively, so that the video call can be carried out.
However, the prior art described above has the following drawbacks: creating two GLSurfaceViews inevitably consumes more memory and more CPU, wasting mobile phone resources; in addition, when the picture is drawn on the basis of two GLSurfaceViews, the positions of the resulting pictures are usually inconvenient to adjust.
Summary of the Invention
The embodiments of the present application provide a method and apparatus for drawing a video picture that can reduce the occupation of mobile phone resources so as to guarantee the quality of a video call.
An embodiment of the present application provides a method for drawing a video picture, including: creating a GLSurfaceView and a first renderer and a second renderer corresponding to the GLSurfaceView; configuring a first rendering parameter for the first renderer, the first rendering parameter including at least the picture position and picture size of the video initiator; configuring a second rendering parameter for the second renderer, the second rendering parameter including at least the picture position and picture size of the video receiver; and, when a video call is established, drawing each received frame with the first renderer and the second renderer according to the first rendering parameter and the second rendering parameter respectively.
An embodiment of the present application provides an apparatus for drawing a video picture, including: a renderer creation module configured to create a GLSurfaceView and a first renderer and a second renderer corresponding to the GLSurfaceView; a first rendering parameter configuration module configured to configure a first rendering parameter for the first renderer, the first rendering parameter including at least the picture position and picture size of the video initiator; a second rendering parameter configuration module configured to configure a second rendering parameter for the second renderer, the second rendering parameter including at least the picture position and picture size of the video receiver; and a drawing module configured to, when a video call is established, cause the first renderer and the second renderer to draw each received frame according to the first rendering parameter and the second rendering parameter respectively.
An embodiment of the present application further provides a computer-readable recording medium on which a program for executing the above method is recorded.
In the method and apparatus for drawing a video picture provided by the embodiments of the present application, only one GLSurfaceView is created, and two corresponding renderers are generated under that GLSurfaceView. One renderer is used to render the picture of the video call initiator, and the other is used to render the picture of the video call receiver. The position and size of the rendered pictures can be constrained by the preset picture position and picture size of the video call initiator and the preset picture position and picture size of the video receiver. In this way, the pictures of both parties of the video call can be drawn through a single GLSurfaceView, saving the resources of the mobile phone. Furthermore, by monitoring the network state during the video call, the resolution or the frame rate can be adjusted according to the actual network state to keep the video call smooth. In addition, by monitoring touch commands on the touch screen of the mobile phone, the picture positions of both parties can be adjusted according to the touch commands, making the video call convenient to use.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for drawing a video picture according to an embodiment of the present application;
FIG. 2 is a functional block diagram of an apparatus for drawing a video picture according to an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
FIG. 1 is a flowchart of a method for drawing a video picture according to an embodiment of the present application. Although the process described below includes multiple operations occurring in a particular order, it should be clearly understood that the process may include more or fewer operations, which may be performed sequentially or in parallel (for example, using a parallel processor or a multi-threaded environment). As shown in FIG. 1, the method may include:
S1: Create a GLSurfaceView and a first renderer and a second renderer corresponding to the GLSurfaceView.
The GLSurfaceView is a view whose embedded surface is responsible for OpenGL rendering. The GLSurfaceView typically provides the following features:
1) It manages a surface, which may be a block of memory that can be composited directly into the Android view hierarchy;
2) It manages an EGL display, which renders content onto the above surface;
3) It supports user-defined renderers;
4) It runs the renderer on a dedicated thread, decoupling the rendering thread from the UI thread;
5) It supports both on-demand and continuous rendering.
After the GLSurfaceView has been created, it can be initialized. Specifically, because a GLSurfaceView is created with a number of default configurations that usually need not be modified, initialization mainly consists of setting a preset number of renderers in the GLSurfaceView so as to render the pictures of the initiator and the receiver of the video call respectively. Specifically, in the embodiment of the present application a renderer can be set with the setRenderer(Renderer) call.
In addition, the GLSurfaceView creates a surface with the pixel format PixelFormat.RGB_565 by default. The user can of course change this pixel format as needed; for example, the transparency effect can be changed by calling getHolder().setFormat(PixelFormat.TRANSLUCENT). A transparent surface uses a 32-bit pixel format with 8 bits of depth per color channel, which means the pixel format may be ARGB or RGBA.
Android devices usually support multiple EGL configurations. In the embodiment of the present application, a different number of channels may be used, and each channel may be given a different bit depth; therefore, the EGL configuration should be specified before the renderer starts working. The default EGL configuration of the GLSurfaceView has an RGB_565 pixel format and a 16-bit depth buffer, and the stencil buffer is disabled by default. If a different EGL configuration is needed, it can be replaced by calling setEGLConfigChooser.
After the configuration parameters of the GLSurfaceView have been modified and the preset number of renderers has been set, the rendering mode of the renderers can be specified. In the embodiment of the present application, because the picture of the video call needs to be rendered in real time, the rendering mode of the renderers may be set to continuous rendering.
When the GLSurfaceView is used to draw the picture of a video call, the picture can be drawn interactively or non-interactively. Specifically, a video call picture obtained by non-interactive drawing cannot interact with the user; for example, it cannot be adjusted in response to the user's touch commands. In a preferred embodiment of the present application, in order to adjust the position of the video picture according to the user's touch commands, the picture of the video call may be drawn interactively. Specifically, because the rendered objects live on a separate rendering thread, a cross-thread mechanism is needed to handle events when interaction with the rendered objects is required. Specifically, in the embodiment of the present application this can be set up with the queueEvent(Runnable) call. In this way, the video picture drawn by the GLSurfaceView can respond to the user's touch commands and interact with the user.
In the embodiment of the present application, a first renderer and a second renderer may be set in the GLSurfaceView, where the first renderer may be a local renderer configured to render locally captured video information, and the second renderer may be a remote renderer configured to render the video information sent remotely by the receiver of the video call.
S2: Configure a first rendering parameter for the first renderer, the first rendering parameter including at least the picture position and picture size of the video initiator.
After the first renderer has been set, a first rendering parameter can be configured for it. The first rendering parameter is a rule for rendering video information, such as where the rendered picture is located and how large it is. The first rendering parameter may include at least the picture position and picture size of the video initiator. The video information of the video initiator may be captured by the local camera, fed into the first renderer, and rendered by the first renderer into the local video picture. The position and size of the local video picture are then defined by the first rendering parameter. For example, the local video picture may be located in the lower-left corner of the overall picture and occupy 1/8 of the overall picture.
After the local camera has captured video information, it sends the captured video information to the first renderer at a certain frame rate. The first renderer renders and images every received frame of video information and presents the rendered pictures to the user at the same frame rate, thereby providing the user with the picture of the video call.
S3: Configure a second rendering parameter for the second renderer, the second rendering parameter including at least the picture position and picture size of the video receiver.
Likewise, after the second renderer has been set, a second rendering parameter can be configured for it. The second rendering parameter is also a rule for rendering video information, such as where the rendered picture is located and how large it is. The second rendering parameter may include at least the picture position and picture size of the video receiver. The video information of the video receiver may be captured by the receiver's local camera and sent over the network to the communication address of the video initiator. When the video information of the video receiver reaches the video initiator, the second renderer renders it, and the rendered picture can be presented to the initiator of the video call for viewing. The picture position and size of the video call receiver are then defined by the second rendering parameter. For example, the video picture of the video call receiver may be in the middle of the overall picture and fill the whole picture.
The video information of the receiver of the video call may likewise be sent to the initiator of the video call at a certain frame rate; after being rendered by the second renderer, it can be presented to the initiator of the video call at the same frame rate, thereby forming the picture of the video call.
S4: When the video call is established, the first renderer and the second renderer draw each received frame according to the first rendering parameter and the second rendering parameter respectively.
After the receiver of the video call accepts the video call request, a video call is established between the video call initiator and the video call receiver. At this point, each frame received by the second renderer may first be drawn according to the picture position and picture size of the video receiver to obtain a second picture stream. The pictures in the second picture stream are transmitted at a preset frame rate, which may be, for example, 24 frames/second or 30 frames/second. After the second picture stream has been obtained, each frame received by the first renderer may be drawn according to the picture position and picture size of the video initiator to obtain a first picture stream. Likewise, the pictures in the first picture stream are transmitted at a preset frame rate, for example 24 frames/second or 30 frames/second. During the video call, the second picture stream usually fills the video call window as the background, while the first picture stream floats above the second picture stream, so that the second picture stream does not obscure the first picture stream. That is, after the second picture stream and the first picture stream have been obtained, the first picture stream may be loaded onto the second picture stream to form the video picture. As described above, the second picture stream may fill the video call window as the background, while the first picture stream may be located in the lower-left corner of the video call window and occupy 1/8 of the window.
During a video call, the resolution of each frame and the frame rate at which consecutive frames are transmitted largely determine the quality of the video picture. The higher the resolution of each frame, the clearer the video picture, but the more network resources are occupied and the better the network state must be. Similarly, the higher the frame rate between consecutive frames, the smoother the video picture, but again more network resources are occupied and the requirements on the network state are higher. Therefore, in a preferred embodiment of the present application, in order to keep the video call smooth, the picture of the video receiver may be adjusted according to the network state. Specifically, the embodiment of the present application may monitor the network state of the video initiator and, when the network state satisfies a preset condition, adjust the picture resolution of the video receiver to a preset resolution. For example, when the network state is good and the network delay is below a preset threshold, the resolution of the current picture may be increased to improve its clarity; conversely, when the network state is poor and the network delay is above the preset threshold, the resolution of the current picture may be reduced to keep the picture fluent.
Similarly, the embodiment of the present application may also monitor the network state of the video initiator and, when the network state satisfies a preset condition, adjust the picture rendering frame rate of the video receiver according to a preset rule. For example, when the network state is good and the network delay is below a preset threshold, the frame rate of the current picture may be increased to improve its smoothness; conversely, when the network state is poor and the network delay is above the preset threshold, the frame rate of the current picture may be reduced to keep the picture from being interrupted. In practice, to keep the rendering quality of the moving image within a controllable range, the frame rate used for picture rendering may further be restricted to lie between a minimum frame rate and a maximum frame rate, where the maximum frame rate is the largest frame rate needed to run the moving image perfectly and the minimum frame rate is the smallest frame rate that can be tolerated when running the moving image. That is, the frame rate used for picture rendering is compared with the minimum and maximum frame rates: if it is lower than the minimum frame rate, it is set to the minimum frame rate; if it is higher than the maximum frame rate, it is set to the maximum frame rate. The minimum frame rate may be, for example, 20 frames/second, and the maximum frame rate may be, for example, 60 frames/second.
In a preferred embodiment of the present application, the picture rendering frame rate of the video receiver may be adjusted according to the following formula:
δ=INT(1000*N/T)*k
where INT is the rounding function, δ is the adjusted picture rendering frame rate, N is the number of frames rendered each time, T is the time required to render the N frames, and k is the adjustment coefficient. The adjustment coefficient may be adjusted according to the network state and ranges from 0.1 to 1. The embodiment of the present application may establish a relationship between the network delay and the adjustment coefficient, which may be expressed as an inverse proportional function: the higher the network delay, the smaller the corresponding adjustment coefficient; the lower the network delay, the larger the corresponding adjustment coefficient.
As can be seen from the above, in the method for drawing a video picture provided by the embodiment of the present application, only one GLSurfaceView is created and two corresponding renderers are generated under it. One renderer renders the picture of the video call initiator and the other renders the picture of the video call receiver. The position and size of the rendered pictures can be constrained by the preset picture position and picture size of the video call initiator and of the video receiver. In this way, the pictures of both parties of the video call can be drawn through a single GLSurfaceView, saving the resources of the mobile phone. Moreover, by monitoring the network state during the video call, the resolution or the frame rate can be adjusted according to the actual network state to keep the video call smooth. In addition, by monitoring touch commands on the touch screen of the mobile phone, the picture positions of both parties can be adjusted according to the touch commands, making the video call convenient to use.
An embodiment of the present application further provides an apparatus for drawing a video picture. FIG. 2 is a functional block diagram of an apparatus for drawing a video picture according to an embodiment of the present application. As shown in FIG. 2, the apparatus may include:
a renderer creation module 100 configured to create a GLSurfaceView and a first renderer and a second renderer corresponding to the GLSurfaceView;
a first rendering parameter configuration module 200 configured to configure a first rendering parameter for the first renderer, the first rendering parameter including at least the picture position and picture size of the video initiator;
a second rendering parameter configuration module 300 configured to configure a second rendering parameter for the second renderer, the second rendering parameter including at least the picture position and picture size of the video receiver;
a drawing module 400 configured to, when a video call is established, cause the first renderer and the second renderer to draw each received frame according to the first rendering parameter and the second rendering parameter respectively.
In a preferred embodiment of the present application, the drawing module 400 specifically includes:
a second picture stream obtaining module configured to, when a video call is established, draw each frame received by the second renderer according to the picture position and picture size of the video receiver to obtain a second picture stream;
a first picture stream obtaining module configured to draw each frame received by the first renderer according to the picture position and picture size of the video initiator to obtain a first picture stream;
a loading module configured to load the first picture stream onto the second picture stream to form a video picture.
After the loading module, the apparatus further includes:
a touch command monitoring module configured to monitor touch commands on the video picture and, in response to a detected touch command, move the position of the first picture stream or the second picture stream.
In another preferred embodiment of the present application, the apparatus may further include:
a resolution adjustment module configured to monitor the network state of the video initiator and, when the network state satisfies a preset condition, adjust the picture resolution of the video receiver to a preset resolution.
In another preferred embodiment of the present application, the apparatus may further include:
a frame rate adjustment module configured to monitor the network state of the video initiator and, when the network state satisfies a preset condition, adjust the picture rendering frame rate of the video receiver according to a preset rule.
Specifically, in the embodiment of the present application the picture rendering frame rate of the video receiver may be adjusted according to the following formula:
δ=INT(1000*N/T)*k
where INT is the rounding function, δ is the adjusted picture rendering frame rate, N is the number of frames rendered each time, T is the time required to render the N frames, and k is the adjustment coefficient.
It should be noted that the specific implementation of each functional module in the embodiment of the present application is consistent with steps S1 to S4 and is not repeated here.
As can be seen from the above, in the apparatus for drawing a video picture provided by the embodiment of the present application, only one GLSurfaceView is created and two corresponding renderers are generated under it. One renderer renders the picture of the video call initiator and the other renders the picture of the video call receiver. The position and size of the rendered pictures can be constrained by the preset picture position and picture size of the video call initiator and of the video receiver. In this way, the pictures of both parties of the video call can be drawn through a single GLSurfaceView, saving the resources of the mobile phone. Moreover, by monitoring the network state during the video call, the resolution or the frame rate can be adjusted according to the actual network state to keep the video call smooth. In addition, by monitoring touch commands on the touch screen of the mobile phone, the picture positions of both parties can be adjusted according to the touch commands, making the video call convenient to use.
In this specification, adjectives such as "first" and "second" are used only to distinguish one element or action from another and do not necessarily require or imply any actual relationship or order between them. Where the context permits, a reference to an element, component, or step should not be construed as being limited to only one of the element, component, or step, but may refer to one or more of them.
The embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly because they are essentially similar to the method embodiments; for relevant details, refer to the description of the method embodiments.
The present application can be used in numerous general-purpose or special-purpose computer system environments or configurations, for example: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
Based on this understanding, the technical solutions above, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable recording medium, which includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), such as ROM/RAM, magnetic disks, or optical discs, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments or in parts of the embodiments.
Finally, it should be noted that the above description of various embodiments of the present application is provided to a person skilled in the art for the purpose of description. It is not intended to be exhaustive or to limit the present application to a single disclosed embodiment. As described above, various alternatives and variations of the present application will be apparent to those skilled in the art to which the above technology belongs. Therefore, although some alternative embodiments have been discussed specifically, other embodiments will be apparent or may be derived relatively easily by those skilled in the art. The present application is intended to cover all alternatives, modifications, and variations of the present application discussed herein, as well as other embodiments falling within the spirit and scope of the above application.

Claims (11)

  1. A method for drawing a video picture, comprising:
    creating a GLSurfaceView and a first renderer and a second renderer corresponding to the GLSurfaceView;
    configuring a first rendering parameter for the first renderer, the first rendering parameter comprising at least a picture position and a picture size of a video initiator;
    configuring a second rendering parameter for the second renderer, the second rendering parameter comprising at least a picture position and a picture size of a video receiver; and
    when a video call is established, drawing, by the first renderer and the second renderer, each received frame according to the first rendering parameter and the second rendering parameter respectively.
  2. The method for drawing a video picture according to claim 1, wherein the first renderer is a local renderer and the second renderer is a remote renderer.
  3. The method for drawing a video picture according to claim 1, wherein drawing, by the first renderer and the second renderer, each received frame according to the first rendering parameter and the second rendering parameter respectively when the video call is established specifically comprises:
    when the video call is established, drawing each frame received by the second renderer according to the picture position and picture size of the video receiver to obtain a second picture stream;
    drawing each frame received by the first renderer according to the picture position and picture size of the video initiator to obtain a first picture stream; and
    loading the first picture stream onto the second picture stream to form a video picture.
  4. The method for drawing a video picture according to claim 3, wherein after the second picture stream is obtained, the method further comprises:
    monitoring a network state of the video initiator, and when the network state satisfies a preset condition, adjusting a picture resolution of the video receiver to a preset resolution.
  5. The method for drawing a video picture according to claim 3, wherein after the second picture stream is obtained, the method further comprises:
    monitoring a network state of the video initiator, and when the network state satisfies a preset condition, adjusting a picture rendering frame rate of the video receiver according to a preset rule.
  6. The method for drawing a video picture according to claim 5, wherein the picture rendering frame rate of the video receiver is adjusted according to the following formula:
    δ=INT(1000*N/T)*k
    where INT is the rounding function, δ is the adjusted picture rendering frame rate, N is the number of frames rendered each time, T is the time required to render the N frames, and k is the adjustment coefficient.
  7. The method for drawing a video picture according to claim 3, wherein after the first picture stream is loaded onto the second picture stream to form the video picture, the method further comprises:
    monitoring a touch command on the video picture, and moving the position of the first picture stream or the second picture stream in response to the detected touch command.
  8. An apparatus for drawing a video picture, comprising:
    a renderer creation module configured to create a GLSurfaceView and a first renderer and a second renderer corresponding to the GLSurfaceView;
    a first rendering parameter configuration module configured to configure a first rendering parameter for the first renderer, the first rendering parameter comprising at least a picture position and a picture size of a video initiator;
    a second rendering parameter configuration module configured to configure a second rendering parameter for the second renderer, the second rendering parameter comprising at least a picture position and a picture size of a video receiver; and
    a drawing module configured to, when a video call is established, cause the first renderer and the second renderer to draw each received frame according to the first rendering parameter and the second rendering parameter respectively.
  9. The apparatus for drawing a video picture according to claim 8, wherein the drawing module specifically comprises:
    a second picture stream obtaining module configured to, when the video call is established, draw each frame received by the second renderer according to the picture position and picture size of the video receiver to obtain a second picture stream;
    a first picture stream obtaining module configured to draw each frame received by the first renderer according to the picture position and picture size of the video initiator to obtain a first picture stream; and
    a loading module configured to load the first picture stream onto the second picture stream to form a video picture.
  10. The apparatus for drawing a video picture according to claim 9, wherein, after the loading module, the apparatus further comprises:
    a touch command monitoring module configured to monitor a touch command on the video picture and move the position of the first picture stream or the second picture stream in response to the detected touch command.
  11. A computer-readable recording medium on which a program for executing the method according to claim 1 is recorded.
PCT/CN2016/088195 2015-12-15 2016-07-01 一种视频画面的绘制方法及装置 WO2017101303A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510934280.8 2015-12-15
CN201510934280.8A CN105916052A (zh) 2015-12-15 2015-12-15 一种视频画面的绘制方法及装置

Publications (1)

Publication Number Publication Date
WO2017101303A1 true WO2017101303A1 (zh) 2017-06-22

Family

ID=56744080

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/088195 WO2017101303A1 (zh) 2015-12-15 2016-07-01 一种视频画面的绘制方法及装置

Country Status (2)

Country Link
CN (1) CN105916052A (zh)
WO (1) WO2017101303A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109285211A (zh) * 2018-10-29 2019-01-29 Oppo广东移动通信有限公司 画面渲染方法、装置、终端及存储介质
CN111626915A (zh) * 2020-05-29 2020-09-04 大陆汽车车身电子系统(芜湖)有限公司 一种图像显示方法
CN115942131A (zh) * 2023-02-09 2023-04-07 蔚来汽车科技(安徽)有限公司 保障车辆环视功能的方法、座舱系统及车辆、存储介质

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107124637A (zh) * 2017-05-18 2017-09-01 北京视博云科技有限公司 在轻终端上进行信息通告的方法及装置、计算机存储介质
CN107888970A (zh) * 2017-11-29 2018-04-06 天津聚飞创新科技有限公司 视频处理方法、装置、嵌入式设备及存储介质
CN108184054B (zh) * 2017-12-28 2020-12-08 上海传英信息技术有限公司 一种用于智能终端拍摄图像的预处理方法及预处理装置
CN114071229B (zh) * 2021-12-08 2023-06-09 四川启睿克科技有限公司 一种解决SurfaceView渲染器重载视频解码时回收延迟的方法
CN115619911B (zh) * 2022-10-26 2023-08-08 润芯微科技(江苏)有限公司 基于Unreal Engine的虚拟形象生成方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886853A (zh) * 2012-12-19 2014-06-25 巴科股份有限公司 显示墙布局优化
CN103916621A (zh) * 2013-01-06 2014-07-09 腾讯科技(深圳)有限公司 视频通信方法及装置
WO2014184956A1 (ja) * 2013-05-17 2014-11-20 三菱電機株式会社 映像合成装置および方法
WO2015104849A1 (en) * 2014-01-09 2015-07-16 Square Enix Holdings Co., Ltd. Video gaming device with remote rendering capability
CN104811785A (zh) * 2015-04-01 2015-07-29 乐视致新电子科技(天津)有限公司 智能终端的显示图形用户界面的控制方法及装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1901668B (zh) * 2005-07-19 2012-05-23 腾讯科技(深圳)有限公司 多人视频数据显示处理方法及系统
CN102521020B (zh) * 2011-10-26 2014-05-21 华为终端有限公司 用于移动终端的应用屏幕截图方法和装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886853A (zh) * 2012-12-19 2014-06-25 巴科股份有限公司 显示墙布局优化
CN103916621A (zh) * 2013-01-06 2014-07-09 腾讯科技(深圳)有限公司 视频通信方法及装置
WO2014184956A1 (ja) * 2013-05-17 2014-11-20 三菱電機株式会社 映像合成装置および方法
WO2015104849A1 (en) * 2014-01-09 2015-07-16 Square Enix Holdings Co., Ltd. Video gaming device with remote rendering capability
CN104811785A (zh) * 2015-04-01 2015-07-29 乐视致新电子科技(天津)有限公司 智能终端的显示图形用户界面的控制方法及装置

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109285211A (zh) * 2018-10-29 2019-01-29 Oppo广东移动通信有限公司 画面渲染方法、装置、终端及存储介质
CN109285211B (zh) * 2018-10-29 2023-03-31 Oppo广东移动通信有限公司 画面渲染方法、装置、终端及存储介质
CN111626915A (zh) * 2020-05-29 2020-09-04 大陆汽车车身电子系统(芜湖)有限公司 一种图像显示方法
CN111626915B (zh) * 2020-05-29 2024-03-26 大陆汽车车身电子系统(芜湖)有限公司 一种图像显示方法
CN115942131A (zh) * 2023-02-09 2023-04-07 蔚来汽车科技(安徽)有限公司 保障车辆环视功能的方法、座舱系统及车辆、存储介质
CN115942131B (zh) * 2023-02-09 2023-09-01 蔚来汽车科技(安徽)有限公司 保障车辆环视功能的方法、座舱系统及车辆、存储介质

Also Published As

Publication number Publication date
CN105916052A (zh) 2016-08-31

Similar Documents

Publication Publication Date Title
WO2017101303A1 (zh) 一种视频画面的绘制方法及装置
TWI803590B (zh) 藉由所關注區域之制定的異步時間及空間翹曲
US11109011B2 (en) Virtual reality with interactive streaming video and likelihood-based foveation
US10818081B2 (en) Dynamic lighting for objects in images
US10298903B2 (en) Method and device for processing a part of an immersive video content according to the position of reference parts
US11119719B2 (en) Screen sharing for display in VR
JP6563024B2 (ja) 出力デバイスへのクラウドゲームデータストリーム及びネットワーク特性の動的な調節
CN112671994A (zh) 视频通话期间实现的方法、用户终端及可读存储介质
TW201501761A (zh) 以遊戲者之注意區域爲基礎改善視訊串流的速率控制位元分配
US11523185B2 (en) Rendering video stream in sub-area of visible display area
WO2022262618A1 (zh) 一种屏保交互方法、装置、电子设备和存储介质
EP3268930B1 (en) Method and device for processing a peripheral image
WO2018045789A1 (zh) 图像灰度值调整方法和装置
US11924442B2 (en) Generating and displaying a video stream by omitting or replacing an occluded part
WO2022057782A1 (zh) 用于头戴显示设备的图像处理方法、装置及电子设备
US11962867B2 (en) Asset reusability for lightfield/holographic media
US20210360236A1 (en) System and method for encoding a block-based volumetric video having a plurality of video frames of a 3d object into a 2d video format
US10152818B2 (en) Techniques for stereo three dimensional image mapping
US9924150B2 (en) Techniques for stereo three dimensional video processing
WO2021199128A1 (ja) 画像データ転送装置、画像生成方法およびコンピュータプログラム
CN116193216A (zh) 特效视频帧的生成方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16874366

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16874366

Country of ref document: EP

Kind code of ref document: A1