WO2017092334A1 - Method and device for image rendering processing - Google Patents

Method and device for image rendering processing (一种图像渲染处理的方法及装置)

Info

Publication number
WO2017092334A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
state
generate
sequence
view
Prior art date
Application number
PCT/CN2016/089271
Other languages
English (en)
French (fr)
Inventor
胡雪莲
Original Assignee
乐视控股(北京)有限公司
乐视致新电子科技(天津)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视致新电子科技(天津)有限公司
Priority to US15/246,396 (published as US20170160795A1)
Publication of WO2017092334A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • The present invention relates to the field of virtual reality technology, and in particular, to a method of image rendering processing and an apparatus for image rendering processing.
  • Virtual reality (VR), also known in Chinese as lingjing (灵境) technology, is a multi-dimensional sensory environment of vision, hearing, touch, and so on, generated in whole or in part by computer.
  • Through auxiliary sensing devices such as head-mounted displays and data gloves, it provides a multi-dimensional human-machine interface for observing and interacting with the virtual environment, so that users can enter the virtual environment, directly observe the internal changes of things and interact with them, gaining an "immersive" sense of reality.
  • With the rapid development of VR technology, VR theater systems based on mobile terminals have also developed rapidly.
  • In a mobile-terminal-based VR theater system, head tracking is used to change the viewing angle of the image, so that the user's visual system and motion-sensing system are linked and the experience feels more realistic.
  • Specifically, before each image frame of the video is displayed on the screen, the system must collect the state of the user's head, calculate the field of view angle, render the scene and video according to that angle, and perform anti-distortion, anti-dispersion, and time-warp (TimeWarp) processing.
  • However, the inventor found in the course of implementing the present invention that collecting the head state, calculating the field of view angle, and rendering the scene and video according to that angle all take time. When the user's head rotates, there is therefore a deviation between the field of view angle at the start of rendering and the angle at the end of rendering, so the image actually displayed by the mobile terminal deviates from the image that should be displayed for the user's current position; the user's eyes actually see a scene image offset from the current position, and the user feels dizzy while watching.
  • The longer an image frame's display is delayed and the faster the head turns, the larger the field-of-view deviation between the start and end of rendering, the larger the offset of the scene image the user actually sees, and the stronger the vertigo when watching the video; that is, the image display effect is poor, which degrades video playback.
  • Clearly, in existing mobile-terminal-based VR theater systems, the field-of-view deviation of an image frame between the start and end of rendering causes a large deviation between the scene image actually displayed by the mobile terminal and the image that should be displayed for the user's current position.
  • The technical problem to be solved by embodiments of the present invention is to provide a method of image rendering processing that reduces the field-of-view deviation of an image frame between the start and end of rendering, thereby solving the poor image display caused by that deviation.
  • Correspondingly, an embodiment of the present invention further provides an apparatus for image rendering processing, to ensure the implementation and application of the foregoing method.
  • An embodiment of the present invention discloses a method of image rendering processing, including: performing state detection on a target head to generate a target state sequence; when it is determined that the target head has entered a moving state, simulating the target state sequence to generate a fitted curve; determining a field of view angle of a target scene according to a pre-generated frame delay time and the fitted curve; and rendering the target scene based on the field of view angle to generate a rendered image.
  • An embodiment of the present invention further discloses an apparatus for image rendering processing, including:
  • a state sequence generating module, configured to perform state detection on the target head and generate a target state sequence;
  • a fitted curve generating module, configured to simulate the target state sequence to generate a fitted curve when it is determined that the target head has entered a moving state;
  • a field of view angle determining module, configured to determine the field of view angle of the target scene according to the pre-generated frame delay time and the fitted curve;
  • a rendered image generating module, configured to render the target scene based on the field of view angle to generate a rendered image.
  • Embodiments of the present invention provide a computer program comprising computer readable code that, when executed on a mobile terminal, causes the mobile terminal to perform the method of image rendering processing described above.
  • Embodiments of the present invention provide a computer readable medium in which the above computer program is stored.
  • Compared with the prior art, embodiments of the invention include the following advantages:
  • An embodiment of the present invention generates a target state sequence by detecting the state of the target head and, when the target head is judged to have entered a moving state, generates a fitted curve by simulating the target state sequence; the field of view angle of the target scene is determined from the frame delay time and the fitted curve. That is, the fitted curve predicts the movement of the target head and compensates for an estimated field-of-view deviation, thereby effectively reducing the deviation of an image frame's field of view angle between the start and end of rendering.
  • This effectively relieves the user's vertigo during fast head movement; that is, a better image display effect is obtained and the user experience is improved.
  • FIG. 1 is a flow chart showing the steps of an embodiment of a method for image rendering processing of the present invention
  • FIG. 2 is a flow chart showing the steps of a preferred embodiment of a method for image rendering processing of the present invention
  • FIG. 3A is a structural block diagram of an embodiment of an apparatus for image rendering processing of the present invention.
  • FIG. 3B is a structural block diagram of a preferred embodiment of an apparatus for image rendering processing according to the present invention.
  • Figure 4 shows schematically a block diagram of a mobile terminal for carrying out the method according to the invention
  • Fig. 5 schematically shows a storage unit for holding or carrying program code implementing the method according to the invention.
  • One of the core concepts of the embodiments of the present invention is to generate a fitted curve by detecting the state of the user's head and to determine the field of view angle of the target scene according to the frame delay time and the fitted curve; that is, the fitted curve predicts the moving state of the target head and compensates for an estimated field-of-view deviation, thereby effectively reducing the deviation of an image frame's field of view angle between the start and end of rendering, relieving the user's dizziness during fast head movement, and obtaining a better image display effect.
  • Referring to FIG. 1, a flow chart of the steps of an embodiment of a method of image rendering processing of the present invention is shown; the method may specifically include the following steps.
  • Step 101: Perform state detection on the target head to generate a target state sequence.
  • In a mobile-terminal-based VR theater system, head tracking is used to change the viewing angle of the image, linking the user's visual and motion-sensing systems for a more realistic experience.
  • Typically, the position of the user's head can be tracked by a position tracker to determine the motion state of the head.
  • A position tracker is a device for spatial tracking and positioning. It is generally used in combination with other VR devices, such as data helmets, stereo glasses, and data gloves, so that participants can move and rotate freely in space rather than being limited to a fixed position.
  • A mobile-terminal-based VR system can detect the state of the user's head, determine the field of view angle of the image based on that state, and render the image according to the determined angle to obtain a better display effect.
  • Here, a mobile terminal is a computing device that can be used while moving, such as a smartphone, notebook computer, or tablet; embodiments of the present invention place no limitation on this.
  • The embodiments of the present invention are described in detail below taking a mobile phone as an example, but this should not be construed as limiting the embodiments of the present invention.
  • As a specific example of an embodiment of the present invention, a phone-based VR system can monitor the movement of the user's head through auxiliary sensing devices such as a data helmet, stereo glasses, and data gloves; the monitored head is taken as the target head.
  • State detection is performed on the target head, so that the state information of the target head relative to the phone's display can be determined. Based on this state information, the state data corresponding to the user's current state can be obtained by calculation. For example, after the user puts on the data helmet, by monitoring the rotation of the user's head (i.e., the target head), the angle of the target head relative to the phone's display can be calculated; that is, state data is generated.
  • Specifically, the calculation may use any one or more of the head orientation, moving direction, speed, and other data corresponding to the user's current state, producing the angle of the target head relative to the phone's display.
  • The VR system can store the generated state data into a corresponding state sequence to generate the target state sequence for that head; for example, the angles of target head A relative to the phone's display at different times are stored in turn into the corresponding state sequence, forming the target state sequence LA corresponding to head A.
  • The target state sequence LA can store n state data, where n is a positive integer such as 10, 30, or 50; embodiments of the present invention place no limitation on this.
  • In a preferred embodiment of the present invention, the above step 101 may include the following sub-steps (a minimal sketch follows the list):
  • Sub-step 1010: Acquire the data collected by the sensor and generate state data corresponding to the target head.
  • Sub-step 1012: Generate the target state sequence using the generated state data.
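  • As an illustrative sketch only (not part of the original disclosure), sub-steps 1010 and 1012 might look as follows in Python, assuming the state datum is the head angle in degrees relative to the display and that the sequence keeps the 30 most recent entries, as the preferred embodiment below suggests; the class and method names are invented for illustration:

```python
from collections import deque
import time

class TargetStateSequence:
    """Bounded sequence LA of (timestamp, angle) state data."""

    def __init__(self, capacity=30):
        # deque(maxlen=...) drops the oldest entry when full, so the
        # sequence always holds the newest `capacity` state data.
        self.samples = deque(maxlen=capacity)

    def add_state(self, angle_deg):
        # Each state datum is timestamped so a curve N = S(t) can
        # later be fitted over the sequence.
        self.samples.append((time.monotonic(), angle_deg))
```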
  • Step 103: When it is determined that the target head has entered a moving state, simulate the target state sequence to generate a fitted curve.
  • In practice, whether the target head has entered a moving state can be judged by monitoring its rotation in real time, that is, by judging whether the target head has moved relative to the phone's display.
  • Specifically, whether the target head has entered a moving state may be determined according to its state data. For example, one may check whether the angle of the target head relative to the phone's display changes: when the angle changes, the target head is judged to have entered a moving state; when the angle stays constant, the target head is judged not to have entered a moving state, i.e., it is stationary relative to the display.
  • When the target head enters a moving state, the phone-based VR system can invoke a preset simulation algorithm to simulate the target state sequence and generate a fitted curve N = S(t) for the target head, where N is the state data and t is time.
  • Through the fitted curve, the system can calculate the state data N of the target head at any time t; that is, by using the fitted curve, the state data of the target head at the next frame can be predicted by calculation.
  • For example, to obtain the head state at 50 seconds, compute the value of S(t) at t = 50 seconds; if S(50 s) = 150 degrees, the angle of the target head relative to the display at 50 seconds is determined to be 150 degrees.
  • Optionally, the step of simulating the target state sequence to generate the fitted curve may specifically be: invoking a preset simulation algorithm to perform a fitting calculation on the state data of the target state sequence to generate the fitted curve.
  • Step 105: Determine the field of view angle of the target scene according to the pre-generated frame delay time and the fitted curve.
  • Specifically, the VR system can generate the frame delay time from historical rendering data. For example, the time t0 at which rendering of an image frame starts and the time t1 at which the frame reaches the display can be recorded; the difference between t1 and t0 gives the frame's delay from the start of rendering to display, and this delay is taken as the frame delay time T.
  • Of course, to improve the accuracy of T, the delays of multiple image frames may be used to determine it: for example, the delays of 60 image frames can be collected, the average of those delays calculated, and the average taken as the frame delay time T.
  • Embodiments of the present invention place no limitation on how the frame delay time is generated. A sketch of the 60-frame averaging example follows.
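  • As an illustrative sketch only, the 60-frame averaging described above might be implemented as follows; the class name and the window size of 60 (taken from the example in the text) are illustrative choices:

```python
from collections import deque

class FrameDelayEstimator:
    """Rolling average of render-start-to-display delays."""

    def __init__(self, window=60):
        self.delays = deque(maxlen=window)

    def record_frame(self, t0, t1):
        # t0: render start time; t1: time the frame reached the display.
        self.delays.append(t1 - t0)

    def frame_delay_time(self):
        # T is the mean delay over the window (0.0 until data arrives).
        return sum(self.delays) / len(self.delays) if self.delays else 0.0
```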
  • When the image frames of a scene need to be rendered, that scene is taken as the target scene, and the rendering moment of the target scene is determined based on the pre-generated frame delay time T; for example, the sum of the current time t3 and the frame delay time T is taken as the rendering moment of the target scene.
  • Using the fitted curve, the target state data corresponding to the rendering moment of the target scene can be calculated.
  • Computing with the target state data yields the corresponding field of view angle, and the calculated angle is used as the field of view angle of the target scene; in effect, an estimated deviation is compensated at the very start of rendering the target scene's image frame, effectively reducing the field-of-view deviation between the start and end of rendering, so a better image display effect can be obtained.
  • Step 107: Render the target scene based on the field of view angle to generate a rendered image.
  • During image rendering, the mobile-based VR system renders the image frame of the target scene based on the calculated field of view angle to generate a rendered image.
  • Specifically, the phone-based VR system can apply rendering techniques such as Z-buffering, ray tracing, or radiosity to render the image frame for the calculated field of view angle and generate the rendered image of the target scene; this is equivalent to invoking a preset rendering algorithm to compute the target scene's data frame for that field of view angle, obtaining rendered image data, i.e., generating the rendered image.
  • In an embodiment of the present invention, the mobile-terminal-based VR system can detect the state of the target head to generate a target state sequence and, when the target head enters a moving state, generate a fitted curve by simulating the target state sequence; the field of view angle of the target scene is then determined from the frame delay time and the fitted curve. That is, the fitted curve predicts the movement of the target head and compensates for an estimated field-of-view deviation, effectively reducing the deviation of an image frame's field of view angle between the start and end of rendering and relieving the vertigo caused by fast head movement; a better image display effect is obtained and the user experience is improved.
  • Referring to FIG. 2, a flow chart of the steps of a preferred embodiment of a method of image rendering processing of the present invention is shown; the method may specifically include the following steps.
  • Step 201: Acquire the data collected by the sensors and generate state data corresponding to the target head.
  • In practice, VR devices such as data helmets, stereo glasses, and data gloves monitor the target head by collecting data through sensors.
  • Specifically, a gyroscope can detect the attitude of the phone (i.e., the screen direction), and an accelerometer can detect the magnitude and direction of the phone's acceleration; the screen direction corresponds to the head orientation.
  • For example, after the head orientation is determined, the phone-based VR system can calculate the view angles of the left and right eyes from parameters such as each eye's vertical and horizontal field of view, and then determine the angle of the target head relative to the display from those view angles; that is, state data is generated.
  • Step 203: Generate the target state sequence using the generated state data.
  • The VR system can store the generated state data in turn into the corresponding state sequence to generate the target state sequence; for example, the angles N1, N2, N3, ..., Nn of target head A relative to the phone's display at successive moments are stored in turn into the sequence LA, generating the target state sequence LA corresponding to head A.
  • To ensure rendering efficiency and the accuracy of the calculated field of view angle of the target scene, the target state sequence LA is preferably sized to hold 30 state data N; that is, the 30 most recently generated state data N are stored in LA.
  • Specifically, within one second the sensors can collect multiple samples, so the phone-based VR system can generate multiple state data; the state data generated within each one-second window are aggregated, the average of all state data in that window is computed, and the average is taken as the state datum of the target head for that second and stored into the target state sequence LA. A sketch of this aggregation follows.
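  • As an illustrative sketch only, the per-second aggregation might look as follows; bucketing timestamps with int() is an assumption, since the text only asks for one averaged state datum per one-second window:

```python
def aggregate_per_second(samples):
    """samples: iterable of (timestamp_seconds, angle_deg) pairs."""
    buckets = {}
    for t, angle in samples:
        # Samples falling in the same one-second window share a bucket.
        buckets.setdefault(int(t), []).append(angle)
    # One averaged state datum per second, in time order.
    return [(sec, sum(a) / len(a)) for sec, a in sorted(buckets.items())]
```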
  • A phone-based VR system can form the target state sequence LA from historically generated state data and generate the fitted curve corresponding to the target head.
  • When the newest state datum is generated, its deviation from the fitted curve can be determined by calculation: the fitted curve is evaluated at the generation time of the newest datum, yielding a virtual state datum for that time; the difference between the virtual state datum and the newest state datum is computed and taken as the deviation of the newest datum from the fitted curve, and the system judges whether this deviation is greater than a preset deviation threshold.
  • When the deviation is not greater than the threshold, the target state sequence LA is updated with the newest state datum; when it is greater, the newest state datum is judged abnormal and discarded. A sketch of this check follows.
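  • As an illustrative sketch only, this outlier check might look as follows, reusing the TargetStateSequence sketch above; `fitted` is assumed to be a callable S(t), and the 15-degree threshold is an invented placeholder (the text only requires a preset deviation threshold):

```python
def accept_new_datum(fitted, t_new, angle_new, sequence, threshold_deg=15.0):
    virtual = fitted(t_new)               # virtual state datum at t_new
    deviation = abs(angle_new - virtual)  # deviation from the fitted curve
    if deviation <= threshold_deg:
        sequence.add_state(angle_new)     # update LA with the new datum
        return True
    return False                          # abnormal datum: discard it
```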
  • Step 205: Determine, according to the state data, whether the target head has entered a moving state.
  • Specifically, based on all the state data saved in the target state sequence LA, the system can judge whether the state data corresponding to the target head have changed; when they have changed, the user can be determined to have entered a moving state.
  • In a preferred embodiment of the present invention, the above step 205 may include the following sub-steps (a minimal sketch follows the list):
  • Sub-step 2050: Perform statistics on the state data of the target state sequence to determine a state difference.
  • In practice, all the state data in the target state sequence LA can be compared to determine their minimum S and maximum B, and the mean M of all the state data in LA can be obtained by calculation.
  • The phone-based VR system may take the difference between the maximum B and the mean M as the state difference of the target head; or the difference between the minimum S and the mean M; or it may use the minimum S and the maximum B together. Embodiments of the present invention place no limitation on this; preferably, the difference between the minimum S and the mean M, or between the maximum B and the mean M, is taken as the state difference of the target head.
  • Sub-step 2052: Judge whether the state difference is greater than a preset movement threshold.
  • The phone-based VR system may preset a movement threshold for judging whether the target head has entered a moving state. Specifically, whether the target head has entered a moving state can be determined by judging whether its state difference is greater than the preset movement threshold. In the example above, where the state datum is the angle of the target head relative to the phone's display, the movement threshold may be preset to 10 degrees, and whether the state difference exceeds 10 degrees determines whether the target head has entered a fast-rotation state.
  • Sub-step 2054: When the state difference is greater than the movement threshold, judge that the target head has entered a moving state.
  • When the state difference corresponding to the target head is greater than the movement threshold, it may be determined that the target head has entered a fast-rotation state, i.e., a moving state. For example, when the difference between the minimum S and the mean M is greater than 10 degrees, the target head is judged to have entered a fast-rotation state, i.e., a moving state; or, when the difference between the maximum B and the mean M is greater than 10 degrees, the target head is judged to have entered a moving state.
  • Of course, when the state difference corresponding to the target head is not greater than the movement threshold, the target head may be judged not to have entered a moving state, which amounts to determining that it is stationary relative to the display.
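  • As an illustrative sketch only, sub-steps 2050-2054 might look as follows, using the 10-degree threshold from the example above:

```python
def has_entered_moving_state(angles, movement_threshold_deg=10.0):
    """angles: the state data (angles in degrees) stored in LA."""
    if not angles:
        return False
    mean_m = sum(angles) / len(angles)
    # State difference: the larger of (max B - mean M) and (mean M - min S).
    state_difference = max(max(angles) - mean_m, mean_m - min(angles))
    return state_difference > movement_threshold_deg
```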
  • Step 207: Invoke the preset simulation algorithm to perform a fitting calculation on the state data of the target state sequence and generate a fitted curve.
  • Specifically, the phone-based VR system can implement the simulation algorithm with the least-squares method: when the target head enters a moving state, a least-squares fitting calculation is performed on the state data of the target state sequence, generating the fitted curve N = S(t) for the target head. A sketch follows.
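  • As an illustrative sketch only, a least-squares fit in the spirit of step 207 might use numpy's polynomial fitting; the quadratic degree is an assumption, since the text specifies only a least-squares fit N = S(t):

```python
import numpy as np

def fit_state_curve(samples, degree=2):
    """samples: list of (t, angle) pairs from the target state sequence."""
    t = np.array([s[0] for s in samples], dtype=float)
    n = np.array([s[1] for s in samples], dtype=float)
    coeffs = np.polyfit(t, n, degree)  # least-squares polynomial fit
    return np.poly1d(coeffs)           # callable fitted curve S(t)

# Example: S = fit_state_curve(list(seq.samples)); S(50.0) then predicts
# the head angle at t = 50 s, as in the S(50 s) = 150 degrees example.
```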
  • Step 209: Determine the field of view angle of the target scene according to the pre-generated frame delay time and the fitted curve.
  • In a preferred embodiment of the present invention, the above step 209 may include the following sub-steps (a minimal sketch follows the list):
  • Sub-step 2090: Determine the rendering moment of the target scene based on the frame delay time.
  • When the target scene needs to be rendered, the phone-based VR system obtains the current time t3 and takes the sum of t3 and the frame delay time T as the rendering moment of the target scene.
  • Sub-step 2092: Calculate, using the fitted curve, the target state data corresponding to the rendering moment.
  • In an embodiment of the present invention, the phone-based VR system can calculate the target state data corresponding to the rendering moment of the target scene through the fitted curve. For example, substituting the rendering moment (t3 + T) for t in the fitted curve N = S(t) yields, by calculation, the state data N3 of the target head at time (t3 + T), where N3 = S(t3 + T); the state data N3 corresponding to the rendering moment (t3 + T) is taken as the target state data.
  • Sub-step 2094: Perform a calculation using the target state data to generate the field of view angle.
  • The phone-based VR system computes with the target state data N3 to obtain the field of view angle of the target scene.
  • When rendering the image frame of the target scene, rendering with the target state data N3 effectively reduces the field-of-view deviation of the frame between the start and end of rendering.
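  • As an illustrative sketch only, sub-steps 2090-2094 might be combined as below; mapping the predicted angle N3 to a camera yaw with a fixed per-eye field of view is an invented simplification, since the text leaves the angle-to-field-of-view calculation open:

```python
import time

def predicted_view(fitted, frame_delay_T, base_fov_deg=90.0):
    t3 = time.monotonic()               # current time t3
    render_moment = t3 + frame_delay_T  # sub-step 2090
    n3 = fitted(render_moment)          # sub-step 2092: N3 = S(t3 + T)
    # Sub-step 2094: the predicted angle orients the virtual camera, so
    # the frame is rendered for where the head will be, not where it was.
    return {"yaw_deg": float(n3), "fov_deg": base_fov_deg}
```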
  • Step 211: Render the target scene based on the field of view angle to generate a rendered image.
  • In an embodiment of the present invention, the mobile-terminal-based VR system predicts the movement of the target head through the fitted curve and compensates for an estimated field-of-view deviation, effectively reducing the field-of-view deviation of an image frame between the start and end of rendering; the user's eyes actually see a scene image that deviates only slightly from the current position, which effectively relieves vertigo during fast head movement, yields a better image display effect, and improves the user experience.
  • Referring to FIG. 3A, a structural block diagram of an embodiment of an apparatus for image rendering processing of the present invention is shown; it may specifically include the following modules:
  • the state sequence generating module 301, configured to perform state detection on the target head to generate a target state sequence;
  • the fitted curve generating module 303, configured to simulate the target state sequence to generate a fitted curve when it is determined that the target head has entered a moving state;
  • the field of view angle determining module 305, configured to determine the field of view angle of the target scene according to the pre-generated frame delay time and the fitted curve;
  • the rendered image generating module 307, configured to render the target scene based on the field of view angle to generate a rendered image.
  • Optionally, the apparatus for image rendering processing may further include a movement judging module 309; refer to FIG. 3B.
  • The movement judging module 309 is configured to judge, according to the state data, whether the target head has entered a moving state.
  • In a preferred embodiment of the present invention, the movement judging module 309 may include the following sub-modules:
  • the state difference determining sub-module 3090, configured to perform statistics on the state data of the target state sequence to determine a state difference;
  • the difference judging sub-module 3092, configured to judge whether the state difference is greater than a preset movement threshold;
  • the movement judging sub-module 3094, configured to judge that the target head has entered a moving state when the state difference is greater than the movement threshold.
  • Optionally, the state sequence generating module 301 may include a state data generating sub-module 3010 and a state sequence generating sub-module 3012.
  • The state data generating sub-module 3010 is configured to acquire the data collected by the sensor and generate the state data corresponding to the target head.
  • The state sequence generating sub-module 3012 is configured to generate the target state sequence using the generated state data.
  • The fitted curve generating module 303 may be specifically configured to invoke a preset simulation algorithm to perform a fitting calculation on the state data of the target state sequence to generate the fitted curve.
  • In a preferred embodiment of the present invention, the field of view angle determining module 305 may include the following sub-modules:
  • the rendering moment determining module 3050, configured to determine the rendering moment of the target scene based on the frame delay time;
  • the target state data determining sub-module 3052, configured to calculate, using the fitted curve, the target state data corresponding to the rendering moment;
  • the field of view angle generating sub-module 3054, configured to perform a calculation using the target state data to generate the field of view angle.
  • As the apparatus embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the corresponding parts of the method embodiment.
  • Those skilled in the art should understand that embodiments of the present invention may be provided as a method, an apparatus, or a computer program product.
  • Accordingly, embodiments of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
  • Moreover, embodiments of the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • For example, FIG. 4 shows a mobile terminal that can implement the method according to the present invention.
  • The mobile terminal conventionally includes a processor 410 and a computer program product or computer-readable medium in the form of a memory 420.
  • The memory 420 may be electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM.
  • The memory 420 has a storage space 430 for program code 431 for performing any of the method steps described above.
  • For example, the storage space 430 for program code may include individual pieces of program code 431 for implementing the various steps of the above methods.
  • The program code can be read from or written to one or more computer program products.
  • These computer program products include program code carriers such as hard disks, compact discs (CDs), memory cards, or floppy disks.
  • Such computer program products are typically portable or fixed storage units as described with reference to FIG. 5.
  • The storage unit may have storage segments, storage space, and the like arranged similarly to the memory 420 in the mobile terminal of FIG. 4.
  • The program code may, for example, be compressed in an appropriate form.
  • Typically, the storage unit includes computer-readable code 431', i.e., code readable by a processor such as 410, which, when run by the mobile terminal, causes the mobile terminal to perform each step of the methods described above.
  • Embodiments of the invention are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions.
  • These computer program instructions can be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing terminal device, so that a series of operational steps is performed on the computer or other programmable terminal device to produce computer-implemented processing; the instructions executed on the computer or other programmable terminal device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Abstract

A method and device for image rendering processing. The method includes: performing state detection on a target head to generate a target state sequence (101); when it is determined that the target head has entered a moving state, simulating the target state sequence to generate a fitted curve (103); determining a field of view angle of a target scene according to a pre-generated frame delay time and the fitted curve (105); and rendering the target scene based on the field of view angle to generate a rendered image (107). The above scheme predicts the movement of the target head through the fitted curve and compensates for an estimated field-of-view deviation, thereby effectively reducing the deviation of an image frame's field of view angle between the start and end of rendering, effectively relieving the user's vertigo during fast head movement, and obtaining a better image display effect.

Description

Method and device for image rendering processing
This application claims priority to Chinese patent application No. 201510889836.6, entitled "Method and device for image rendering processing" (一种图像渲染处理的方法及装置), filed with the Chinese Patent Office on December 4, 2015, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of virtual reality technology, and in particular to a method of image rendering processing and a device for image rendering processing.
Background
Virtual reality (VR), also known in Chinese as lingjing (灵境) technology, is a multi-dimensional sensory environment of vision, hearing, touch, and so on, generated in whole or in part by computer. Through auxiliary sensing devices such as head-mounted displays and data gloves, it provides a multi-dimensional human-machine interface for observing and interacting with the virtual environment, so that a person can enter the virtual environment, directly observe the internal changes of things and interact with them, and gain an "immersive" sense of reality.
With the rapid development of VR technology, VR theater systems based on mobile terminals have also developed quickly. In a mobile-terminal-based VR theater system, head tracking is used to change the viewing angle of the image, so that the user's visual system and motion-sensing system are linked and the experience feels more realistic. Specifically, before each image frame of a video is displayed on the screen, the system must collect the user's head state, calculate the field of view angle, render the scene and video according to that angle, and perform anti-distortion, anti-dispersion, and time-warp (TimeWarp) processing. However, the inventor found in the course of implementing the present invention that collecting the head state, calculating the field of view angle, and rendering the scene and video according to that angle take time; when the user's head rotates, there is therefore a deviation between the field of view angle at the start of rendering and the angle at the end of rendering, so the image actually displayed by the mobile terminal deviates from the image that should be displayed for the user's current position, the user's eyes actually see a scene image offset from the current position, and the user feels dizzy while watching. The longer an image frame's display is delayed and the faster the head rotates, the larger the field-of-view deviation between the start and end of rendering, the larger the offset of the scene image the user actually sees, and the stronger the vertigo when watching the video; that is, the image display effect is poor, which degrades video playback.
Clearly, in existing mobile-terminal-based VR theater systems, the field-of-view deviation of an image frame between the start and end of rendering causes a large deviation between the scene image actually displayed by the mobile terminal and the image that should be displayed for the user's current position.
Summary
The technical problem to be solved by embodiments of the present invention is to provide a method of image rendering processing that reduces the field-of-view deviation of an image frame between the start and end of rendering, thereby solving the poor image display caused by that deviation.
Correspondingly, embodiments of the present invention further provide a device for image rendering processing, to ensure the implementation and application of the above method.
To solve the above problem, an embodiment of the present invention discloses a method of image rendering processing, including:
performing state detection on a target head to generate a target state sequence;
when it is determined that the target head has entered a moving state, simulating the target state sequence to generate a fitted curve;
determining a field of view angle of a target scene according to a pre-generated frame delay time and the fitted curve; and
rendering the target scene based on the field of view angle to generate a rendered image.
Correspondingly, an embodiment of the present invention further discloses a device for image rendering processing, including:
a state sequence generating module, configured to perform state detection on a target head to generate a target state sequence;
a fitted curve generating module, configured to simulate the target state sequence to generate a fitted curve when it is determined that the target head has entered a moving state;
a field of view angle determining module, configured to determine a field of view angle of a target scene according to a pre-generated frame delay time and the fitted curve; and
a rendered image generating module, configured to render the target scene based on the field of view angle to generate a rendered image.
An embodiment of the present invention provides a computer program including computer-readable code which, when run on a mobile terminal, causes the mobile terminal to perform the above method of image rendering processing.
An embodiment of the present invention provides a computer-readable medium storing the above computer program.
Compared with the prior art, embodiments of the present invention include the following advantages:
An embodiment of the present invention detects the state of the target head to generate a target state sequence and, when the target head is judged to have entered a moving state, simulates the target state sequence to generate a fitted curve; the field of view angle of the target scene is determined from the frame delay time and the fitted curve. That is, the fitted curve predicts the movement of the target head and compensates for an estimated field-of-view deviation, which effectively reduces the deviation of an image frame's field of view angle between the start and end of rendering and relieves the user's vertigo during fast head movement; a better image display effect is obtained and the user experience is improved.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the steps of an embodiment of a method of image rendering processing of the present invention;
FIG. 2 is a flow chart of the steps of a preferred embodiment of a method of image rendering processing of the present invention;
FIG. 3A is a structural block diagram of an embodiment of a device for image rendering processing of the present invention;
FIG. 3B is a structural block diagram of a preferred embodiment of a device for image rendering processing of the present invention;
FIG. 4 schematically shows a block diagram of a mobile terminal for performing the method according to the present invention; and
FIG. 5 schematically shows a storage unit for holding or carrying program code implementing the method according to the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In view of the above problem, one of the core concepts of the embodiments of the present invention is to generate a fitted curve by detecting the state of the user's head and to determine the field of view angle of the target scene from the frame delay time and the fitted curve; that is, the fitted curve predicts the movement of the target head and compensates for an estimated field-of-view deviation, effectively reducing the deviation of an image frame's field of view angle between the start and end of rendering, relieving the user's vertigo during fast head movement, and obtaining a better image display effect.
Referring to FIG. 1, a flow chart of the steps of an embodiment of a method of image rendering processing of the present invention is shown; the method may specifically include the following steps.
Step 101: Perform state detection on a target head to generate a target state sequence.
In a mobile-terminal-based VR theater system, head tracking is used to change the viewing angle of the image, linking the user's visual and motion-sensing systems for a more realistic experience. Typically, the user's head can be tracked with a position tracker to determine its motion state. A position tracker is a device for spatial tracking and positioning, generally used together with other VR devices such as data helmets, stereo glasses, and data gloves, so that participants can move and rotate freely in space rather than being limited to a fixed position. A mobile-terminal-based VR system can detect the user's head state, determine the field of view angle of the image from that state, and render the image at the determined angle to obtain a better display. Note that a mobile terminal is a computing device usable while moving, such as a smartphone, notebook computer, or tablet; embodiments of the present invention place no limitation on this. The embodiments of the present invention are described in detail below taking a mobile phone as an example, but this should not be taken as limiting the embodiments of the present invention.
As a specific example of an embodiment of the present invention, a phone-based VR system can monitor the movement of the user's head through auxiliary sensing devices such as a data helmet, stereo glasses, and data gloves; the monitored head is taken as the target head and state detection is performed on it, so that the state information of the target head relative to the phone's display can be determined. Based on this state information, the state data corresponding to the user's current state can be obtained by calculation. For example, after the user puts on the data helmet, by monitoring the rotation of the user's head (i.e., the target head), the angle of the target head relative to the phone's display can be calculated; that is, state data is generated. Specifically, the calculation may use any one or more of the head orientation, moving direction, speed, and other data of the user's current state, producing the angle of the target head relative to the phone's display.
The VR system can store the generated state data into the corresponding state sequence to generate the target state sequence for that head; for example, the angles of target head A relative to the phone's display at different times are stored in turn into the corresponding state sequence, forming the target state sequence LA corresponding to head A. The target state sequence LA can store n state data, where n is a positive integer such as 10, 30, or 50; embodiments of the present invention place no limitation on this.
In a preferred embodiment of the present invention, the above step 101 may include the following sub-steps:
Sub-step 1010: Acquire the data collected by the sensor and generate state data corresponding to the target head.
Sub-step 1012: Generate the target state sequence using the generated state data.
Step 103: When it is determined that the target head has entered a moving state, simulate the target state sequence to generate a fitted curve.
In practice, whether the target head has entered a moving state can be judged by monitoring its rotation in real time, that is, by judging whether the target head has moved relative to the phone's display. Specifically, this may be judged from the head's state data; for example, by checking whether the angle of the target head relative to the display changes: when the angle changes, the target head is judged to have entered a moving state; when the angle is constant, it has not, i.e., the target head is stationary relative to the display.
When the target head enters a moving state, the phone-based VR system can invoke a preset simulation algorithm to simulate the target state sequence and generate the fitted curve N = S(t) for the target head, where N is state data and t is time. Through the fitted curve, the system can compute the head's state data N at each time t; that is, through the head's fitted curve, the state data at the next frame can be predicted by calculation. For example, to obtain the head state at 50 seconds, compute the value of S(t) at t = 50 seconds; if S(50 s) = 150 degrees, the angle of the target head relative to the display at 50 seconds is determined to be 150 degrees.
Optionally, simulating the target state sequence to generate the fitted curve may specifically be: invoking the preset simulation algorithm to perform a fitting calculation on the state data of the target state sequence to generate the fitted curve.
Step 105: Determine the field of view angle of the target scene according to the pre-generated frame delay time and the fitted curve.
Specifically, the VR system can generate the frame delay time from historical rendering data. For example, the time t0 at which rendering of an image frame starts and the time t1 at which it is displayed can be recorded; the difference between t0 and t1 gives the frame's delay from the start of rendering to display, and this delay is taken as the frame delay time T. Of course, to improve the accuracy of T, the delays of multiple image frames may be used to determine it: for example, the delays of 60 image frames can be collected, the average of those delays calculated, and the average taken as the frame delay time T. Embodiments of the present invention place no limitation on how the frame delay time is generated.
When the image frames of a scene need to be rendered, that scene is taken as the target scene and its rendering moment is determined based on the pre-generated frame delay time T, for example as the sum of the current time t3 and the frame delay time T. Through the fitted curve, the target state data corresponding to the rendering moment of the target scene can be calculated. Computing with the target state data yields the corresponding field of view angle, which is used as the field of view angle of the target scene; in effect, an estimated deviation is compensated at the very start of rendering the target scene's image frame, effectively reducing the field-of-view deviation between the start and end of rendering, so a better image display effect can be obtained.
Step 107: Render the target scene based on the field of view angle to generate a rendered image.
During image rendering, the phone-based VR system renders the image frame of the target scene based on the calculated field of view angle to generate a rendered image. Specifically, the phone-based VR system can apply rendering techniques such as Z-buffering, ray tracing, and radiosity to render the image frame for the calculated field of view angle and generate the rendered image of the target scene; this is equivalent to invoking a preset rendering algorithm to compute the target scene's data frame for that field of view angle, obtaining rendered image data, i.e., generating the rendered image.
In an embodiment of the present invention, the mobile-terminal-based VR system can detect the state of the target head to generate a target state sequence and, when the target head is judged to have entered a moving state, generate a fitted curve by simulating the target state sequence; the field of view angle of the target scene is determined from the frame delay time and the fitted curve. That is, the fitted curve predicts the movement of the target head and compensates for an estimated field-of-view deviation, effectively reducing the deviation of an image frame's field of view angle between the start and end of rendering and relieving the user's vertigo during fast head movement; a better image display effect is obtained and the user experience is improved.
Referring to FIG. 2, a flow chart of the steps of an embodiment of a method of image rendering processing of the present invention is shown; the method may specifically include the following steps.
Step 201: Acquire the data collected by the sensors and generate the state data corresponding to the target head.
In practice, VR devices such as data helmets, stereo glasses, and data gloves monitor the target head by collecting data through sensors. Specifically, a gyroscope can detect the attitude of the phone (i.e., the screen direction), and an accelerometer can detect the magnitude and direction of the phone's acceleration; the screen direction corresponds to the head orientation. For example, after the head orientation is determined, the phone-based VR system can calculate the view angles of the left and right eyes from parameters such as each eye's vertical and horizontal field of view, and then determine the angle of the target head relative to the display from those view angles; that is, state data is generated.
Step 203: Generate the target state sequence using the generated state data.
The VR system can store the generated state data in turn into the corresponding state sequence to generate the target state sequence; for example, the angles N1, N2, N3, ..., Nn of target head A relative to the phone's display at successive moments are stored in turn into the sequence LA, generating the target state sequence LA corresponding to head A. To ensure rendering efficiency and the accuracy of the calculated field of view angle of the target scene, the target state sequence LA is preferably sized to hold 30 state data N; that is, the 30 most recently generated state data N are stored in LA.
Specifically, within one second the sensors can collect multiple samples, so the phone-based VR system can generate multiple state data; the state data generated within each one-second window are aggregated, the average of all state data in that window is computed, and the average is taken as the state datum of the target head for that second and stored into the target state sequence LA.
The phone-based VR system can form the target state sequence LA from historically generated state data and generate the fitted curve corresponding to the target head. When the newest state datum is generated, its deviation from the fitted curve can be determined by calculation: the fitted curve is evaluated at the generation time of the newest datum, yielding a virtual state datum for that time; the difference between the virtual state datum and the newest state datum is computed and taken as the deviation of the newest datum from the fitted curve, and the system judges whether this deviation is greater than a preset deviation threshold. When the deviation is not greater than the preset threshold, the target state sequence LA is updated based on the newest state datum; when it is greater, the newest state datum is judged abnormal and discarded.
Step 205: Determine, according to the state data, whether the target head has entered a moving state.
Specifically, based on all the state data saved in the target state sequence LA, the system can judge whether the state data corresponding to the target head have changed; when they have changed, the user can be determined to have entered a moving state.
In a preferred embodiment of the present invention, the above step 205 may include the following sub-steps:
Sub-step 2050: Perform statistics on the state data of the target state sequence to determine a state difference.
In practice, all the state data in the target state sequence LA can be compared to determine their minimum S and maximum B, and the mean M of all the state data in LA can be obtained by calculation. The phone-based VR system may take the difference between the maximum B and the mean M as the state difference of the target head; or the difference between the minimum S and the mean M; or it may use the minimum S and the maximum B together. Embodiments of the present invention place no limitation on this; preferably, the difference between the minimum S and the mean M, or between the maximum B and the mean M, is taken as the state difference of the target head.
Sub-step 2052: Judge whether the state difference is greater than a preset movement threshold.
The phone-based VR system may preset a movement threshold for judging whether the target head has entered a moving state. Specifically, whether the target head has entered a moving state can be determined by judging whether its state difference is greater than the preset movement threshold. In the example above, where the state datum is the angle of the target head relative to the phone's display, the movement threshold may be preset to 10 degrees, and whether the state difference exceeds 10 degrees determines whether the target head has entered a fast-rotation state.
Sub-step 2054: When the state difference is greater than the movement threshold, judge that the target head has entered a moving state.
When the state difference corresponding to the target head is greater than the movement threshold, it may be determined that the target head has entered a fast-rotation state, i.e., a moving state. For example, when the difference between the minimum S and the mean M is greater than 10 degrees, the target head is judged to have entered a fast-rotation state, i.e., a moving state; or, when the difference between the maximum B and the mean M is greater than 10 degrees, the target head is judged to have entered a moving state.
Of course, when the state difference corresponding to the target head is not greater than the movement threshold, the target head may be judged not to have entered a moving state, which amounts to determining that it is stationary relative to the display.
Step 207: Invoke the preset simulation algorithm to perform a fitting calculation on the state data of the target state sequence and generate a fitted curve.
Specifically, the phone-based VR system can implement the simulation algorithm with the least-squares method. When the target head enters a moving state, a least-squares fitting calculation can be performed on the state data of the target state sequence by invoking the preset simulation algorithm, generating the fitted curve N = S(t) for the target head.
Step 209: Determine the field of view angle of the target scene according to the pre-generated frame delay time and the fitted curve.
In a preferred embodiment of the present invention, the above step 209 may include the following sub-steps:
Sub-step 2090: Determine the rendering moment of the target scene based on the frame delay time.
When the target scene needs to be rendered, the phone-based VR system obtains the current time t3 and takes the sum of t3 and the frame delay time T as the rendering moment of the target scene.
Sub-step 2092: Calculate, using the fitted curve, the target state data corresponding to the rendering moment.
In an embodiment of the present invention, the phone-based VR system can calculate the target state data corresponding to the rendering moment of the target scene through the fitted curve. For example, substituting the rendering moment (t3 + T) for t in the fitted curve N = S(t) yields, by calculation, the state data N3 of the target head at time (t3 + T), where N3 = S(t3 + T); the state data N3 corresponding to the rendering moment (t3 + T) is taken as the target state data.
Sub-step 2094: Perform a calculation using the target state data to generate the field of view angle.
The phone-based VR system computes with the target state data N3 to obtain the field of view angle of the target scene. When rendering the image frame of the target scene, rendering with the target state data N3 effectively reduces the field-of-view deviation of the frame between the start and end of rendering.
Step 211: Render the target scene based on the field of view angle to generate a rendered image.
In an embodiment of the present invention, the mobile-terminal-based VR system predicts the movement of the target head through the fitted curve and compensates for an estimated field-of-view deviation, effectively reducing the field-of-view deviation of an image frame between the start and end of rendering; the user's eyes actually see a scene image that deviates only slightly from the current position, which effectively relieves vertigo during fast head movement, yields a better image display effect, and improves the user experience.
It should be noted that, for simplicity of description, the method embodiments are expressed as a series of action combinations, but those skilled in the art should know that embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by embodiments of the present invention.
Referring to FIG. 3A, a structural block diagram of an embodiment of a device for image rendering processing of the present invention is shown; it may specifically include the following modules:
a state sequence generating module 301, configured to perform state detection on a target head to generate a target state sequence;
a fitted curve generating module 303, configured to simulate the target state sequence to generate a fitted curve when it is determined that the target head has entered a moving state;
a field of view angle determining module 305, configured to determine the field of view angle of a target scene according to the pre-generated frame delay time and the fitted curve; and
a rendered image generating module 307, configured to render the target scene based on the field of view angle to generate a rendered image.
On the basis of FIG. 3A, optionally, the device for image rendering processing may further include a movement judging module 309; refer to FIG. 3B.
The movement judging module 309 is configured to judge, according to the state data, whether the target head has entered a moving state.
In a preferred embodiment of the present invention, the movement judging module 309 may include the following sub-modules:
a state difference determining sub-module 3090, configured to perform statistics on the state data of the target state sequence to determine a state difference;
a difference judging sub-module 3092, configured to judge whether the state difference is greater than a preset movement threshold; and
a movement judging sub-module 3094, configured to judge that the target head has entered a moving state when the state difference is greater than the movement threshold.
Optionally, the state sequence generating module 301 may include a state data generating sub-module 3010 and a state sequence generating sub-module 3012. The state data generating sub-module 3010 is configured to acquire the data collected by the sensor and generate the state data corresponding to the target head; the state sequence generating sub-module 3012 is configured to generate the target state sequence using the generated state data.
The fitted curve generating module 303 may be specifically configured to invoke a preset simulation algorithm to perform a fitting calculation on the state data of the target state sequence to generate the fitted curve.
In a preferred embodiment of the present invention, the field of view angle determining module 305 may include the following sub-modules:
a rendering moment determining module 3050, configured to determine the rendering moment of the target scene based on the frame delay time;
a target state data determining sub-module 3052, configured to calculate, using the fitted curve, the target state data corresponding to the rendering moment; and
a field of view angle generating sub-module 3054, configured to perform a calculation using the target state data to generate the field of view angle.
As the device embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, refer to the corresponding parts of the method embodiment.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts among the embodiments may be referred to one another.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a device, or a computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
For example, FIG. 4 shows a mobile terminal that can implement the method according to the present invention. The mobile terminal conventionally includes a processor 410 and a computer program product or computer-readable medium in the form of a memory 420. The memory 420 may be electronic memory such as flash memory, EEPROM (electrically erasable programmable read-only memory), EPROM, a hard disk, or ROM. The memory 420 has a storage space 430 for program code 431 for performing any of the method steps described above. For example, the storage space 430 for program code may include individual pieces of program code 431 for implementing the various steps of the above methods. The program code can be read from or written to one or more computer program products; these computer program products include program code carriers such as hard disks, compact discs (CDs), memory cards, or floppy disks. Such computer program products are usually portable or fixed storage units as described with reference to FIG. 5. The storage unit may have storage segments, storage space, and the like arranged similarly to the memory 420 in the mobile terminal of FIG. 4. The program code may, for example, be compressed in an appropriate form. Typically, the storage unit includes computer-readable code 431', i.e., code readable by a processor such as 410, which, when run by the mobile terminal, causes the mobile terminal to perform each step of the methods described above.
Embodiments of the present invention are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing terminal device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing terminal device, so that a series of operational steps is performed on the computer or other programmable terminal device to produce computer-implemented processing; the instructions executed on the computer or other programmable terminal device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes the element.
The method of image rendering processing and the device for image rendering processing provided by the present invention have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, changes may be made to the specific implementations and the scope of application in accordance with the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (14)

  1. A method of image rendering processing, comprising:
    performing state detection on a target head to generate a target state sequence;
    when it is determined that the target head has entered a moving state, simulating the target state sequence to generate a fitted curve;
    determining a field of view angle of a target scene according to a pre-generated frame delay time and the fitted curve; and
    rendering the target scene based on the field of view angle to generate a rendered image.
  2. The method according to claim 1, wherein performing state detection on the target head to generate a target state sequence comprises:
    acquiring data collected by a sensor and generating state data corresponding to the target head; and
    generating the target state sequence using the generated state data.
  3. The method according to claim 2, further comprising, after the target state sequence is generated: judging, according to the state data, whether the target head has entered a moving state.
  4. The method according to claim 3, wherein judging, according to the state data, whether the target head has entered a moving state comprises:
    performing statistics on the state data of the target state sequence to determine a state difference;
    judging whether the state difference is greater than a preset movement threshold; and
    when the state difference is greater than the movement threshold, judging that the target head has entered a moving state.
  5. The method according to any one of claims 2 to 4, wherein simulating the target state sequence to generate a fitted curve comprises: invoking a preset simulation algorithm to perform a fitting calculation on the state data of the target state sequence to generate the fitted curve.
  6. The method according to any one of claims 1 to 4, wherein determining the field of view angle of the target scene according to the pre-generated frame delay time and the fitted curve comprises:
    determining a rendering moment of the target scene based on the frame delay time;
    calculating, using the fitted curve, target state data corresponding to the rendering moment; and
    performing a calculation using the target state data to generate the field of view angle.
  7. A device for image rendering processing, comprising:
    a state sequence generating module, configured to perform state detection on a target head to generate a target state sequence;
    a fitted curve generating module, configured to simulate the target state sequence to generate a fitted curve when it is determined that the target head has entered a moving state;
    a field of view angle determining module, configured to determine a field of view angle of a target scene according to a pre-generated frame delay time and the fitted curve; and
    a rendered image generating module, configured to render the target scene based on the field of view angle to generate a rendered image.
  8. The device according to claim 7, wherein the state sequence generating module comprises:
    a state data generating sub-module, configured to acquire data collected by a sensor and generate state data corresponding to the target head; and
    a state sequence generating sub-module, configured to generate the target state sequence using the generated state data.
  9. The device according to claim 8, further comprising: a movement judging module, configured to judge, according to the state data, whether the target head has entered a moving state.
  10. The device according to claim 9, wherein the movement judging module comprises:
    a state difference determining sub-module, configured to perform statistics on the state data of the target state sequence to determine a state difference;
    a difference judging sub-module, configured to judge whether the state difference is greater than a preset movement threshold; and
    a movement judging sub-module, configured to judge that the target head has entered a moving state when the state difference is greater than the movement threshold.
  11. The device according to any one of claims 8 to 10, wherein the fitted curve generating module is specifically configured to invoke a preset simulation algorithm to perform a fitting calculation on the state data of the target state sequence to generate the fitted curve.
  12. The device according to any one of claims 7 to 10, wherein the field of view angle determining module comprises:
    a rendering moment determining module, configured to determine a rendering moment of the target scene based on the frame delay time;
    a target state data determining sub-module, configured to calculate, using the fitted curve, target state data corresponding to the rendering moment; and
    a field of view angle generating sub-module, configured to perform a calculation using the target state data to generate the field of view angle.
  13. A computer program comprising computer-readable code which, when run on a mobile terminal, causes the mobile terminal to perform the method of image rendering processing according to any one of claims 1 to 6.
  14. A computer-readable medium storing the computer program according to claim 13.
PCT/CN2016/089271 2015-12-04 2016-07-07 Method and device for image rendering processing WO2017092334A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/246,396 US20170160795A1 (en) 2015-12-04 2016-08-24 Method and device for image rendering processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510889836.6 2015-12-04
CN201510889836.6A CN105976424A (zh) Method and device for image rendering processing

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/246,396 Continuation US20170160795A1 (en) 2015-12-04 2016-08-24 Method and device for image rendering processing

Publications (1)

Publication Number Publication Date
WO2017092334A1 (zh)

Family

ID=56988272

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/089271 WO2017092334A1 (zh) 2015-12-04 2016-07-07 Method and device for image rendering processing

Country Status (3)

Country Link
US (1) US20170160795A1 (zh)
CN (1) CN105976424A (zh)
WO (1) WO2017092334A1 (zh)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10962780B2 (en) * 2015-10-26 2021-03-30 Microsoft Technology Licensing, Llc Remote rendering for virtual images
US10067565B2 (en) * 2016-09-29 2018-09-04 Intel Corporation Methods and apparatus for identifying potentially seizure-inducing virtual reality content
CN107979763B (zh) * 2016-10-21 2021-07-06 阿里巴巴集团控股有限公司 一种虚拟现实设备生成视频、播放方法、装置及系统
CN108062156A (zh) * 2016-11-07 2018-05-22 上海乐相科技有限公司 一种降低虚拟现实设备功耗的方法及装置
WO2018086399A1 (zh) * 2016-11-14 2018-05-17 华为技术有限公司 一种图像渲染的方法、装置及vr设备
CN106598252A (zh) * 2016-12-23 2017-04-26 深圳超多维科技有限公司 一种图像显示调节方法、装置、存储介质及电子设备
WO2018122600A2 (en) * 2016-12-28 2018-07-05 Quan Xiao Apparatus and method of for natural, anti-motion-sickness interaction towards synchronized visual vestibular proprioception interaction including navigation (movement control) as well as target selection in immersive environments such as vr/ar/simulation/game, and modular multi-use sensing/processing system to satisfy different usage scenarios with different form of combination
US10268263B2 (en) * 2017-04-20 2019-04-23 Microsoft Technology Licensing, Llc Vestibular anchoring
CN107479692B (zh) * 2017-07-06 2020-08-28 北京小鸟看看科技有限公司 虚拟现实场景的控制方法、设备及虚拟现实设备
CN107507241B (zh) * 2017-08-16 2021-02-26 歌尔光学科技有限公司 虚拟场景中视角偏差矫正方法及装置
WO2019054611A1 (ko) 2017-09-14 2019-03-21 삼성전자 주식회사 전자 장치 및 그 동작방법
GB2566478B (en) * 2017-09-14 2019-10-30 Samsung Electronics Co Ltd Probability based 360 degree video stabilisation
CN108427199A (zh) * 2018-03-26 2018-08-21 京东方科技集团股份有限公司 一种增强现实设备、系统及方法
CN109194951B (zh) * 2018-11-12 2021-01-26 京东方科技集团股份有限公司 头戴显示设备的显示方法和头戴显示设备
CN109741463B (zh) * 2019-01-02 2022-07-19 京东方科技集团股份有限公司 虚拟现实场景的渲染方法、装置及设备
CN109756728B (zh) * 2019-01-02 2021-12-07 京东方科技集团股份有限公司 图像显示方法及装置,电子设备,计算机可读存储介质
CN110519247B (zh) * 2019-08-16 2022-01-21 上海乐相科技有限公司 一种一对多虚拟现实展示方法及装置
CN110728749B (zh) * 2019-10-10 2023-11-07 青岛大学附属医院 虚拟三维图像显示系统及方法
CN110969706B (zh) * 2019-12-02 2023-10-10 Oppo广东移动通信有限公司 增强现实设备及其图像处理方法、系统以及存储介质
CN113015000A (zh) * 2019-12-19 2021-06-22 中兴通讯股份有限公司 渲染和显示的方法、服务器、终端、计算机可读介质
CN111698425B (zh) * 2020-06-22 2021-11-23 四川可易世界科技有限公司 一种实现实景漫游技术连贯性的方法
CN112380989B (zh) * 2020-11-13 2023-01-24 歌尔科技有限公司 一种头戴显示设备及其数据获取方法、装置和主机
CN115167688B (zh) * 2022-09-07 2022-12-16 唯羲科技有限公司 一种基于ar眼镜的会议模拟系统及方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763636A (zh) * 2009-09-23 2010-06-30 中国科学院自动化研究所 视频序列中的三维人脸位置和姿态跟踪的方法
CN103077497A (zh) * 2011-10-26 2013-05-01 中国移动通信集团公司 对层次细节模型中的图像进行缩放的方法和装置
CN104715468A (zh) * 2015-03-31 2015-06-17 王子强 一种基于Unity3D的裸眼3D内容制作的改进方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080030429A1 (en) * 2006-08-07 2008-02-07 International Business Machines Corporation System and method of enhanced virtual reality
KR20140066258A * 2011-09-26 2014-05-30 마이크로소프트 코포레이션 Video display modification based on sensor input for a see-through near-eye display
CN104714048B * 2015-03-30 2017-11-21 上海斐讯数据通信技术有限公司 Detection method for the moving speed of a moving object, and mobile terminal

Also Published As

Publication number Publication date
CN105976424A (zh) 2016-09-28
US20170160795A1 (en) 2017-06-08

Similar Documents

Publication Publication Date Title
WO2017092334A1 (zh) Method and device for image rendering processing
WO2017092332A1 (zh) Processing method and device for rendered images
WO2017092339A1 (zh) Processing method and device for collecting sensor data
JP6258953B2 (ja) Fast initialization for monocular visual SLAM
US9696859B1 (en) Detecting tap-based user input on a mobile device based on motion sensor data
JP6456347B2 (ja) In-situ creation of planar natural feature targets
TWI544447B (zh) Method and system for augmented reality
JP7008730B2 (ja) Shadow generation for image content inserted into an image
CN109741463B (zh) Rendering method, apparatus and device for a virtual reality scene
TWI543019B (zh) Orientation detection technique for an automated device display
CN104021590A (zh) Virtual try-on system and virtual try-on method
KR20160111008A (ko) Sensor-based camera motion detection for unconstrained SLAM
JP2016526313A (ja) Monocular visual SLAM using global camera movement and panoramic camera movement
US20160227868A1 (en) Removable face shield for augmented reality device
US11244145B2 (en) Information processing apparatus, information processing method, and recording medium
US20170154467A1 (en) Processing method and device for playing video
US20170185147A1 (en) A method and apparatus for displaying a virtual object in three-dimensional (3d) space
US10984571B2 (en) Preventing transition shocks during transitions between realities
EP3506082A1 (en) Audio rendering for augmented reality
KR20180013892A (ko) Responsive animation for virtual reality
US9161012B2 (en) Video compression using virtual skeleton
JP7103354B2 (ja) Information processing apparatus, information processing method, and program
JP2021503665A (ja) Method and apparatus for generating an environment model, and storage medium
US10582190B2 (en) Virtual training system
CN108804161B (zh) Application initialization method, apparatus, terminal and storage medium

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 16869647

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 16869647

Country of ref document: EP

Kind code of ref document: A1