WO2017219652A1 - Head-mounted display, video output device, and video processing method and system - Google Patents

Head-mounted display, video output device, and video processing method and system

Info

Publication number
WO2017219652A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
mounted display
frame image
output device
data
Prior art date
Application number
PCT/CN2016/114042
Other languages
English (en)
French (fr)
Inventor
赵文慧
徐建军
张超
Original Assignee
青岛歌尔声学科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 青岛歌尔声学科技有限公司 filed Critical 青岛歌尔声学科技有限公司
Publication of WO2017219652A1 publication Critical patent/WO2017219652A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167Position within a video image, e.g. region of interest [ROI]

Definitions

  • the present invention relates to the field of head-mounted display technologies, and in particular, to a head-mounted display, a video output device, and a video processing method and system.
  • At present, head-mounted displays are developing rapidly. A head-mounted display (Head Mount Display, HMD) can display video obtained frame by frame from a video output device, such as a smartphone or a computer, for the wearer to watch.
  • FIG. 1 is a schematic diagram of a video transmission scenario of a prior-art head-mounted display. Referring to FIG. 1, because the amount of video data to be transmitted is large, the video data must be compressed. Taking a smartphone as an example, the video transmission process is as follows: the phone compresses and encodes each frame of the video with a compression coding algorithm, then transmits the encoded video frames to the head-mounted display over an HDMI cable, and the head-mounted display decodes and displays them.
  • There are many video compression coding methods; broadly, they can be divided into lossless compression coding and lossy compression coding.
  • Common lossless compression coding algorithms include Huffman coding, run-length coding, and arithmetic coding. Their advantage is that losslessly compressed video can be fully recovered during decoding; their disadvantage is that the amount of data to transmit remains large, occupying more transmission resources and taking longer to transmit.
  • Common lossy compression coding algorithms include vector quantization coding, predictive coding, and model-based image coding. This type of compression is irreversible: the decoded data differs somewhat from the original data, but a much higher compression ratio can be achieved.
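  • As an illustration of the lossless case, the sketch below implements the run-length coding mentioned above and shows that decoding recovers the input exactly. It is a minimal example over a single row of pixel values, not the coding actually used by the embodiments.

```python
def rle_encode(row):
    """Run-length encode a sequence of pixel values into (value, count) pairs."""
    encoded = []
    for value in row:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1
        else:
            encoded.append([value, 1])
    return encoded

def rle_decode(encoded):
    """Expand (value, count) pairs back into the original sequence."""
    row = []
    for value, count in encoded:
        row.extend([value] * count)
    return row

row = [0, 0, 0, 255, 255, 17, 17, 17, 17]
assert rle_decode(rle_encode(row)) == row  # lossless: the input is fully recovered
```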
  • As can be seen, video played in a prior-art head-mounted display is transmitted only after lossy or lossless compression of one whole frame after another, so video transmission efficiency and data integrity cannot both be satisfied, and the user experience is poor.
  • The present invention provides a head-mounted display, a video output device, and a video processing method and system, to solve the prior-art problem that video played in a head-mounted display is transmitted after whole-frame lossy or lossless compression, so that video transmission efficiency and video integrity cannot both be satisfied and the user experience is poor.
  • According to one aspect of the present invention, a head-mounted display is provided, comprising an image acquisition unit, an image processing unit, a data transmission unit, a video decoding unit, and a display unit:
  • An image acquisition unit configured to capture a user's head and eye images in real time and send the user's head and eye images to the image processing unit;
  • an image processing unit configured to calculate the motion trajectory of the user's eyeballs from the received head and eye images, obtain information on the key area of the display screen at which the user's eyeballs are currently gazing, and send the key area information to the data transmission unit;
  • a data transmission unit configured to send the key area information to the video output device connected to the head-mounted display, so that the video output device performs sub-region compression coding on the current frame image of the video to be output according to the key area information, and to receive the sub-region compression-coded frame image sent by the video output device and forward it to the video decoding unit;
  • a video decoding unit configured to perform sub-region decoding on the received sub-region compression-coded frame image using the inverse algorithm of the compression coding, and to send the decoded frame image to the display unit;
  • the display unit is configured to display and output the decoded frame image, so that the data corresponding to the key area in the frame image is displayed in high definition, and the data corresponding to the remaining areas is weakly displayed.
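  • A minimal sketch of how these five units could be wired together on the head-mounted-display side. The class, field, and method names are illustrative assumptions for this example, not interfaces defined by the invention.

```python
from dataclasses import dataclass

@dataclass
class KeyArea:
    cx: int      # x coordinate of the gaze point on the screen (pixels)
    cy: int      # y coordinate of the gaze point on the screen (pixels)
    radius: int  # radius of the key (high-definition) area in pixels

class HeadMountedDisplay:
    def __init__(self, camera, eye_tracker, link, decoder, screen):
        self.camera = camera            # image acquisition unit
        self.eye_tracker = eye_tracker  # image processing unit
        self.link = link                # data transmission unit
        self.decoder = decoder          # video decoding unit
        self.screen = screen            # display unit

    def step(self):
        # capture head/eye image, estimate the key area, send it to the
        # video output device, receive a region-coded frame, decode, display
        head_eye_image = self.camera.capture()
        key_area: KeyArea = self.eye_tracker.estimate(head_eye_image)
        self.link.send_key_area(key_area)
        encoded_frame = self.link.receive_frame()
        frame = self.decoder.decode(encoded_frame, key_area)
        self.screen.show(frame)
```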
  • According to another aspect of the present invention, a video output device is provided, comprising:
  • a communication unit configured to receive, from the head-mounted display, key area information of the display screen at which the user's eyeballs are currently gazing, and to send the key area information to the video sub-area processing unit;
  • a video sub-area processing unit configured to divide the current frame image of the video to be output into a high-definition area and a weakened area according to the key area information, to leave the data in the high-definition area uncompressed or apply lossless compression coding to it, and to apply lossy compression coding to the data in the weakened area, wherein the high-definition area is the area corresponding to the key area;
  • the communication unit is further configured to send the frame image after the sub-region compression coding process to the head mounted display.
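  • A sketch of the region-splitting step on the video-output-device side, assuming the key area arrives as a circle (center coordinates and radius). PNG stands in for a lossless codec and JPEG for a lossy one purely for illustration; the description itself names Huffman coding, vector quantization, and H.264, so these codec choices are assumptions.

```python
import io
import numpy as np
from PIL import Image

def encode_frame_by_region(frame, cx, cy, radius):
    """Split a frame (H x W x 3 uint8 array) into a losslessly coded
    high-definition circle and a lossily coded weakened remainder."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    hd_mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2

    hd_region = frame.copy()
    hd_region[~hd_mask] = 0           # keep only the key area
    weak_region = frame.copy()
    weak_region[hd_mask] = 0          # keep only the remainder

    hd_buf, weak_buf = io.BytesIO(), io.BytesIO()
    Image.fromarray(hd_region).save(hd_buf, format="PNG")                    # lossless stand-in
    Image.fromarray(weak_region).save(weak_buf, format="JPEG", quality=40)   # lossy stand-in
    return {"key_area": (cx, cy, radius),
            "hd": hd_buf.getvalue(),
            "weak": weak_buf.getvalue()}
```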
  • According to yet another aspect of the present invention, a video processing method is provided, comprising: capturing head and eye images of a head-mounted-display user in real time; calculating the motion trajectory of the user's eyeballs from the head and eye images to obtain key area information of the display screen at which the user's eyeballs are currently gazing; sending the key area information to a video output device connected to the head-mounted display, so that the video output device performs sub-region compression coding on the current frame image of the video to be output according to the key area information, and receiving the sub-region compression-coded frame image sent by the video output device; performing sub-region decoding on the received frame image using the inverse algorithm of the compression coding; and
  • displaying and outputting the decoded frame image, so that the data corresponding to the key area in the frame image is displayed in high definition and the data corresponding to the remaining areas is displayed in weakened form.
  • A video processing system is also provided, comprising a head-mounted display according to one aspect of the present invention and a video output device according to another aspect of the present invention, wherein a wired or wireless data connection is established between the head-mounted display and the video output device.
  • The beneficial effects of the present invention are as follows. In the technical solution of the embodiments, the user's head and eye images are collected in real time, the key area of the display screen at which the user's eyeballs are currently gazing is calculated from the collected images, and the key area information is then sent to the video output device connected to the head-mounted display, so that the video output device can perform sub-region compression coding on the current frame image of the video to be output according to that information.
  • The video output device then sends the region-processed video frame to the head-mounted display, which performs sub-region decoding and displays the output. By losslessly compressing, or not compressing, the video in the key area the user is watching, high-definition display and lossless data are guaranteed; by lossily compressing the data in the remaining areas, the amount of data is reduced so that the video can be transmitted to the head-mounted display wirelessly, data transmission efficiency is improved, transmission time is saved, and the user experience is optimized.
  • FIG. 1 is a schematic diagram of video transmission of a head mounted display and a video output device in the prior art
  • FIG. 2 is a block diagram showing the structure of a head mounted display according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a frame image sub-area according to an embodiment of the present invention.
  • FIG. 4 is a block diagram showing the structure of a video output device according to an embodiment of the present invention.
  • FIG. 5 is a structural block diagram of a video processing system according to an embodiment of the present invention.
  • FIG. 6 is a schematic flowchart diagram of a video processing method according to an embodiment of the present invention.
  • The design concept of the present invention addresses the prior-art problem that video played in a head-mounted display is transmitted only after whole-frame compression, one frame at a time, which results in a poor user experience.
  • The embodiments of the present invention propose calculating the key area the wearer is watching from the motion trajectory of the wearer's eyeballs and sending the key area information to the video output device, so that the video output device processes each video frame by region according to the dynamically acquired key area information before transmitting it to the head-mounted display. The data in the weakened area can thus be lossily compressed, so that the video can be transmitted wirelessly to the head-mounted display, while the video in the high-definition area is left uncompressed or losslessly compressed, which reduces the amount of data while preserving the integrity of the video data in the key area.
  • Referring to FIG. 2, the head-mounted display 100 includes an image acquisition unit 101, an image processing unit 102, a data transmission unit 103, a video decoding unit 104, and a display unit 105:
  • the image acquisition unit 101 is configured to capture the user's head and eye images in real time and send the user's head and eye images to the image processing unit 102;
  • the image processing unit 102 is configured to calculate the motion trajectory of the user's eyeballs from the received head and eye images, obtain the key area information of the display screen at which the user's eyeballs are currently gazing, and send the key area information to the data transmission unit 103;
  • the data transmission unit 103 is configured to send the key area information to the video output device connected to the head-mounted display, so that the video output device performs sub-region compression coding on the current frame image of the video to be output according to the key area information, and to receive the sub-region compression-coded frame image sent by the video output device and send it to the video decoding unit 104;
  • the video decoding unit 104 is configured to perform sub-region decoding on the received sub-region compression-coded frame image using the inverse algorithm of the compression coding, and to send the decoded frame image to the display unit 105;
  • the display unit 105 is configured to display and output the decoded frame image, so that the data in the area corresponding to the focus area in the frame image is displayed in high definition, and the data in the remaining area is weakly displayed.
  • With the head-mounted display shown in FIG. 2, the wearer's head and eye images are collected in real time, the key area the wearer is currently gazing at is calculated from the collected images, and the key area information is then sent to the video output device, so that the video output device performs sub-region compression coding on the frame image to be transmitted according to the key area information and sends it to the head-mounted display, which decodes and displays it.
  • Because each frame image can be dynamically compression-coded by region, compared with the whole-frame processing of the prior art this both guarantees high-definition display and lossless data in the key area the user is watching and improves data transmission efficiency, saving transmission time.
  • FIG. 3 is a schematic diagram of a frame image sub-area according to an embodiment of the present invention.
  • In this embodiment, with reference to the frame-image region structure shown in FIG. 3, the decoding and display performed by the head-mounted display are mainly described; for other details, refer to the other embodiments of the present invention.
  • an image of the user's head and eyes is first acquired by an image acquisition unit provided in the head mounted display.
  • The image acquisition unit here may be a camera, which may be installed in the space between the display screen of the head-mounted display and the user's eyes; the camera captures the user's head and eyeballs in real time to obtain real-time images.
  • It should be noted that, in eye-movement tracking, eyeball movement takes three main forms: fixation, saccade, and smooth pursuit.
  • According to findings in the prior art, the angular range of eye movement is about 18 degrees, and beyond about 12 degrees head movement is needed to complete the gaze shift; therefore both the head and the eyeballs must be taken into account when tracking the eyes.
  • For this reason, in this embodiment the user's head and eye images are collected with a camera: the camera dynamically captures the original images of what the human eye is viewing and sends them to the image processing unit.
  • The image processing unit processes the original images with a human-eye tracking algorithm, calculates the motion trajectory of the user's eyeballs, determines the key area the user's eyeballs are viewing, and then transmits the key area information (such as the coordinates of the circle center and the radius) to the video output device, so that the video output device processes the video by region during video compression coding.
  • The basic techniques used by the image processing unit for gaze tracking can be divided into hardware-based and software-based approaches.
  • The hardware-based approach mainly requires the user to wear a special helmet or use a head-fixing bracket to obtain the eyeball trajectory, but this is inconvenient and the error is large.
  • The software-based eye-tracking approach first uses a camera to acquire images of the human eyes and face, processes the images, calculates the eyeball trajectory, and transmits the data to the video processing device to control the processing and display of the video.
  • In this embodiment the software approach is adopted: the image processing unit uses an image processing chip and tracks eye movement according to image processing algorithms and techniques.
  • The key area here is the area on the display screen whose center is the coordinate at which the user's eyeballs are gazing and whose radius is a predetermined length.
  • In the head-mounted display, when the user watches the display screen the eyeballs correspond to a position coordinate on the screen; a radius is then determined from an empirical value of the range the eyeballs can observe, which defines the key viewing area.
  • The position coordinate is the coordinate of a pixel on the display.
  • For example, if the resolution of the head-mounted display's screen is 1680 × 1050, that is, 1680 pixels in the horizontal direction and 1050 pixels in the vertical direction, the gaze position coordinate may be (800, 500), that is, the position determined by the 800th pixel in the horizontal direction and the 500th pixel in the vertical direction; once the position coordinate corresponding to the eyeballs has been calculated, combining it with the length of the radius gives the key area.
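  • A small worked example of this mapping, using the 1680 × 1050 screen and the (800, 500) gaze coordinate from the text; the 240-pixel radius is an assumed value for illustration.

```python
def in_key_area(x, y, cx, cy, radius):
    """Return True if pixel (x, y) lies inside the circular key area."""
    return (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2

WIDTH, HEIGHT = 1680, 1050       # display resolution from the example
cx, cy, radius = 800, 500, 240   # gaze point from the example; radius assumed

print(in_key_area(810, 520, cx, cy, radius))   # True  -> high-definition area
print(in_key_area(100, 100, cx, cy, radius))   # False -> weakened area
```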
  • the illustrated key area is a circular area defined by a broken line.
  • the key area is a high-definition area when the frame image is displayed, and the area except the high-definition area in the frame image is a weakened area.
  • Because the resolution of the display screen matches the size of each frame of the incoming video, after the head-mounted display calculates the key area at which the user's eyeballs are currently gazing, it sends the key area information to the video output device, so that the video output device can divide each frame image into the corresponding high-definition area, treat the area outside the high-definition area as the weakened area, and perform dynamic region-based processing.
  • In this embodiment, the data transmission unit includes an area-information sending subunit and a video-data receiving subunit. The area-information sending subunit is configured to send the key area information to the video output device connected to the head-mounted display; the video-data receiving subunit is configured to receive the sub-region compression-coded frame image sent by the video output device. The area-information sending subunit and the video-data receiving subunit are wireless transmission modules or wired transmission modules.
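  • A sketch of what the area-information sending subunit might transmit when the wireless option is chosen. The JSON payload layout, port number, and use of UDP are assumptions made for illustration; the description only specifies that the module may be a wireless transmission module, for example WiFi.

```python
import json
import socket

def send_key_area(cx, cy, radius, device_ip, port=9000):
    """Send the key-area information (circle center and radius, in pixels)
    to the video output device as a small JSON datagram."""
    payload = json.dumps({"cx": cx, "cy": cy, "radius": radius}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (device_ip, port))

# example: report the key area computed for the current head/eye image
send_key_area(800, 500, 240, device_ip="192.168.1.20")  # illustrative address
```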
  • It should be noted that, in the prior art, video frames are basically processed as whole frames, which involves a relatively large amount of data, so existing mainstream head-mounted displays all transmit video over a wire: as shown in FIG. 1, the video output device and the head-mounted display are connected by a High Definition Multimedia Interface (HDMI) cable, and the video data in the video output device is compressed frame by frame and passed to the head-mounted display over the HDMI cable.
  • This transmission method requires carrying an HDMI cable. Moreover, if whole-frame lossy compression is used to improve transmission efficiency, the integrity and smoothness of the video suffer, while using lossless compression to preserve video data integrity sacrifices transmission efficiency and transmission time.
  • In view of this, in this embodiment the area-information sending subunit and the video-data receiving subunit are preferably both wireless transmission modules, for example WiFi wireless transmission modules.
  • The data cable can then be omitted: the key area information the user is gazing at is calculated and sent to the video output device so that sub-region compression coding is performed, which keeps the video in the key area lossless while the data of the remaining areas is lossily compressed.
  • On one hand, because the data of the remaining areas of a frame is lossily compressed, the amount of data is greatly reduced, which makes wireless video transmission possible.
  • On the other hand, because the losslessness of the key-area video is guaranteed, the key-area video data can be restored without loss in the subsequent decoding process, without affecting the integrity of the video.
  • To facilitate decoding by the video decoding unit, in this embodiment the image processing unit is further configured to send the key area information to the video decoding unit, and the video decoding unit is specifically configured to perform sub-region decoding on the received sub-region compression-coded frame image using the inverse algorithm of the compression coding together with the key area information acquired from the image processing unit.
  • Specifically, when the video output device performs compression coding of a frame image, it performs dynamic sub-region compression coding according to the key area information sent by the head-mounted display and sends the coded frame image to the head-mounted display; the data transmission unit of the head-mounted display receives the coded frame image and sends it to the video decoding unit.
  • When decoding, the video decoding unit uses the key area information dynamically acquired from the image processing unit together with the coding information carried by the coded data. The coding information carried by the coded data may include the high-definition area information, the data in the high-definition area and its coding algorithm (for example, Huffman coding), and the weakened area information, the data in the weakened area and its coding algorithm (for example, vector quantization).
  • Because the video decoding unit acquires the key area information from the image processing unit, and the high-definition area information of the coded data is consistent with that key area information, the video decoding unit knows that the data in the high-definition area is the data corresponding to the key area the user's eyeballs are currently gazing at; it can then decode it with the inverse algorithm of the key-area coding algorithm to obtain the video data corresponding to the key area.
  • Likewise, from the acquired key area information the video decoding unit also knows the weakened area of the coded frame image and decodes the data corresponding to the weakened area with the inverse algorithm of the weakened-area coding algorithm; the decoded frame image is then sent to the display unit for display and output, so that the data in the area corresponding to the key area is displayed in high definition and the data in the remaining areas is displayed in weakened form.
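  • A sketch of how the decoding side could use this per-region coding information. The packet layout mirrors the encoder sketch given earlier (PNG and JPEG as stand-ins for the lossless and lossy codecs), so it is an illustrative assumption rather than the actual bitstream format of the embodiments.

```python
import io
import numpy as np
from PIL import Image

def decode_frame_by_region(packet):
    """Rebuild a frame from a region-coded packet produced by
    encode_frame_by_region(): lossless key-area data plus lossy remainder."""
    cx, cy, radius = packet["key_area"]
    hd = np.array(Image.open(io.BytesIO(packet["hd"])))      # lossless region
    weak = np.array(Image.open(io.BytesIO(packet["weak"])))  # lossy region

    h, w = hd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    hd_mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2

    frame = weak.copy()
    frame[hd_mask] = hd[hd_mask]  # paste the high-definition circle over the weakened background
    return frame
```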
  • FIG. 4 is a structural block diagram of a video output device according to an embodiment of the present invention.
  • the structure and working process of the video output device are mainly described.
  • the video output device 400 includes:
  • the communication unit 401 is configured to receive, from the head-mounted display, the key area information of the display screen at which the user's eyeballs are currently gazing, and to send the key area information to the video sub-area processing unit 402;
  • the video sub-area processing unit 402 is configured to divide the current frame image of the video to be output into a high-definition area and a weakened area according to the key area information, to leave the data in the high-definition area uncompressed or apply lossless compression coding to it, and to apply lossy compression coding to the data in the weakened area, wherein the high-definition area is the area corresponding to the key area;
  • the communication unit 401 is further configured to send the frame image after the sub-region compression coding process to the head mounted display.
  • the video output device 400 is a smart phone or a computer, and the smart phone or the computer stores video data.
  • In this embodiment the communication unit 401 includes an area-information receiving subunit and a video-data sending subunit. The area-information receiving subunit is configured to receive the key area information, sent by the head-mounted display, of the display screen at which the user's eyeballs are currently gazing; the video-data sending subunit is configured to send the sub-region compression-coded frame image to the head-mounted display. The area-information receiving subunit and the video-data sending subunit are wireless communication modules or wired communication modules.
  • the area information receiving subunit is connected to the area information transmitting subunit in the data transmission unit of the head mounted display to realize transmission of the key area information.
  • the video data transmitting subunit is connected to the video data receiving subunit in the data transmission unit of the head mounted display to realize transmission of video data.
  • In addition, the transmission modes of the area-information receiving subunit and the area-information sending subunit should match, for example both using wireless transmission, and likewise the transmission modes of the video-data sending subunit and the video-data receiving subunit should be consistent.
  • In this embodiment, the video sub-area processing unit 402 leaves the data in the high-definition area uncompressed or applies lossless compression coding to it, and the coding standard used when lossily compressing the data in the weakened area is H.264.
  • The advantages of H.264 video coding are a low bit rate, high image quality, strong error resilience, and good network suitability. Its greatest advantage is its high data compression ratio: for the same image quality, the compression ratio of H.264 is more than twice that of MPEG-2 and 1.5 to 2 times that of MPEG-4. H.264 also delivers high-quality, smooth images at this high compression ratio; consequently, video data compressed with the H.264 standard needs less bandwidth during network transmission and is more economical.
  • As can be seen from the structure and operation of the video output device shown in FIG. 4, after receiving the key area information of the region the user is viewing, which the head-mounted display calculates in real time through eye tracking and sends over, the video output device 400 of this embodiment dynamically partitions and compresses the frame image of the video to be output. It divides the frame image into a high-definition area and a weakened area according to the key area information (the high-definition area corresponds to the key area, and the weakened area is the part of the frame image outside the high-definition area); it then applies lossless compression coding to the data of the high-definition area, or does not compress it at all, to guarantee that the video data is lossless, and applies lossy compression to the data of the weakened area, thereby reducing the amount of data to transmit, improving transmission efficiency, and saving transmission time.
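  • A rough back-of-the-envelope calculation of the data reduction, using the 1680 × 1050 resolution from the earlier example and the 240-pixel key-area radius assumed there; the size factor for the lossily coded weakened area is also an assumption, since the achievable ratio depends on the codec and the content.

```python
import math

width, height = 1680, 1050
radius = 240                       # assumed key-area radius in pixels
total_px = width * height
hd_px = math.pi * radius ** 2      # pixels kept lossless (or uncompressed)

hd_fraction = hd_px / total_px
weak_factor = 0.1                  # assume lossy coding keeps ~10% of the raw size

relative_size = hd_fraction + (1 - hd_fraction) * weak_factor
print(f"key area covers {hd_fraction:.1%} of the frame")         # ~10.3%
print(f"per-frame data is roughly {relative_size:.1%} of raw")   # ~19%
```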
  • the video output device 400 reduces the amount of data by using H.264 video coding with a high data compression ratio, saving data transmission bandwidth.
  • the communication unit 401 can implement wireless transmission of video by using a wireless communication method, omitting data lines such as HDMI, thereby greatly improving the user experience.
  • Referring to FIG. 5, the video processing system 500 includes a head-mounted display 100 and a video output device 400, and a wired or wireless data connection is established between the head-mounted display 100 and the video output device 400.
  • the video processing system will be described below by taking a wireless data connection between the head mounted display 100 and the video output device 400 as an example.
  • In this embodiment, the video output device 400 may be a device such as a mobile phone or a computer, and its role is to provide video output;
  • the image capturing unit in the head mounted display 100 captures the wearer's head and eye photos for subsequent eye tracking processing; the image capturing unit herein may be a camera;
  • the image processing unit in the head mounted display 100 processes the image collected by the image acquisition unit to calculate a focus area of the wearer's eyeball gaze, and sends it to the data transmission unit;
  • the data transmission unit of the head-mounted display 100 includes an area-information sending subunit and a video-data receiving subunit; specifically, the area-information sending subunit establishes a wireless connection with the area-information receiving subunit in the communication unit of the video output device 400 and sends the key area information to the area-information receiving subunit;
  • the area information receiving subunit of the video output device 400 transmits the real-time key area information of the user's eyeball gaze to the video sub-area processing unit;
  • after receiving new key area information, the video sub-area processing unit of the video output device 400 partitions the frame image to be output according to that information, that is, it divides the frame image into a high-definition area and a weakened area, where the high-definition area corresponds to the key area and the weakened area is the part of the frame outside the high-definition area; it then applies lossless compression coding to the data of the high-definition area and lossy compression coding to the data of the weakened area, the compression coding here using the H.264 standard.
  • the video sub-area processing unit of the video output device 400 transmits the encoded video data to the video data transmitting subunit, and the video data transmitting subunit establishes a wireless connection with the video data receiving subunit in the data transmission unit of the head mounted display 100, Thereby, the sub-region compression-encoded frame image is wirelessly transmitted to the head mounted display 100.
  • after the video-data receiving subunit of the head-mounted display 100 receives the frame image, it sends the frame image to the video decoding unit; after receiving the frame image, the video decoding unit decodes the video using the inverse algorithm of the video compression and sends the decoded image to the display unit, and the display unit displays and outputs the frame image.
  • In this way, the video processing system provided in this embodiment can transmit video data wirelessly between the video output device and the head-mounted display, which greatly improves the user experience; moreover, during video processing the user's key viewing area is determined, the video in that key viewing area is transmitted losslessly, and the video in the remaining areas is weakened, which achieves high-definition video in the key area while also ensuring the efficiency of compressed video transmission.
  • FIG. 6 is a schematic flowchart of a video processing method according to an embodiment of the present invention. Referring to FIG. 6, the video processing method includes the following steps:
  • Step S610: capture the head and eye images of the head-mounted-display user in real time;
  • Step S620: calculate the motion trajectory of the user's eyeballs from the head and eye images, and obtain the key area information of the display screen at which the user's eyeballs are currently gazing;
  • Step S630: send the key area information to the video output device connected to the head-mounted display, so that the video output device performs sub-region compression coding on the current frame image of the video to be output according to the key area information; and receive the sub-region compression-coded frame image sent by the video output device;
  • Step S640: perform sub-region decoding on the received sub-region compression-coded frame image using the inverse algorithm of the compression coding;
  • Step S650: display and output the decoded frame image, so that the data corresponding to the key area in the frame image is displayed in high definition and the data corresponding to the remaining areas is displayed in weakened form.
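  • A minimal end-to-end sketch of steps S610 to S650 as a single loop; every object and method name here is an assumption made for the example rather than an interface defined by the invention.

```python
def video_processing_loop(camera, eye_tracker, link, decoder, screen):
    """One illustrative pass per displayed frame, following S610-S650."""
    while True:
        head_eye_image = camera.capture()                 # S610: shoot head/eye images
        key_area = eye_tracker.estimate(head_eye_image)   # S620: compute key area info
        link.send_key_area(key_area)                      # S630: send it to the output device
        packet = link.receive_frame()                     #        and receive a region-coded frame
        frame = decoder.decode(packet, key_area)          # S640: sub-region decoding
        screen.show(frame)                                # S650: HD key area, weakened elsewhere
```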
  • It should be noted that the video processing method of this embodiment corresponds to the working process of the head-mounted display described above; therefore, for anything not described in the implementation steps of the video processing method of this embodiment, reference may be made to the related description in the foregoing head-mounted-display embodiments of the present invention, and it is not repeated here.
  • In summary, in the technical solution of the embodiments of the present invention, the user's head and eye images are collected in real time, the key area of the display screen at which the user's eyeballs are currently gazing is calculated from the collected images, and the key area information is sent to the video output device connected to the head-mounted display, so that the video output device can perform sub-region compression coding on the current frame image of the video to be output according to that information: the data corresponding to the key area the user is watching is compressed losslessly, and the video data in the remaining areas is compressed lossily.
  • The video output device then sends the region-processed video frame to the head-mounted display, which performs sub-region decoding and displays the output. By losslessly compressing, or not compressing, the video in the key area the user is watching, high-definition display and lossless data are guaranteed; by lossily compressing the data in the remaining areas, the amount of data is reduced so that the video can be transmitted wirelessly to the head-mounted display, data transmission efficiency is improved, transmission time is saved, and the user experience is optimized.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A head-mounted display, a video output device, and a video processing method and system. The head-mounted display includes: an image acquisition unit that captures the user's head and eye images in real time; an image processing unit that calculates the motion trajectory of the user's eyeballs from the head and eye images and obtains key area information of the display screen at which the eyeballs are currently gazing; a data transmission unit that sends the key area information to the video output device, so that the device performs sub-region compression coding on the current frame image according to the key area information, and receives the sub-region compression-coded frame image sent by the device; a video decoding unit that performs sub-region decoding on the sub-region compression-coded frame image; and a display unit that displays and outputs the decoded frame image, so that the data corresponding to the key area in the frame image is displayed in high definition and the data of the remaining areas is displayed in weakened form. Dividing each video frame into regions based on the user's gaze position optimizes the user experience of the head-mounted display.

Description

一种头戴显示器、视频输出设备和视频处理方法、系统 技术领域
本发明涉及头戴显示技术领域,具体涉及一种头戴显示器、视频输出设备和视频处理方法、系统。
发明背景
目前,头戴显示器取得了飞速的发展,头戴显示器(Head Mount Display,简称HMD)可以将从智能手机或电脑等视频输出设备中逐帧获取的视频显示出来供佩戴者观看。图1是现有技术头戴显示器的视频传输场景示意图,参见图1,由于视频数据的传输量较大,所以需要对视频数据进行压缩处理。以智能手机为例,视频传输过程包括:手机采用压缩编码算法对视频中每一帧进行压缩编码,然后将压缩编码后的视频帧通过HDMI连接线发送到头戴显示器,头戴显示器进行解码后显示。
视频压缩编码方法有多种,从大范围而言,可分为无损压缩编码和有损压缩。无损压缩编码的常用算法有:哈弗曼编码、游程编码、算数编码等,其优点是:经过无损压缩编码后的视频在解码的时候可以完全恢复,缺点是数据传输的数据量还是较大,占用传输资源较多和花费的传输时间较长。而有损压缩编码的常用算法有:矢量量化编码、预测编码、模型图像编码等,这种压缩编码是不可逆的,压缩后数据并解码后与原始数据之间存在一定的差异,但是可以获得较高的压缩率。
由上可知,现有技术的头戴显示器中播放的视频都是按照一帧一帧这样整帧的进行有损压缩或无损压缩后传入的,视频传输效率和数据完整性无法兼顾,用户体验不佳。
发明内容
本发明提供了一种头戴显示器、视频输出设备和视频处理方法、系统,以解决现有技术头戴显示器中播放视频都是整帧进行有损压缩或无损压缩后传入,视频传输效率和视频完整性无法兼顾,用户体验不佳的问题。
为了达到上述目的,本发明的技术方案是这样实现的:
根据本发明的一个方面,提供了一种头戴显示器,该头戴显示器包括图像采集单元、图像处理单元、数据传输单元、视频解码单元和显示单元:
图像采集单元,用于实时拍摄用户的头部和眼部图像并将用户的头部和眼部图像发送给图像处理单元;
图像处理单元,用于根据接收的用户头部和眼部图像计算用户眼球的运动轨迹,得到用 户眼球当前注视显示屏幕的重点区域信息,将该重点区域信息发送给数据传输单元;
数据传输单元,用于将重点区域信息发送至与头戴显示器连接的视频输出设备,使得视频输出设备对待输出视频中的当前帧图像按照该重点区域信息进行分区域压缩编码处理;以及,用于接收视频输出设备发送的分区域压缩编码处理后的帧图像,并将该帧图像发送给视频解码单元;
视频解码单元,用于对接收到的分区域压缩编码处理后的帧图像利用压缩编码的逆向算法进行分区域解码处理,将解码后的帧图像发送给显示单元;
显示单元,用于显示输出解码后的帧图像,使得该帧图像中与重点区域对应的数据高清显示,其余区域对应的数据弱化显示。
根据本发明的另一个方面,提供了一种视频输出设备,该视频输出设备包括:
通信单元,用于接收头戴显示器发送的用户眼球当前注视显示屏幕的重点区域信息,将该重点区域信息发送给视频分区域处理单元;
视频分区域处理单元,用于按照重点区域信息将待输出视频中的当前帧图像划分为高清区域和弱化区域,并对高清区域中的数据不进行压缩编码或进行无损压缩编码处理,对弱化区域中的数据进行有损压缩编码处理,其中,高清区域为重点区域对应的区域;
通信单元,还用于将分区域压缩编码处理后的帧图像发送至头戴显示器。
根据本发明的又一个方面,提供了一种视频处理方法,该方法包括:
实时拍摄头戴显示器用户的头部和眼部图像;
对用户的头部和眼部图像计算用户眼球的运动轨迹,得到用户眼球当前注视显示屏幕的重点区域信息;
将重点区域信息发送至与头戴显示器连接的视频输出设备,使得视频输出设备对待输出视频中的当前帧图像按照该重点区域信息进行分区域压缩编码处理;以及,接收视频输出设备发送的分区域压缩编码处理后的帧图像;
对接收的分区域压缩编码处理后的帧图像利用压缩编码的逆向算法进行分区域解码处理;
显示输出解码处理后的帧图像,使得该帧图像中与重点区域对应的数据高清显示,其余区域对应的数据弱化显示。
根据本发明的再一个方面,提供了一种视频处理系统,该系统包括:如本发明一个方面的头戴显示器,以及如本发明另一个方面的视频输出设备;头戴显示器和视频输出设备之间建立有线或无线数据连接。
本发明的有益效果是:本发明实施例的技术方案,通过实时采集用户的头部和眼部图像, 并基于采集到的图像计算出用户眼球当前注视显示屏幕的重点区域,然后将这一重点区域的信息发送给与头戴显示器连接的视频输出设备,使得视频输出设备可根据这一重点区域信息,对待输出视频中当前帧图像进行分区域压缩编码:对那些对应到用户观看的重点区域的数据进行无损压缩,并对其余区域内的视频数据进行有损压缩。视频输出设备再将分区域处理后的视频帧发送给头戴显示器,头戴显示器进行分区域解码后显示输出,从而通过对用户观看的重点区域内视频的无损压缩或不压缩,保证了视频的高清显示和数据的无损性,又通过对其余区域的数据采用有损压缩,减小了数据量使得视频可以通过无线方式传输给头戴显示器,并提高了数据传输效率,节省了传输时间,优化了用户使用体验。
附图简要说明
图1是现有技术中头戴显示器与视频输出设备视频传输示意图;
图2是本发明一个实施例的一种头戴显示器的结构框图;
图3是本发明一个实施例的帧图像分区域的示意图;
图4是本发明一个实施例的视频输出设备的结构框图;
图5是本发明一个实施例的视频处理系统的结构框图;
图6是本发明一个实施例的视频处理方法的流程示意图。
具体实施方式
本发明的设计构思是:针对现有技术头戴显示器中播放的视频都是一整帧、一整帧的压缩处理后传入的,导致了用户体验不佳的问题。本发明实施例提出了基于头戴显示器佩戴者眼球的运动轨迹来计算出佩戴者观看显示屏幕的重点区域,并将该重点区域信息发送给视频输出设备,使得视频输出设备将视频帧按照动态获取的重点区域信息进行分区域处理后传入到头戴显示器中,从而可以实现在弱化区域内数据的有损压缩处理,使得视频可以通过无线方式传输给头戴显示器,而高清区域视频不做处理或无损压缩处理,既减小了数据量同时也保证了重点区域内视频数据的完整性。
实施例一
图2是本发明一个实施例的一种头戴显示器的结构框图,参见图2,该头戴显示器100包括:图像采集单元101、图像处理单元102、数据传输单元103、视频解码单元104和显示单元105:
图像采集单元101,用于实时拍摄用户的头部和眼部图像并将用户的头部和眼部图像发送给图像处理单元102;
图像处理单元102,用于根据接收的用户头部和眼部图像计算用户眼球的运动轨迹,得 到用户眼球当前注视显示屏幕的重点区域信息,将该重点区域信息发送给数据传输单元103;
数据传输单元103,用于将重点区域信息发送至与头戴显示器连接的视频输出设备,使得视频输出设备对待输出视频中的当前帧图像按照该重点区域信息进行分区域压缩编码处理;以及,用于接收视频输出设备发送的分区域压缩编码处理后的帧图像,并将该帧图像发送给视频解码单元104;
视频解码单元104,用于对接收到的分区域压缩编码处理后的帧图像利用压缩编码的逆向算法进行分区域解码处理,将解码后的帧图像发送给显示单元105;
显示单元105,用于显示输出解码后的帧图像,使得该帧图像中与重点区域对应的区域内的数据高清显示,其余区域的数据弱化显示。
通过图2所示的头戴显示器,实时采集佩戴者的头部和眼部图像,并基于采集到的图像计算到佩戴者当前注视的重点区域,然后将这一重点区域信息发送给视频输出设备,使得视频输出设备根据该重点区域信息对待传输视频中的一帧图像进行分区域压缩编码处理并发送给头戴显示器,由头戴显示器进行解码后显示。由于能够对一帧图像进行动态分区域压缩编码从而与现有技术中一整帧的处理方式相比,既能保证用户观看的重点区域内视频高清显示和数据的无损性,又能提高数据传输效率,节省传输时间。
实施例二
图3是本发明一个实施例的帧图像分区域的示意图。本实施例中结合图3所示的帧图像分区域结构示意,对该头戴显示器的解码和显示进行重点说明,其他内容参见本发明的其他实施例。
本实施例中先通过头戴显示器中设置的图像采集单元采集用户的头部和眼部的图像。这里的图像采集单元可以是摄像头,并且摄像头可以安装在头戴显示器的显示屏幕与用户眼部之间的空间中,通过摄像头实时拍摄用户的头部和眼球得到实时图像。
需要说明的是,在眼球运动跟踪的概念里,眼球的主要运动有三种形式:注视,跳动和平滑尾随跟踪。根据现有技术的调查结论可知眼球运动的角度范围约为18度,超过12度的话就需要头部运动来协调其完成。所以在对眼球跟踪的时候,要考虑头部和眼球两个因素。基于此,本实施例中利用摄像头采集用户的头部和眼部图像。摄像头动态采集到人眼观看的原始图像,发送给图像处理单元。
图像处理单元根据人眼跟踪算法对原始图像进行处理,计算用户眼球的运动轨迹,并确定出用户眼球观看的重点区域,然后将重点区域信息(如圆心的坐标值和半径)传输到视频输出设备,使得视频输出设备在视频压缩编码时对视频分区域处理。图像处理单元进行人眼视线追踪的基本技术可分为硬件和软件。以硬件为基础的方法主要是用户带上特制的头盔或 者使用头部固定支架来获取眼球的轨迹,但是此方法不便利而且误差较大。以软件为基础的眼球跟踪技术是先利用摄像机获取人的眼和脸部的图像,将图像再处理过之后,算出眼球的轨迹,并将数据传输到视频处理设备,从而控制视频的处理和显示。本实施例中即采用了软件方式,图像处理单元采用图像处理芯片,并根据图像处理算法和技术跟踪眼球运动。
这里的重点区域是显示屏幕上以用户眼球注视的坐标值为圆心,以预定长度为半径的区域。在头戴显示器中,用户观看显示屏幕时,眼球对应在显示屏幕上一个位置坐标,然后根据眼球可观测到区域范围的经验值确定出一个半径,即可确定出眼球的重点观看区域。这里的位置坐标是显示屏上像素点的坐标。例如头戴显示器的显示屏分辨率为1680×1050,即水平方向有1680个像素点,垂直方向有1050个像素点,则眼球注视的位置坐标可以是(800,500),即水平方向第800个像素点和垂直方向第500个像素点确定的位置坐标,计算得到眼球对应的位置坐标后,再结合半径的长度,即可得到重点区域。
参见图3,本实施例中,示意出的重点区域即为虚线限定的圆形区域,重点区域在帧图像显示时即为高清区域,帧图像中的除了高清区域的区域都是弱化区域。
由于显示屏分辨率与传入视频中每一帧大小是一致的,所以头戴显示器计算出用户眼球当前注视的重点区域后,将重点区域信息发送给视频输出设备,这样视频输出设备能够按照重点区域信息将一帧图像划分为相应的高清区域,并将高清区域外的区域作为弱化区域,进行动态的分区域处理。
在本实施例中,数据传输单元包括:区域信息发送子单元和视频数据接收子单元;区域信息发送子单元,用于将重点区域信息发送至与头戴显示器连接的视频输出设备;视频数据接收子单元,用于接收视频输出设备发送的分区域压缩编码处理后的帧图像;区域信息发送子单元和视频数据接收子单元为无线传输模块或有线传输模块。
需要说明的是,由于现有技术中视频帧基本上都是按照一整帧的处理方式,这种方式相对数据量较大,导致现有的主流头戴显示器的视频传输途径都是通过有线传输方式,如图1所示,视频输出设备和头戴显示器之间通过高清晰度多媒体接口(High Definition Multimedia Interface,简称HDMI)连接线连接,视频输出设备中的视频数据整帧压缩后通过HDMI数据线传到头戴显示器。这种数据传输方式需要携带HDMI连接线,并且如果为了提高传输效率而使用整帧有损压缩方式进行压缩处理,则会影响视频的完整性和流畅度,而为了保证视频数据完整性而采用无损压缩,又会牺牲数据传输效率和传输时间。
针对这种情况,优选地本实施例中区域信息发送子单元和视频数据接收子单元均为无线传输模块,例如,WiFi无线传输模块。这样可以省略数据线,并且通过计算出用户注视的重点区域信息,然后将这一重点区域信息发给视频输出设备使得分区域压缩编码处理,保证了 重点区域内视频无损性,其余区域数据有损压缩。一方面,由于一帧中其余区域的数据被有损压缩,从而大大减小了数据量,使得视频无线传输成为可能。另一方面,由于保证了重点区域视频的无损性,也使得后续解码处理中能够将该重点区域视频数据进行无损还原,不影响视频的完整性。
为了方便视频解码单元进行解码,本实施例中图像处理单元还用于将该重点区域信息发送给视频解码单元,视频解码单元具体用于对接收到的分区域压缩编码处理后的帧图像利用压缩编码的逆向算法以及从图像处理单元获取的重点区域信息进行分区域解码处理。
具体的,视频输出设备在进行一帧图像的压缩编码时,根据头戴显示器发送的重点区域信息进行动态分区域压缩编码,将编码后的帧图像发送给头戴显示器,头戴显示器的数据传输单元接收到编码后的帧图像后发送到视频解码单元。视频解码单元在解码时,根据从图像处理单元动态获取的重点区域信息以及编码后数据携带的编码信息进行解码,这里的编码后数据携带的编码信息可以包括:高清区域信息,高清区域内的数据及其编码算法(如,哈弗曼编码),以及弱化区域信息,弱化区域内数据及其编码算法(如矢量量化)等。由于视频解码单元从图像处理单元获取到了重点区域信息,而编码后数据的高清区域信息是与重点区域信息一致的,所以视频解码单元可以知道高清区域内的数据即为用户眼球当前注视的重点区域对应的数据,然后可以采用重点区域数据编码算法的逆向算法进行解码,从而得到重点区域对应的视频数据。
同样的,视频解码单元根据获取的重点区域信息也可以知道编码后帧图像的弱化区域,并根据该弱化区域内数据编码算法的逆向算法进行解码得到弱化区域对应的数据,然后将解码得到的帧图像发送给显示单元进行显示输出,实现该帧图像中与重点区域对应的区域内的数据高清显示,其余区域的数据弱化显示。
实施例三
图4是本发明一个实施例的视频输出设备的结构框图,本实施例中重点对视频输出设备的结构和工作过程进行说明,其他内容参见本发明的其他实施例。
参见图4,该视频输出设备400包括:
通信单元401,用于接收头戴显示器发送的用户眼球当前注视显示屏幕的重点区域信息,将该重点区域信息发送给视频分区域处理单元402;
视频分区域处理单元402,用于按照重点区域信息将待输出视频中的当前帧图像划分为高清区域和弱化区域,并对高清区域中的数据不进行压缩编码或进行无损压缩编码处理,对弱化区域中的数据进行有损压缩编码处理,其中,高清区域为重点区域对应的区域;
通信单元401,还用于将分区域压缩编码处理后的帧图像发送至头戴显示器。
本实施例中,视频输出设备400为智能手机或电脑,该智能手机或电脑中存储有视频数据。
本实施例中通信单元401包括:区域信息接收子单元和视频数据发送子单元;区域信息接收子单元,用于接收头戴显示器发送的用户眼球当前注视的显示屏幕的重点区域信息;视频数据发送子单元,用于将分区域压缩编码处理后的帧图像发送至头戴显示器;区域信息接收子单元和视频数据发送子单元为无线通信模块或有线通信模块。
具体的,区域信息接收子单元与头戴显示器的数据传输单元中的区域信息发送子单元连接,实现重点区域信息的传输。视频数据发送子单元与头戴显示器的数据传输单元中的视频数据接收子单元连接,实现视频数据的传输。另外,区域信息接收子单元和区域信息发送子单元的传输方式应当一致,例如,两者均采用无线传输方式进行数据传输。并且视频数据发送子单元和视频数据接收子单元的传输方式也应当一致。
本实施例中,视频分区域处理单元402对高清区域中的数据不进行压缩编码或进行无损压缩编码处理,对弱化区域中的数据进行有损压缩编码处理时的编码标准为H.264。H.264的视频编码优点是:低码率、高质量图像、容错能力强、网络实用性强,其最大的优势是具有很高的数据压缩比率,在同等图像质量的条件下,H.264的压缩比是MPEG-2的2倍以上,是MPEG-4的1.5~2倍。H.264在具有高压缩比的同时还拥有高质量流畅的图像,正因为此,经过H.264标准编码压缩后的视频数据,在网络传输过程中所需要的带宽更少,也更加经济。
由图4所示的视频输出设备的结构和工作过程可知,本实施例的视频输出设备400在收到头戴显示器通过眼部跟踪实时计算出的并发送的用户观看的重点区域信息后,对待输出视频的帧图像动态分区域压缩,将该帧图像按照重点区域信息划分为高清区域和弱化区域(高清区域是与重点区域对应的区域,弱化区域是一帧图像中高清区域以外的区域),然后,对高清区域的数据采用无损压缩编码算法进行无损压缩编码,或者不进行压缩编码以保证视频数据的无损性;而对于弱化区域的数据采用有损压缩处理,从而减小数据传输量,提高传输效率,节省传输时间。
另外,视频输出设备400通过采用具有很高的数据压缩比率的H.264视频编码,减少了数据量,节省了数据传输带宽。并且,通信单元401可以采用无线通信方式实现视频无线传输,省略了HDMI等数据线,从而大大提升了用户体验。
实施例四
图5是本发明一个实施例的视频处理系统的结构框图,参见图5,该视频处理系统500包括:头戴显示器100,以及视频输出设备400,头戴显示器100和视频输出设备400之间建立有线或无线数据连接。
下面以头戴显示器100和视频输出设备400之间建立无线数据连接为例对视频处理系统进行说明。
本实施例中,视频处理系统500的结构如图5所示,视频输出设备400可以是手机和电脑等输入设备,作用是提供视频输出;
头戴显示器100中的图像采集单元,将佩戴者头部和眼部照片拍下来用于后面进行眼球跟踪处理;这里的图像采集单元可以是摄像头;
头戴显示器100中的图像处理单元将图像采集单元采集到的图像做处理计算出佩戴者眼球注视的重点区域,发给数据传输单元;
头戴显示器100的数据传输单元包括:区域信息发送子单元和视频数据接收子单元,具体的,区域信息发送子单元与视频输出设备400的通信单元中区域信息接收子单元建立无线连接,将重点区域信息发送给区域信息接收子单元;
视频输出设备400的区域信息接收子单元将用户眼球注视的实时的重点区域信息发送给视频分区域处理单元;
视频输出设备400的视频分区域处理单元收到新的重点区域信息后,根据该重点区域信息,对将要输出的一帧图像划分区域,即,将一帧图像划分为高清区域和弱化区域,高清区域与重点区域对应,弱化区域是一帧中除了高清区域的区域。然后,对高清区域的数据进行无损压缩编码,对弱化区域的数据进行有损压缩编码,这里的压缩编码采用H.264标准。视频输出设备400的视频分区域处理单元将编码后的视频数据发送给视频数据发送子单元,由视频数据发送子单元与头戴显示器100的数据传输单元中的视频数据接收子单元建立无线连接,从而将分区域压缩编码后的帧图像无线发送至头戴显示器100。
头戴显示器100的视频数据接收子单元接收到的帧图像后,发送给视频解码单元,视频解码单元收到帧图像后通过视频压缩的逆向算法来对视频解码,将解码后的图像发送给显示单元。显示单元对帧图像进行显示输出。
通过上述方式,本实施例的提供的视频输出系统,可以采用无线方式在视频输出设备和头戴显示器之间传输视频数据,大大提升了用户体验,并且在视频处理的时候,通过确定用户的重点观看区域,在重点观看区域内,视频无损传输,在其余区域视频弱化处理。这样不仅可以达到重点区域视频高清晰也能保证视频压缩传输效率。
另外,该视频处理系统500中包括的头戴显示器100的其他结构和工作过程,可参见前述实施例一的描述。该视频处理系统500中包括的视频输出设备400的其他结构和工作过程,可参见前述实施例三的描述,这里不再赘述。
实施例五
图6是本发明一个实施例的视频处理方法的流程示意图,参见图6,该视频处理方法包括如下步骤:
步骤S610,实时拍摄头戴显示器用户的头部和眼部图像;
步骤S620,对用户的头部和眼部图像计算用户眼球的运动轨迹,得到用户眼球当前注视显示屏幕的重点区域信息;
步骤S630,将重点区域信息发送至与头戴显示器连接的视频输出设备,使得视频输出设备对待输出视频中的当前帧图像按照该重点区域信息进行分区域压缩编码处理;以及,接收视频输出设备发送的分区域压缩编码处理后的帧图像;
步骤S640,对接收的分区域压缩编码处理后的帧图像利用压缩编码的逆向算法进行分区域解码处理;
步骤S650,显示输出解码处理后的帧图像,使得该帧图像中与重点区域对应的数据高清显示,其余区域对应的数据弱化显示。
需要说明的是,本实施例的这种视频处理方法是与前述头戴显示器的工作过程相对应的是,因此,本实施例中视频处理方法的实现步骤中没有描述的内容可以参见本发明前述头戴显示器实施例中的相关说明,这里不再赘述。
综上所述,本发明实施例的技术方案,通过实时采集用户的头部和眼部图像,并基于采集到的图像计算出用户眼球当前注视显示屏幕的重点区域,然后将这一重点区域的信息发送给与头戴显示器连接的视频输出设备,使得视频输出设备可根据这一重点区域信息,对待输出视频中当前帧图像进行分区域压缩编码:对那些对应到用户观看的重点区域的数据进行无损压缩,并对其余区域内的视频数据进行有损压缩。视频输出设备再将分区域处理后的视频帧发送给头戴显示器,头戴显示器进行分区域解码后显示输出,从而通过对用户观看的重点区域内视频的无损压缩或不压缩,保证了视频的高清显示和数据的无损性,又通过对其余区域的数据采用有损压缩减小了数据量使得视频可以通过无线方式传输给头戴显示器,并提高了数据传输效率,节省了传输时间,优化了用户使用体验。

Claims (9)

  1. A head-mounted display, characterized in that the head-mounted display comprises an image acquisition unit, an image processing unit, a data transmission unit, a video decoding unit, and a display unit:
    the image acquisition unit is configured to capture a user's head and eye images in real time and send the user's head and eye images to the image processing unit;
    the image processing unit is configured to calculate a motion trajectory of the user's eyeballs from the received head and eye images, obtain key area information of the display screen at which the user's eyeballs are currently gazing, and send the key area information to the data transmission unit;
    the data transmission unit is configured to send the key area information to a video output device connected to the head-mounted display, so that the video output device performs sub-region compression coding on the current frame image of the video to be output according to the key area information; and to receive the sub-region compression-coded frame image sent by the video output device and send the frame image to the video decoding unit;
    the video decoding unit is configured to perform sub-region decoding on the received sub-region compression-coded frame image using an inverse algorithm of the compression coding, and send the decoded frame image to the display unit;
    the display unit is configured to display and output the decoded frame image, so that data corresponding to the key area in the frame image is displayed in high definition and data corresponding to the remaining areas is displayed in weakened form.
  2. The head-mounted display according to claim 1, characterized in that the key area is an area on the display screen whose center is the coordinate at which the user's eyeballs are gazing and whose radius is a predetermined length.
  3. The head-mounted display according to claim 1 or 2, characterized in that the data transmission unit comprises an area-information sending subunit and a video-data receiving subunit;
    the area-information sending subunit is configured to send the key area information to the video output device connected to the head-mounted display;
    the video-data receiving subunit is configured to receive the sub-region compression-coded frame image sent by the video output device;
    the area-information sending subunit and the video-data receiving subunit are wireless transmission modules or wired transmission modules.
  4. The head-mounted display according to claim 1, characterized in that the image processing unit is further configured to send the key area information to the video decoding unit;
    the video decoding unit is specifically configured to perform sub-region decoding on the received sub-region compression-coded frame image using the inverse algorithm of the compression coding and the key area information acquired from the image processing unit.
  5. A video output device, characterized in that the video output device comprises:
    a communication unit configured to receive key area information, sent by a head-mounted display, of the display screen at which a user's eyeballs are currently gazing, and to send the key area information to a video sub-area processing unit;
    the video sub-area processing unit is configured to divide the current frame image of the video to be output into a high-definition area and a weakened area according to the key area information, to leave the data in the high-definition area uncompressed or apply lossless compression coding to it, and to apply lossy compression coding to the data in the weakened area, wherein the high-definition area is the area corresponding to the key area;
    the communication unit is further configured to send the sub-region compression-coded frame image to the head-mounted display.
  6. The video output device according to claim 5, characterized in that the video output device is a smartphone or a computer, and video data is stored in the smartphone or computer.
  7. The video output device according to claim 5 or 6, characterized in that the communication unit comprises an area-information receiving subunit and a video-data sending subunit;
    the area-information receiving subunit is configured to receive the key area information, sent by the head-mounted display, of the display screen at which the user's eyeballs are currently gazing;
    the video-data sending subunit is configured to send the sub-region compression-coded frame image to the head-mounted display;
    the area-information receiving subunit and the video-data sending subunit are wireless communication modules or wired communication modules.
  8. A video processing method, characterized in that the method comprises:
    capturing head and eye images of a head-mounted-display user in real time;
    calculating a motion trajectory of the user's eyeballs from the head and eye images, and obtaining key area information of the display screen at which the user's eyeballs are currently gazing;
    sending the key area information to a video output device connected to the head-mounted display, so that the video output device performs sub-region compression coding on the current frame image of the video to be output according to the key area information; and receiving the sub-region compression-coded frame image sent by the video output device;
    performing sub-region decoding on the received sub-region compression-coded frame image using an inverse algorithm of the compression coding;
    displaying and outputting the decoded frame image, so that data corresponding to the key area in the frame image is displayed in high definition and data corresponding to the remaining areas is displayed in weakened form.
  9. A video processing system, characterized in that the system comprises a head-mounted display according to any one of claims 1 to 4 and a video output device according to any one of claims 5 to 7;
    a wired or wireless data connection is established between the head-mounted display and the video output device.
PCT/CN2016/114042 2016-06-23 2016-12-31 一种头戴显示器、视频输出设备和视频处理方法、系统 WO2017219652A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610466833.6 2016-06-23
CN201610466833.6A CN105979224A (zh) 2016-06-23 2016-06-23 一种头戴显示器、视频输出设备和视频处理方法、系统

Publications (1)

Publication Number Publication Date
WO2017219652A1 true WO2017219652A1 (zh) 2017-12-28

Family

ID=57020541

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/114042 WO2017219652A1 (zh) 2016-06-23 2016-12-31 一种头戴显示器、视频输出设备和视频处理方法、系统

Country Status (2)

Country Link
CN (1) CN105979224A (zh)
WO (1) WO2017219652A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111225228A (zh) * 2020-01-15 2020-06-02 北京拙河科技有限公司 一种视频直播方法、装置、设备和介质
US10859840B2 (en) 2017-08-14 2020-12-08 Goertek Inc. Graphics rendering method and apparatus of virtual reality
CN114935971A (zh) * 2021-02-05 2022-08-23 京东方科技集团股份有限公司 显示驱动芯片、显示装置和显示驱动方法

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105979224A (zh) * 2016-06-23 2016-09-28 青岛歌尔声学科技有限公司 一种头戴显示器、视频输出设备和视频处理方法、系统
CN108347557A (zh) * 2017-01-21 2018-07-31 盯盯拍(东莞)视觉设备有限公司 全景图像拍摄装置、显示装置、拍摄方法以及显示方法
CN108347556A (zh) * 2017-01-21 2018-07-31 盯盯拍(东莞)视觉设备有限公司 全景图像拍摄方法、全景图像显示方法、全景图像拍摄装置以及全景图像显示装置
CN107333119A (zh) 2017-06-09 2017-11-07 歌尔股份有限公司 一种显示数据的处理方法和设备
CN110545430A (zh) * 2018-05-28 2019-12-06 北京松果电子有限公司 视频传输方法和装置
JP7100523B2 (ja) * 2018-07-27 2022-07-13 京セラ株式会社 表示装置、表示システムおよび移動体
TWI827874B (zh) * 2019-11-05 2024-01-01 宏達國際電子股份有限公司 顯示系統
CN112887702A (zh) * 2021-01-11 2021-06-01 杭州灵伴科技有限公司 一种近眼显示设备的摄像头数据的传输方法及近眼显示设备
CN113327489A (zh) * 2021-06-03 2021-08-31 西安工业大学 一种基于机器视觉的分层机械式智能盲文阅读机
CN113256661A (zh) * 2021-06-23 2021-08-13 北京蜂巢世纪科技有限公司 图像处理方法、装置、设备、介质及程序产品
TW202301083A (zh) 2021-06-28 2023-01-01 見臻科技股份有限公司 提供精準眼動追蹤之光學系統和相關方法
CN114786037B (zh) * 2022-03-17 2024-04-12 青岛虚拟现实研究院有限公司 一种面向vr投影的自适应编码压缩方法
CN116737097B (zh) * 2022-09-30 2024-05-17 荣耀终端有限公司 一种投屏图像处理方法及电子设备
CN116132431B (zh) * 2023-04-19 2023-06-30 泰诺尔(北京)科技有限公司 一种数据传输方法及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103930817A (zh) * 2011-06-20 2014-07-16 谷歌公司 用于数据的自适应传送的系统和方法
CN104539929A (zh) * 2015-01-20 2015-04-22 刘宛平 带有运动预测的立体图像编码方法和编码装置
CN104767992A (zh) * 2015-04-13 2015-07-08 北京集创北方科技有限公司 头戴式显示系统及影像低频宽传输方法
GB2526515A (en) * 2014-03-25 2015-12-02 Jaguar Land Rover Ltd Image capture system
CN105979224A (zh) * 2016-06-23 2016-09-28 青岛歌尔声学科技有限公司 一种头戴显示器、视频输出设备和视频处理方法、系统
CN205812229U (zh) * 2016-06-23 2016-12-14 青岛歌尔声学科技有限公司 一种头戴显示器、视频输出设备和视频处理系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103930817A (zh) * 2011-06-20 2014-07-16 谷歌公司 用于数据的自适应传送的系统和方法
GB2526515A (en) * 2014-03-25 2015-12-02 Jaguar Land Rover Ltd Image capture system
CN104539929A (zh) * 2015-01-20 2015-04-22 刘宛平 带有运动预测的立体图像编码方法和编码装置
CN104767992A (zh) * 2015-04-13 2015-07-08 北京集创北方科技有限公司 头戴式显示系统及影像低频宽传输方法
CN105979224A (zh) * 2016-06-23 2016-09-28 青岛歌尔声学科技有限公司 一种头戴显示器、视频输出设备和视频处理方法、系统
CN205812229U (zh) * 2016-06-23 2016-12-14 青岛歌尔声学科技有限公司 一种头戴显示器、视频输出设备和视频处理系统

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10859840B2 (en) 2017-08-14 2020-12-08 Goertek Inc. Graphics rendering method and apparatus of virtual reality
CN111225228A (zh) * 2020-01-15 2020-06-02 北京拙河科技有限公司 一种视频直播方法、装置、设备和介质
CN114935971A (zh) * 2021-02-05 2022-08-23 京东方科技集团股份有限公司 显示驱动芯片、显示装置和显示驱动方法

Also Published As

Publication number Publication date
CN105979224A (zh) 2016-09-28

Similar Documents

Publication Publication Date Title
WO2017219652A1 (zh) 一种头戴显示器、视频输出设备和视频处理方法、系统
CN111580765B (zh) 投屏方法、投屏装置、存储介质、被投屏设备与投屏设备
CN109417624B (zh) 用于提供和显示内容的装置和方法
WO2018077142A1 (zh) 全景视频的处理方法、装置及系统
US20200267384A1 (en) Low bitrate encoding of panoramic video to support live streaming over a wireless peer-to-peer connection
KR101634500B1 (ko) 미디어 작업부하 스케줄러
JP5726919B2 (ja) リモートディスプレイに画像をレンダリングするためのデルタ圧縮ならびに動き予測およびメタデータの修正を可能にすること
US10454986B2 (en) Video synchronous playback method, apparatus, and system
WO2019157803A1 (zh) 传输控制方法
CA2974104A1 (en) Video transmission based on independently encoded background updates
WO2017084309A1 (zh) 视频的无线传输设备、视频播放设备、方法及系统
US9344678B2 (en) Information processing apparatus, information processing method and computer-readable storage medium
Yang et al. Fovr: Attention-based vr streaming through bandwidth-limited wireless networks
CN205812229U (zh) 一种头戴显示器、视频输出设备和视频处理系统
US20130336381A1 (en) Video transmission system and transmitting device and receiving device thereof
KR101410837B1 (ko) 비디오 메모리의 모니터링을 이용한 영상 처리 장치
KR20130022142A (ko) 스마트모듈을 이용한 실시간 양방향 영상 모니터링 시스템 및 방법
WO2018004936A1 (en) Apparatus and method for providing and displaying content
CN108184053B (zh) 嵌入式图像处理方法及装置
CN102843566A (zh) 一种3d视频数据的通讯方法和设备
WO2021199184A1 (ja) 画像表示システム、画像処理装置、画像表示方法、およびコンピュータプログラム
WO2021042341A1 (zh) 视频显示方法、接收端、系统及存储介质
KR102183895B1 (ko) 가상 현실 비디오 스트리밍에서의 관심영역 타일 인덱싱
CN114697731A (zh) 投屏方法、电子设备及存储介质
US10616620B1 (en) Low bitrate encoding of spherical video to support live streaming over a high latency and/or low bandwidth network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16906189

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16906189

Country of ref document: EP

Kind code of ref document: A1