WO2020024593A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2020024593A1
WO2020024593A1 (Application PCT/CN2019/077304, CN2019077304W)
Authority
WO
WIPO (PCT)
Prior art keywords
rendering
image
area
processed
viewing angle
Prior art date
Application number
PCT/CN2019/077304
Other languages
English (en)
French (fr)
Inventor
刘琳
秦林婵
黄通兵
Original Assignee
北京七鑫易维信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京七鑫易维信息技术有限公司
Publication of WO2020024593A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/80 - Geometric correction
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image

Definitions

  • the present application relates to the field of image processing, and in particular, to a method and a device for determining a gaze point position.
  • Virtual reality (VR) technology refers to the generation of a virtual environment by modern high-tech means with computer technology at their core. With the help of special input/output devices, users can interact naturally with objects in the virtual world and thereby obtain, through sight, hearing, touch, and so on, the same sensations as in the real world.
  • the virtual environment is a computer-generated, real-time dynamic virtual three-dimensional stereo image. When the user rotates his head or eyeball, the position of the gaze point changes, so the user can see the virtual three-dimensional stereo image in different directions.
  • Traditional gaze point position determination methods are based on tracking both of the user's eyes or on tracking a single eye. However, such methods may determine the location of the fixation point inaccurately. Furthermore, the quality of the image obtained after processing the image to be processed according to that fixation point location may not be high, affecting the user experience.
  • embodiments of the present application provide an image processing method and device.
  • an embodiment of the present application provides an image processing method, where the method includes: acquiring a to-be-processed image corresponding to the visual field range of a target object, the visual field range including a binocular visual field coincidence range and a non-binocular visual field coincidence range; performing image processing on the image to be processed based on a binocular fixation point within the binocular visual field coincidence range; and performing image processing on the image to be processed based on a monocular fixation point within the non-binocular visual field coincidence range.
  • performing image processing on the image to be processed based on the binocular fixation point, and/or based on the monocular fixation point, includes: determining multiple rendering regions of the image to be processed according to the position coordinates of that fixation point, where different rendering regions correspond to different rendering degrees; and performing image rendering on the image to be processed according to the rendering degrees corresponding to the multiple rendering regions.
  • the multiple rendering regions include at least two or more of the following: a first rendering area, a second rendering area, a third rendering area, and a fourth rendering area;
  • the first rendering area is a circular area taking the position coordinates of the fixation point as its origin and spanning a first preset viewing angle;
  • the second rendering area is the circular area of the second preset viewing angle centered on the position coordinates of the fixation point, excluding the first rendering area;
  • the third rendering area is the circular area of the third preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering area and the second rendering area;
  • the fourth rendering area is the circular area of the fourth preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering area, the second rendering area, and the third rendering area.
  • the first preset viewing angle is a 1.5-degree viewing angle
  • the second preset viewing angle is a 35-degree viewing angle
  • the third preset viewing angle is a 60-degree viewing angle
  • the fourth preset viewing angle is a 110-degree viewing angle.
  • performing image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions includes: performing 100% image rendering on the image to be processed corresponding to the first rendering area; 50% image rendering on that corresponding to the second rendering area; 25% image rendering on that corresponding to the third rendering area; and 2% image rendering on that corresponding to the fourth rendering area.
  • an embodiment of the present application provides an image processing apparatus, where the apparatus includes:
  • An obtaining unit configured to obtain a to-be-processed image corresponding to a target object within a visual field range, the visual field range including a binocular visual field coincidence range and a non-binocular visual field coincidence range;
  • a first processing unit configured to perform image processing on the image to be processed based on the binocular gaze point within the binocular field of view coincidence range
  • a second processing unit is configured to perform image processing on the image to be processed based on a monocular fixation point within the non-binocular field of view coincidence range.
  • performing image processing on the image to be processed based on the binocular fixation point, and/or based on the monocular fixation point, includes: determining multiple rendering regions of the image to be processed according to the position coordinates of that fixation point, where different rendering regions correspond to different rendering degrees; and performing image rendering on the image to be processed according to the rendering degrees corresponding to the multiple rendering regions.
  • the multiple rendering regions include at least two or more of the following: a first rendering area, a second rendering area, a third rendering area, and a fourth rendering area;
  • the first rendering area is a circular area taking the position coordinates of the fixation point as its origin and spanning a first preset viewing angle;
  • the second rendering area is the circular area of the second preset viewing angle centered on the position coordinates of the fixation point, excluding the first rendering area;
  • the third rendering area is the circular area of the third preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering area and the second rendering area;
  • the fourth rendering area is the circular area of the fourth preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering area, the second rendering area, and the third rendering area.
  • the first preset viewing angle is a 1.5-degree viewing angle
  • the second preset viewing angle is a 35-degree viewing angle
  • the third preset viewing angle is a 60-degree viewing angle
  • the fourth preset viewing angle is a 110-degree viewing angle.
  • the rendering unit is set to: perform 100% image rendering on the image to be processed corresponding to the first rendering area; 50% image rendering on that corresponding to the second rendering area; 25% image rendering on that corresponding to the third rendering area; and 2% image rendering on that corresponding to the fourth rendering area.
  • the image processing method and device provided in the embodiments of the present application include: acquiring a to-be-processed image corresponding to the visual field range of a target object, where the visual field range includes a binocular visual field coincidence range and a non-binocular visual field coincidence range; performing image processing on the to-be-processed image based on the binocular fixation point within the binocular field of view coincidence range; and performing image processing on the to-be-processed image based on the monocular fixation point within the non-binocular field of view coincidence range.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a visual field range of a target object according to an embodiment of the present application
  • FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of multiple rendering regions according to an embodiment of the present application.
  • FIG. 5 is a structural block diagram of an image processing apparatus according to an embodiment of the present application.
  • if the position of the fixation point is determined by tracking both of the user's eyes, that is, by tracking the user's left and right eyes, a weight is set separately for the left eye and the right eye when determining the fixation point position; in general, the weight of the right eye is higher than that of the left eye. In this way, when the gaze point is located in the range of the left-eye field of view minus the binocular field coincidence range, the determined gaze point position is not accurate.
  • if the gaze point position is determined by tracking a single eye of the user, that is, according to the left eye or the right eye of the user, then when the fixation point position is determined by tracking the left eye and the fixation point lies in the range of the right-eye field of view minus the binocular field coincidence range, the determined fixation point position is not accurate; likewise, when the fixation point position is determined by tracking the right eye and the fixation point lies in the range of the left-eye field of view minus the binocular field coincidence range, the determined fixation point position is not accurate.
  • the quality of the image obtained after processing the image to be processed according to the position of the fixation point may be low, which affects the user experience.
  • a to-be-processed image corresponding to the visual field range of a target object is obtained, where the visual field range includes a binocular field of view coincidence range and a non-binocular field of view coincidence range; within the binocular field of view coincidence range, image processing is performed on the to-be-processed image based on the binocular fixation point; within the non-binocular field of view coincidence range, image processing is performed on the to-be-processed image based on the monocular fixation point.
  • different fixation point position determination strategies are adopted in different visual field ranges, so that the determined fixation point positions in each visual field range are more accurate.
  • the image to be processed is further processed based on the determined fixation point, so that the quality of the processed image can be improved, thereby improving the user experience.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 1, the image processing method provided by this embodiment may be implemented through the following steps S101 to S103.
  • S101 Acquire a to-be-processed image corresponding to a target object in a visual field range, where the visual field range includes a binocular visual field coincidence range and a non-binocular visual field coincidence range.
  • the target object mentioned in the embodiments of the present application may be a user using a virtual reality device.
  • binocular refers to the left and right eyes of the target object
  • monocular refers to the left or right eye of the target object
  • the embodiment of the present invention does not specifically limit the image to be processed, and the image to be processed may be various images generated by using a virtual environment technology.
  • a person's field of vision is limited, that is, a target object's field of vision is limited.
  • the image in the field of view of the target object is an image that the target user can observe, and the image outside the field of view of the target object is invisible to the target user.
  • the binocular visual field coincidence range mentioned in this embodiment refers to a visual field range that can be observed by both the left and right eyes.
  • the non-binocular field of view coincidence range refers to a range of the entire visual field range of the target object minus the binocular field of view coincidence range. It can be described in conjunction with FIG. 2, which shows the visual field range of the target object.
  • the non-shaded portion 201 is a binocular visual field coincidence range, and the shaded portion includes 202 and 203, both of which are non-binocular visual field coincidence ranges.
  • image data of the image to be processed may be acquired first, so as to obtain the image to be processed according to the image data.
  • S102 Process the image to be processed based on the binocular gaze point within the binocular field of view coincidence range.
  • the fixation point mentioned in the embodiments of the present application refers to the point on an object at which the line of sight is aimed during visual perception.
  • the binocular fixation point may be obtained in advance, and the embodiment of the present application does not specifically limit the specific implementation manner of obtaining the binocular fixation point.
  • as an example, the binocular fixation point may be acquired using any one or more of an optical system, a micro-electro-mechanical system, a capacitance sensor, and a muscle current detector.
  • the binocular visual field coincidence range refers to a visual field range that can be observed by both eyes, that is, a visual field range that can be observed by both the left and right eyes. Therefore, it can be considered that the binocular gaze point is accurate within the binocular visual field coincidence range. Therefore, an image to be processed can be processed based on the binocular gaze point.
  • S103 Perform image processing on the image to be processed based on the monocular fixation point within the non-binocular field of view coincidence range.
  • the monocular fixation point may be obtained in advance, and the embodiment of the present application does not specifically limit the specific implementation of obtaining the monocular fixation point.
  • the monocular fixation point can be obtained by using any one or more devices among an optical system, a micro-electro-mechanical system, a capacitance sensor, and a muscle current detector.
  • non-binocular field of view coincidence range refers to a range of the entire visual field range minus the binocular field of view coincidence range.
  • within the non-binocular field of view coincidence range, the binocular fixation point may be inaccurate while the monocular fixation point can be considered accurate; therefore, the image to be processed may be processed based on the monocular fixation point.
  • the non-binocular field of view coincidence range includes a first field of view range and a second field of view range
  • the first field of view range is the range of the left-eye field of view minus the binocular field of view coincidence range.
  • the second field of view range is the range of the right-eye field of view minus the binocular field of view coincidence range.
  • FIG. 2 is a schematic diagram of a visual field range of a target object according to an embodiment of the present application.
  • the non-binocular visual field coincident range in FIG. 2 includes a first visual field range 202 and a second visual field range 203.
  • when the binocular gaze point of the target object is within the non-binocular field of view coincidence range, the binocular tracking mode is switched to the monocular tracking mode to obtain the monocular fixation point of the target object; in a specific implementation, this can be: within the first field of view range, determining the position coordinates of the monocular fixation point in the image to be processed according to the position of the left eyeball, and within the second field of view range, according to the position of the right eyeball.
  • the first visual field range 202 is a visual field range that can be observed by the left eye and cannot be observed by the right eye. Therefore, within the first visual field range, determining the position coordinates of the monocular fixation point in the image to be processed according to the position of the left eyeball makes those position coordinates relatively accurate.
  • the second visual field range 203 is a visual field range that can be observed by the right eye and cannot be observed by the left eye. Therefore, within the second field of view, the position coordinates of the monocular fixation point in the image to be processed are determined according to the position of the right eyeball, which makes those position coordinates relatively accurate.
  • performing image processing on the image to be processed includes rendering the image to be processed according to the binocular fixation point or the monocular fixation point.
  • FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 3, it can be implemented through the following steps S301-S302.
  • S301 Determine multiple rendering regions of the image to be processed according to position coordinates of the binocular fixation point or the monocular fixation point, and different rendering regions have different rendering degrees.
  • the position coordinates may be position coordinates in a pre-established three-dimensional space coordinate system, or position coordinates in another form, and may be determined according to the actual situation.
  • the number of the rendering regions is not limited in the embodiment of the present application, and the number of the rendering regions may be specifically determined according to an actual situation.
  • the multiple rendering regions include at least any two or more of a first rendering region, a second rendering region, a third rendering region, and a fourth rendering region.
  • the first rendering area is a circular area taking the position coordinates of the fixation point as its origin and spanning a first preset viewing angle;
  • the second rendering area is the circular area of the second preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering area.
  • the third rendering area is the circular area of the third preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering area and the second rendering area.
  • the fourth rendering area is the circular area of the fourth preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering area, the second rendering area, and the third rendering area.
  • FIG. 4 is a schematic diagram of multiple rendering regions according to an embodiment of the present application.
  • the origin O is the fixation point; the circular area where 410 is located is the first rendering area, the annular area where 420 is located is the second rendering area, the annular area where 430 is located is the third rendering area, and the annular area where 440 is located is the fourth rendering area.
  • the embodiment of the present application does not specifically limit the first preset perspective, the second preset perspective, the third preset perspective, and the fourth preset perspective.
  • the first preset angle of view, the second preset angle of view, the third preset angle of view, and the fourth preset angle of view may be specifically defined according to actual conditions.
  • in real life, when a person observes things, the central field of vision lies within a 1.5-degree viewing angle; that is, within the 1.5-degree viewing angle around the fixation point, the observed object is the clearest. Within the 35-degree viewing angle of the fixation point, the color of the object being observed can be perceived; that is, the observed object is still very clear.
  • within the 60-degree viewing angle of the fixation point, stereo vision can be generated; that is, what is observed is relatively clear.
  • the 110-degree viewing angle around the fixation point is the maximum monocular field of view of the human eye; that is, within the 110-degree viewing angle, the observed object lies in the boundary area of the field of view and is relatively blurred.
  • the plurality of rendering regions may be determined according to the characteristics of the field of view when a person observes things.
  • the first preset viewing angle may be a 1.5-degree viewing angle
  • the second preset viewing angle may be a 35-degree viewing angle
  • the third preset viewing angle may be a 60-degree viewing angle
  • the fourth preset viewing angle may be a 110-degree viewing angle.
  • S302 Perform image rendering on the image to be processed according to the rendering degrees corresponding to the multiple rendering regions.
  • the embodiment of the present application does not specifically limit the rendering degree, and the rendering degree may be specifically set according to actual conditions.
  • the first rendering region what is observed by a person is the clearest, that is, the image to be processed observed by the target object is the clearest in the first rendering region.
  • the second rendering area what people observe is very clear, that is, the image to be processed observed by the target object is very clear in the second rendering area.
  • the third rendering area what people observe is relatively clear, that is, the image to be processed observed by the target object is relatively clear in the third rendering area.
  • the fourth rendering area what people observe is relatively blurred, that is, the image to be processed observed by the target object is relatively blurred in the fourth rendering area.
  • when step S302 is specifically implemented, 100% image rendering may be performed on the image to be processed corresponding to the first rendering area; 50% image rendering on that corresponding to the second rendering area; 25% image rendering on that corresponding to the third rendering region; and 2% image rendering on that corresponding to the fourth rendering region. The rendered image is thus consistent with the user's visual experience in real life, bringing the user a good experience.
  • the multiple rendering regions of the image to be processed may be determined according to the position coordinates of the fixation point and the distance between the fixation point and the visual field boundary.
  • the visual field boundary mentioned here refers to the boundary of the image to be processed that can be observed by the target object.
  • the visual field range of the target object may be acquired in advance, so as to determine the visual field boundary.
  • the range of the visual field of the target object may be measured by using a related measuring instrument, so as to determine the boundary of the visual field.
  • the field of view input by the target object can also be received to determine the field of view boundary.
  • the multiple rendering regions of the image to be processed are determined according to the position coordinates of the fixation point and the distance between the fixation point and the boundary of the visual field, which may include at least the following situations.
  • First case: given the position coordinates of the fixation point, if the distance between the fixation point and the boundary of the field of view is greater than or equal to a first distance, the rendering areas of the image to be processed are determined as the first rendering area, the second rendering area, the third rendering area, and the fourth rendering area.
  • the first distance refers to the radius of the circular area of the fourth preset viewing angle centered on the position coordinates of the fixation point.
  • since the distance between the fixation point and the field-of-view boundary is greater than or equal to the first distance, the first, second, third, and fourth rendering areas all lie within the boundary; therefore, the rendering areas are determined as the first rendering area, the second rendering area, the third rendering area, and the fourth rendering area.
  • Second case: given the position coordinates of the fixation point, if the distance between the fixation point and the visual field boundary is less than the first distance and greater than or equal to a second distance, the rendering areas of the image to be processed are determined as the first rendering area, the second rendering area, and the third rendering area.
  • the second distance refers to the radius of the circular area of the third preset viewing angle centered on the position coordinates of the fixation point.
  • since the first, second, and third rendering regions lie within the field-of-view boundary while the fourth does not, the rendering regions of the image to be processed are determined as the first rendering region, the second rendering region, and the third rendering region.
  • Third case: given the position coordinates of the fixation point, if the distance between the fixation point and the visual field boundary is less than the second distance and greater than or equal to a third distance, the rendering areas of the image to be processed are determined as the first rendering area and the second rendering area. The third distance refers to the radius of the circular area of the second preset viewing angle centered on the position coordinates of the fixation point.
  • since the distance between the gaze point and the boundary of the field of view is less than the second distance and greater than or equal to the third distance, the first and second rendering regions are both within the boundary of the field of view, while the third and fourth rendering regions are not. Therefore, the rendering areas of the image to be processed are determined as the first rendering area and the second rendering area.
  • Fourth case: in particular, if the fixation point is very close to the boundary of the field of view, that is, given the position coordinates of the fixation point, the distance between the fixation point and the boundary of the field of view is smaller than the third distance, the rendering area of the image to be processed may be determined as the part of the first rendering area that lies within the visual field boundary.
  • an embodiment of the present application further provides an image processing device, and its working principle is described in detail below with reference to the accompanying drawings.
  • FIG. 5 is a structural block diagram of an image processing apparatus according to an embodiment of the present application.
  • the image processing apparatus 500 provided in this embodiment may include an obtaining unit 510, a first processing unit 520, and a second processing unit 530.
  • the obtaining unit 510 is configured to obtain a to-be-processed image corresponding to a target object within a visual field range, where the visual field range includes a binocular visual field coincidence range and a non-binocular visual field coincidence range;
  • a first processing unit configured to perform image processing on the image to be processed based on the binocular gaze point within the binocular field of view coincidence range
  • a second processing unit is configured to perform image processing on the image to be processed based on a monocular fixation point within the non-binocular field of view coincidence range.
  • performing image processing on the image to be processed includes: determining multiple rendering regions of the image to be processed according to the position coordinates of the binocular fixation point or the monocular fixation point, where different rendering regions correspond to different rendering degrees; and performing image rendering on the image to be processed according to the rendering degrees corresponding to the multiple rendering regions.
  • the multiple rendering regions include at least two or more of the following: a first rendering area, a second rendering area, a third rendering area, and a fourth rendering area;
  • the first rendering area is a circular area taking the position coordinates of the fixation point as its origin and spanning a first preset viewing angle;
  • the second rendering area is the circular area of the second preset viewing angle centered on the position coordinates of the fixation point, excluding the first rendering area;
  • the third rendering area is the circular area of the third preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering area and the second rendering area;
  • the fourth rendering area is the circular area of the fourth preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering area, the second rendering area, and the third rendering area.
  • the first preset viewing angle is a 1.5-degree viewing angle
  • the second preset viewing angle is a 35-degree viewing angle
  • the third preset viewing angle is a 60-degree viewing angle
  • the fourth preset viewing angle is a 110-degree viewing angle.
  • the rendering unit is set to: perform 100% image rendering on the image to be processed corresponding to the first rendering area; 50% image rendering on that corresponding to the second rendering area; 25% image rendering on that corresponding to the third rendering area; and 2% image rendering on that corresponding to the fourth rendering area.
  • since the gaze point position determination device corresponds to the gaze point position determination method provided in the foregoing embodiment, for the specific implementation of each unit of the device, reference may be made to the description of the related content in the foregoing method embodiment; details are not repeated here.
  • different fixation point position determination strategies are used in different visual field ranges, so that the determined fixation point positions in each visual field range are relatively accurate.
  • the image to be processed is further processed based on the determined fixation point, so that the quality of the processed image can be improved, thereby improving the user experience.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (Random Access Memory, RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Eye Examination Apparatus (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present application disclose an image processing method and apparatus. The method includes: acquiring a to-be-processed image corresponding to the visual field range of a target object, where the visual field range includes a binocular visual field coincidence range and a non-binocular visual field coincidence range; performing image processing on the to-be-processed image based on a binocular gaze point within the binocular visual field coincidence range; and performing image processing on the to-be-processed image based on a monocular gaze point within the non-binocular visual field coincidence range. Because different gaze point position determination strategies are adopted in different visual field ranges, the determined gaze point position in each visual field range is relatively accurate, so that the quality of the image obtained by processing the to-be-processed image based on the determined gaze point is improved.

Description

Image processing method and apparatus
Technical Field
The present application relates to the field of image processing, and in particular to a method and apparatus for determining a gaze point position.
Background
Virtual reality (VR) technology refers to the generation of a virtual environment by modern high-tech means with computer technology at their core. With the help of special input/output devices, users can interact naturally with objects in the virtual world and thereby obtain, through sight, hearing, touch, and so on, the same sensations as in the real world. The virtual environment is a computer-generated, real-time, dynamic virtual three-dimensional stereoscopic image; when the user turns the head or eyes, the position of the gaze point changes, so the user can see the virtual three-dimensional image in different directions.
Traditional gaze point position determination methods are based on tracking both of the user's eyes or on tracking a single eye. However, such methods can determine the gaze point position inaccurately, which in turn may lower the quality of the image obtained by processing the to-be-processed image according to that gaze point position, affecting the user experience.
Summary
To solve the prior-art problem that the gaze point position is determined inaccurately, embodiments of the present application provide an image processing method and apparatus.
In a first aspect, an embodiment of the present application provides an image processing method, the method including:
acquiring a to-be-processed image corresponding to the visual field range of a target object, where the visual field range includes a binocular visual field coincidence range and a non-binocular visual field coincidence range;
performing image processing on the to-be-processed image based on a binocular gaze point within the binocular visual field coincidence range; and
performing image processing on the to-be-processed image based on a monocular gaze point within the non-binocular visual field coincidence range.
Optionally, performing image processing on the to-be-processed image based on the binocular gaze point includes:
determining multiple rendering regions of the to-be-processed image according to the position coordinates of the binocular gaze point, where different rendering regions correspond to different rendering degrees; and
performing image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions;
and/or,
performing image processing on the to-be-processed image based on the monocular gaze point includes:
determining multiple rendering regions of the to-be-processed image according to the position coordinates of the monocular gaze point, where different rendering regions correspond to different rendering degrees; and
performing image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions.
Optionally, the multiple rendering regions include at least two or more of the following:
a first rendering region, a second rendering region, a third rendering region, and a fourth rendering region;
where the first rendering region is a circular region taking the position coordinates of the gaze point as its origin and spanning a first preset viewing angle;
the second rendering region is the circular region of a second preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region;
the third rendering region is the circular region of a third preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region and the second rendering region; and
the fourth rendering region is the circular region of a fourth preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region, the second rendering region, and the third rendering region.
Optionally,
the first preset viewing angle is a 1.5-degree viewing angle;
the second preset viewing angle is a 35-degree viewing angle;
the third preset viewing angle is a 60-degree viewing angle; and
the fourth preset viewing angle is a 110-degree viewing angle.
Optionally, performing image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions includes:
performing 100% image rendering on the to-be-processed image corresponding to the first rendering region;
performing 50% image rendering on the to-be-processed image corresponding to the second rendering region;
performing 25% image rendering on the to-be-processed image corresponding to the third rendering region; and
performing 2% image rendering on the to-be-processed image corresponding to the fourth rendering region.
In a second aspect, an embodiment of the present application provides an image processing apparatus, the apparatus including:
an acquiring unit, configured to acquire a to-be-processed image corresponding to a target object within a visual field range, the visual field range including a binocular visual field coincidence range and a non-binocular visual field coincidence range;
a first processing unit, configured to perform image processing on the to-be-processed image based on a binocular gaze point within the binocular visual field coincidence range; and
a second processing unit, configured to perform image processing on the to-be-processed image based on a monocular gaze point within the non-binocular visual field coincidence range.
Optionally, performing image processing on the to-be-processed image based on the binocular gaze point includes:
determining multiple rendering regions of the to-be-processed image according to the position coordinates of the binocular gaze point, where different rendering regions correspond to different rendering degrees; and
performing image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions;
and/or,
performing image processing on the to-be-processed image based on the monocular gaze point includes:
determining multiple rendering regions of the to-be-processed image according to the position coordinates of the monocular gaze point, where different rendering regions correspond to different rendering degrees; and
performing image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions.
Optionally, the multiple rendering regions include at least two or more of the following:
a first rendering region, a second rendering region, a third rendering region, and a fourth rendering region;
where the first rendering region is a circular region taking the position coordinates of the gaze point as its origin and spanning a first preset viewing angle;
the second rendering region is the circular region of a second preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region;
the third rendering region is the circular region of a third preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region and the second rendering region; and
the fourth rendering region is the circular region of a fourth preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region, the second rendering region, and the third rendering region.
Optionally,
the first preset viewing angle is a 1.5-degree viewing angle;
the second preset viewing angle is a 35-degree viewing angle;
the third preset viewing angle is a 60-degree viewing angle; and
the fourth preset viewing angle is a 110-degree viewing angle.
Optionally, the rendering unit is configured to:
perform 100% image rendering on the to-be-processed image corresponding to the first rendering region;
perform 50% image rendering on the to-be-processed image corresponding to the second rendering region;
perform 25% image rendering on the to-be-processed image corresponding to the third rendering region; and
perform 2% image rendering on the to-be-processed image corresponding to the fourth rendering region.
Compared with the prior art, the embodiments of the present application have the following advantages:
The image processing method and apparatus provided by the embodiments of the present application include: acquiring a to-be-processed image corresponding to the visual field range of a target object, where the visual field range includes a binocular visual field coincidence range and a non-binocular visual field coincidence range; performing image processing on the to-be-processed image based on a binocular gaze point within the binocular visual field coincidence range; and performing image processing on the to-be-processed image based on a monocular gaze point within the non-binocular visual field coincidence range. It can thus be seen that, with the image processing method and apparatus provided by the embodiments of the present application, different gaze point position determination strategies are adopted in different visual field ranges, so that the determined gaze point position in each visual field range is relatively accurate. The to-be-processed image is further processed based on the determined gaze point, so that the quality of the processed image, and hence the user experience, can be improved.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in describing the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below are merely some embodiments recorded in the present application; those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the visual field range of a target object according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of multiple rendering regions according to an embodiment of the present application;
FIG. 5 is a structural block diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
The inventors found in research that, in virtual reality technology, traditional gaze point position determination methods are based on tracking both of the user's eyes or on tracking a single eye.
Specifically, if the gaze point position is determined by tracking both of the user's eyes, that is, by tracking the user's left and right eyes, a weight is set for each of the left eye and the right eye when determining the gaze point position; in general, the weight of the right eye is higher than that of the left eye. As a result, when the gaze point lies in the range of the left-eye visual field minus the binocular visual field coincidence range, the determined gaze point position is not accurate.
If the gaze point position is determined by tracking a single eye of the user, that is, by tracking the user's left eye or right eye, then when the gaze point position is determined by tracking the left eye and the gaze point lies in the range of the right-eye visual field minus the binocular visual field coincidence range, the determined gaze point position is not accurate; when the gaze point position is determined by tracking the right eye and the gaze point lies in the range of the left-eye visual field minus the binocular visual field coincidence range, the determined gaze point position is not accurate.
Further, this may lower the quality of the image obtained by processing the to-be-processed image according to that gaze point position, affecting the user experience.
In view of this, in the embodiments of the present application, a to-be-processed image corresponding to the visual field range of a target object is acquired, where the visual field range includes a binocular visual field coincidence range and a non-binocular visual field coincidence range; within the binocular visual field coincidence range, image processing is performed on the to-be-processed image based on a binocular gaze point; within the non-binocular visual field coincidence range, image processing is performed on the to-be-processed image based on a monocular gaze point. It can thus be seen that, with the image processing method and apparatus provided by the embodiments of the present application, different gaze point position determination strategies are adopted in different visual field ranges, so that the determined gaze point position in each visual field range is relatively accurate. The to-be-processed image is further processed based on the determined gaze point, so that the quality of the processed image, and hence the user experience, can be improved.
Method Embodiment
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in FIG. 1, the image processing method provided by this embodiment can be implemented through the following steps S101-S103.
S101: Acquire a to-be-processed image corresponding to a target object within a visual field range, the visual field range including a binocular visual field coincidence range and a non-binocular visual field coincidence range.
It should be noted that the target object mentioned in the embodiments of the present application may be a user using a virtual reality device.
It should be noted that, in the following description of the embodiments of the present application, "binocular" refers to the left and right eyes of the target object, and "monocular" refers to the left eye or the right eye of the target object.
It should be noted that the embodiments of the present invention do not specifically limit the to-be-processed image; the to-be-processed image may be any of various images generated using virtual environment technology.
It can be understood that, owing to human physiological characteristics, a person's visual field is limited; that is, the visual field range of the target object is limited. Images within the visual field range of the target object can be observed by the target user, whereas images outside the visual field range of the target object cannot.
It should be noted that the binocular visual field coincidence range mentioned in this embodiment refers to the visual field range that both the left eye and the right eye can observe. The non-binocular visual field coincidence range refers to the whole visual field range of the target object minus the binocular visual field coincidence range. This can be explained with reference to FIG. 2, which shows the visual field range of the target object: the non-shaded portion 201 is the binocular visual field coincidence range, and the shaded portions 202 and 203 are both non-binocular visual field coincidence ranges.
It should be noted that, in a specific implementation, acquiring the to-be-processed image may first acquire the image data of the to-be-processed image and then obtain the to-be-processed image from that image data.
S102: Within the binocular visual field coincidence range, process the to-be-processed image based on the binocular gaze point.
It should be noted that the gaze point mentioned in the embodiments of the present application refers to the point on an object at which the line of sight is aimed during visual perception.
It should be noted that, in the embodiments of the present application, the binocular gaze point may be acquired in advance; the embodiments of the present application do not specifically limit how the binocular gaze point is acquired. As an example, the binocular gaze point may be acquired using any one or more of an optical system, a micro-electro-mechanical system, a capacitance sensor, and a muscle current detector.
It can be understood that the binocular visual field coincidence range is the visual field range that both eyes can observe, that is, the range observable by both the left eye and the right eye. Therefore, within the binocular visual field coincidence range, the binocular gaze point can be considered accurate, and the to-be-processed image can be processed based on the binocular gaze point.
S103: Within the non-binocular visual field coincidence range, perform image processing on the to-be-processed image based on the monocular gaze point.
It should be noted that, similarly to the binocular gaze point, in the embodiments of the present application the monocular gaze point may be acquired in advance; the embodiments of the present application do not specifically limit how the monocular gaze point is acquired. As an example, the monocular gaze point may be acquired using any one or more of an optical system, a micro-electro-mechanical system, a capacitance sensor, and a muscle current detector.
It can be understood that the non-binocular visual field coincidence range is the whole visual field range minus the binocular visual field coincidence range.
It can be understood that, within the non-binocular visual field coincidence range, the binocular gaze point may be inaccurate while the monocular gaze point can be considered accurate; therefore, the to-be-processed image can be processed based on the monocular gaze point.
It can thus be seen that, with the image processing method provided by the embodiments of the present application, different gaze point position determination strategies are adopted in different visual field ranges, so that the determined gaze point position in each visual field range is relatively accurate. The to-be-processed image is further processed based on the determined gaze point, so that the quality of the processed image, and hence the user experience, can be improved.
It can be understood that the non-binocular visual field coincidence range includes a first visual field range and a second visual field range, where the first visual field range is the left-eye visual field range minus the binocular visual field coincidence range and the second visual field range is the right-eye visual field range minus the binocular visual field coincidence range. FIG. 2 is a schematic diagram of the visual field range of a target object according to an embodiment of the present application; as shown in FIG. 2, the non-binocular visual field coincidence range in FIG. 2 includes a first visual field range 202 and a second visual field range 203.
Correspondingly, in the embodiments of the present application, when the binocular gaze point of the target object is within the non-binocular visual field coincidence range, switching from the binocular tracking mode to the monocular tracking mode to obtain the monocular gaze point of the target object may, in a specific implementation, be as follows:
within the first visual field range, determining the position coordinates of the monocular gaze point in the to-be-processed image according to the position of the left eyeball; within the second visual field range, determining the position coordinates of the monocular gaze point in the to-be-processed image according to the position of the right eyeball.
It can be understood that the first visual field range 202 is a visual field range that the left eye can observe and the right eye cannot; therefore, within the first visual field range, determining the position coordinates of the monocular gaze point in the to-be-processed image according to the position of the left eyeball makes those coordinates relatively accurate.
The second visual field range 203 is a visual field range that the right eye can observe and the left eye cannot; therefore, within the second visual field range, the position coordinates of the monocular gaze point in the to-be-processed image are determined according to the position of the right eyeball, making those coordinates relatively accurate.
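As an illustrative sketch only (an editorial addition, not part of the original filing), the switching logic above amounts to: classify the gaze estimate by which eye's field of view contains it, keep the binocular estimate in the coincidence range, and fall back to the corresponding single-eye estimate elsewhere. All names below are hypothetical, and the per-eye visibility tests stand in for a calibrated eye-tracking pipeline:

```python
# Hedged sketch of the binocular/monocular gaze-source selection described
# above. left_fov/right_fov are hypothetical predicates reporting whether a
# point (in screen coordinates) is visible to that eye; left_gaze/right_gaze
# are single-eye gaze estimates from the tracker.

def select_gaze(binocular_gaze, left_gaze, right_gaze, left_fov, right_fov):
    in_left = left_fov(binocular_gaze)
    in_right = right_fov(binocular_gaze)
    if in_left and in_right:
        # Binocular visual field coincidence range (region 201): both eyes
        # observe this area, so the binocular gaze point is reliable.
        return binocular_gaze
    if in_left:
        # First visual field range (region 202): only the left eye sees it,
        # so switch to the gaze point derived from the left eyeball.
        return left_gaze
    # Second visual field range (region 203): only the right eye sees it.
    return right_gaze
```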
It should be noted that, in the embodiments of the present application, performing image processing on the to-be-processed image includes rendering the to-be-processed image according to the binocular gaze point or the monocular gaze point. FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of the present application; as shown in FIG. 3, this can be implemented through the following steps S301-S302.
S301: Determine multiple rendering regions of the to-be-processed image according to the position coordinates of the binocular gaze point or the monocular gaze point, where different rendering regions correspond to different rendering degrees.
It should be noted that the embodiments of the present application do not limit the position coordinates; they may be position coordinates in a pre-established three-dimensional spatial coordinate system, or position coordinates in another form, and may be determined according to the actual situation.
It should be noted that the embodiments of the present application do not limit the number of rendering regions; the number of rendering regions may be determined according to the actual situation.
In one possible implementation, the multiple rendering regions include at least any two or more of a first rendering region, a second rendering region, a third rendering region, and a fourth rendering region.
The first rendering region is a circular region taking the position coordinates of the gaze point as its origin and spanning a first preset viewing angle;
the second rendering region is the circular region of a second preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region;
the third rendering region is the circular region of a third preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region and the second rendering region;
the fourth rendering region is the circular region of a fourth preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region, the second rendering region, and the third rendering region.
Specifically, this can be understood with reference to FIG. 4. FIG. 4 is a schematic diagram of multiple rendering regions according to an embodiment of the present application.
In FIG. 4, the origin O is the gaze point; the circular region where 410 is located is the first rendering region, the annular region where 420 is located is the second rendering region, the annular region where 430 is located is the third rendering region, and the annular region where 440 is located is the fourth rendering region.
It should be noted that the embodiments of the present application do not specifically limit the first, second, third, and fourth preset viewing angles; they may be set according to the actual situation.
The inventors found in research that, in real life, when a person observes things, the central field of vision lies within a 1.5-degree viewing angle; that is, within 1.5 degrees of the gaze point, the observed object is the clearest. Within the 35-degree viewing angle of the gaze point, the color of the observed object can be perceived; that is, the observed object is still very clear. Within the 60-degree viewing angle of the gaze point, stereo vision can be produced; that is, the observed object is relatively clear. The 110-degree viewing angle around the gaze point is the maximum monocular visual field of the human eye; that is, at the 110-degree viewing angle the observed object lies in the boundary area of the visual field and is relatively blurred.
In view of this, the multiple rendering regions may be determined according to the characteristics of the visual field when a person observes things. In one possible implementation of the embodiments of the present application, the first preset viewing angle may be a 1.5-degree viewing angle, the second preset viewing angle may be a 35-degree viewing angle, the third preset viewing angle may be a 60-degree viewing angle, and the fourth preset viewing angle may be a 110-degree viewing angle.
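As a hedged sketch (an editorial illustration, not the original disclosure), the nested regions can be tested by comparing a pixel's angular eccentricity, that is, the angle between the gaze direction and the pixel's direction, against the preset viewing angles. Whether each preset angle denotes a half-angle (radius) or a full cone is not stated in the text; the sketch assumes the half-angle:

```python
# Hypothetical mapping from angular eccentricity (in degrees, measured from
# the fixation direction) to one of the four nested rendering regions.
# The thresholds follow the preset viewing angles named above, read here as
# half-angles (region radii); this reading is an assumption.

REGION_BOUNDS = ((1.5, 1), (35.0, 2), (60.0, 3), (110.0, 4))

def rendering_region(eccentricity_deg):
    for bound_deg, region in REGION_BOUNDS:
        if eccentricity_deg <= bound_deg:
            return region
    return None  # beyond the 110-degree maximum monocular field of view
```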
S302: Perform image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions.
It should be noted that the embodiments of the present application do not specifically limit the rendering degrees; they may be set according to the actual situation.
It can be understood that, in virtual reality technology, the goal is for the user to interact naturally with objects in the virtual world and thereby obtain through sight, hearing, touch, and so on the same sensations as in the real world. Hence, the virtual three-dimensional stereoscopic image presented to the user should match the user's visual experience in real life.
As described above, within the first rendering region what a person observes is the clearest, that is, the to-be-processed image observed by the target object is the clearest in the first rendering region; within the second rendering region it is very clear; within the third rendering region it is relatively clear; and within the fourth rendering region it is relatively blurred.
In view of this, in the embodiments of the present application, step S302 may, in a specific implementation, perform 100% image rendering on the to-be-processed image corresponding to the first rendering region, 50% image rendering on that corresponding to the second rendering region, 25% image rendering on that corresponding to the third rendering region, and 2% image rendering on that corresponding to the fourth rendering region. The rendered image thus matches the user's visual experience in real life, giving the user a good user experience.
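One plausible reading of the percentages, offered here only as an assumption since the text does not define the unit, is a per-region resolution or sample-rate fraction. A sketch building on the hypothetical rendering_region helper from the previous block (assumed to be in scope):

```python
# Per-region rendering degree, interpreting "X% image rendering" as a
# fraction of full rendering quality. This interpretation is an assumption,
# not something the filing specifies.
RENDER_DEGREE = {1: 1.00, 2: 0.50, 3: 0.25, 4: 0.02}

def render_degree(eccentricity_deg):
    region = rendering_region(eccentricity_deg)
    # Pixels beyond all regions fall outside the maximum monocular field
    # of view and need not be rendered at all.
    return RENDER_DEGREE.get(region, 0.0)
```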
As described above, the visual field range of the target object is limited: images within the visual field range of the target object can be observed by the target user, whereas images outside it cannot. In view of this, in one possible implementation of the embodiments of the present application, step S301 may, in a specific implementation, determine the multiple rendering regions of the to-be-processed image according to the position coordinates of the gaze point and the distance between the gaze point and the visual field boundary.
The visual field boundary mentioned here refers to the boundary of the to-be-processed image that the target object can observe.
It can be understood that in real life there are individual differences, that is, different people may have different visual field ranges and therefore different corresponding visual field boundaries. Thus, in one possible implementation of the embodiments of the present application, the visual field range of the target object may be acquired in advance to determine the visual field boundary. Specifically, the visual field range of the target object may be measured with a suitable measuring instrument to determine the visual field boundary; alternatively, the visual field range input by the target object may be received to determine the visual field boundary.
Determining the multiple rendering regions of the to-be-processed image according to the position coordinates of the gaze point and the distance between the gaze point and the visual field boundary can include at least the following cases.
First case: given the position coordinates of the gaze point, if the distance between the gaze point and the visual field boundary is greater than or equal to a first distance, the rendering regions of the to-be-processed image are determined as the first rendering region, the second rendering region, the third rendering region, and the fourth rendering region. The first distance refers to the radius of the circular region of the fourth preset viewing angle centered on the position coordinates of the gaze point.
It can be understood that, since the distance between the gaze point and the visual field boundary is greater than or equal to the first distance, the first, second, third, and fourth rendering regions are all within the visual field boundary; therefore, the rendering regions of the to-be-processed image are determined as the first, second, third, and fourth rendering regions.
Second case: given the position coordinates of the gaze point, if the distance between the gaze point and the visual field boundary is less than the first distance and greater than or equal to a second distance, the rendering regions of the to-be-processed image are determined as the first rendering region, the second rendering region, and the third rendering region. The second distance refers to the radius of the circular region of the third preset viewing angle centered on the position coordinates of the gaze point.
It can be understood that, since the distance between the gaze point and the visual field boundary is less than the first distance and greater than or equal to the second distance, the first, second, and third rendering regions are within the visual field boundary while the fourth rendering region is not; therefore, the rendering regions of the to-be-processed image are determined as the first, second, and third rendering regions.
Third case: given the position coordinates of the gaze point, if the distance between the gaze point and the visual field boundary is less than the second distance and greater than or equal to a third distance, the rendering regions of the to-be-processed image are determined as the first rendering region and the second rendering region. The third distance refers to the radius of the circular region of the second preset viewing angle centered on the position coordinates of the gaze point.
It can be understood that, since the distance between the gaze point and the visual field boundary is less than the second distance and greater than or equal to the third distance, the first and second rendering regions are within the visual field boundary while the third and fourth rendering regions are not; therefore, the rendering regions of the to-be-processed image are determined as the first rendering region and the second rendering region.
Fourth case: in particular, if the gaze point is very close to the visual field boundary, that is, given the position coordinates of the gaze point, the distance between the gaze point and the visual field boundary is less than the third distance, the rendering region of the to-be-processed image may be determined as the part of the first rendering region that lies within the visual field boundary.
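The four cases amount to comparing the gaze-to-boundary distance against the radii of the three outer regions. A hedged sketch, assuming the radii and the distance are already expressed in the same screen-space units (deriving a radius from a preset viewing angle would additionally require the display geometry):

```python
# Sketch of the case analysis above. first_d, second_d, third_d are the
# "first", "second", and "third distances": the radii of the circular
# regions at the fourth, third, and second preset viewing angles.

def regions_within_boundary(dist_to_boundary, first_d, second_d, third_d):
    if dist_to_boundary >= first_d:    # first case: all four regions fit
        return (1, 2, 3, 4)
    if dist_to_boundary >= second_d:   # second case: fourth region clipped
        return (1, 2, 3)
    if dist_to_boundary >= third_d:    # third case: only the two inner ones
        return (1, 2)
    # fourth case: keep only the part of the first region that lies inside
    # the visual field boundary
    return (1,)
```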
It can thus be seen that, with the image processing method provided by the embodiments of the present application, multiple different rendering regions can be determined according to the position coordinates of the gaze point of the target object, with different rendering degrees for different rendering regions, instead of rendering the entire virtual three-dimensional stereoscopic image within the target object's visual field at full quality. This improves rendering efficiency and reduces power consumption.
Apparatus Embodiment
Based on the image processing method provided by the above embodiments, an embodiment of the present application further provides an image processing apparatus; its working principle is described in detail below with reference to the accompanying drawings.
FIG. 5 is a structural block diagram of an image processing apparatus according to an embodiment of the present application. As shown in FIG. 5, the image processing apparatus 500 provided by this embodiment may include an acquiring unit 510, a first processing unit 520, and a second processing unit 530.
The acquiring unit 510 is configured to acquire a to-be-processed image corresponding to a target object within a visual field range, the visual field range including a binocular visual field coincidence range and a non-binocular visual field coincidence range;
the first processing unit is configured to perform image processing on the to-be-processed image based on a binocular gaze point within the binocular visual field coincidence range;
the second processing unit is configured to perform image processing on the to-be-processed image based on a monocular gaze point within the non-binocular visual field coincidence range.
Optionally, performing image processing on the to-be-processed image includes:
determining multiple rendering regions of the to-be-processed image according to the position coordinates of the binocular gaze point or the monocular gaze point, where different rendering regions correspond to different rendering degrees; and
performing image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions.
Optionally, the multiple rendering regions include at least two or more of the following:
a first rendering region, a second rendering region, a third rendering region, and a fourth rendering region;
where the first rendering region is a circular region taking the position coordinates of the gaze point as its origin and spanning a first preset viewing angle;
the second rendering region is the circular region of a second preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region;
the third rendering region is the circular region of a third preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region and the second rendering region;
the fourth rendering region is the circular region of a fourth preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region, the second rendering region, and the third rendering region.
Optionally,
the first preset viewing angle is a 1.5-degree viewing angle;
the second preset viewing angle is a 35-degree viewing angle;
the third preset viewing angle is a 60-degree viewing angle;
the fourth preset viewing angle is a 110-degree viewing angle.
Optionally, the rendering unit is configured to:
perform 100% image rendering on the to-be-processed image corresponding to the first rendering region;
perform 50% image rendering on the to-be-processed image corresponding to the second rendering region;
perform 25% image rendering on the to-be-processed image corresponding to the third rendering region;
perform 2% image rendering on the to-be-processed image corresponding to the fourth rendering region.
Since the gaze point position determination apparatus corresponds to the gaze point position determination method provided by the above embodiments, for the specific implementation of each unit of the apparatus, reference may be made to the description of the related content in the above method embodiments; details are not repeated here.
It can thus be seen that, with the image processing apparatus provided by the embodiments of the present application, different gaze point position determination strategies are adopted in different visual field ranges, so that the determined gaze point position in each visual field range is relatively accurate. The to-be-processed image is further processed based on the determined gaze point, so that the quality of the processed image, and hence the user experience, can be improved.
When elements of various embodiments of the present application are introduced, the articles "a", "an", "the", and "said" are intended to mean that there are one or more of the elements. The words "comprising", "including", and "having" are inclusive and mean that there may be other elements in addition to those listed.
It should be noted that those of ordinary skill in the art can understand that all or part of the flows in the above method embodiments may be completed by a computer program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the apparatus embodiment is described relatively simply because it is substantially similar to the method embodiment; for relevant parts, reference may be made to the description of the method embodiment. The apparatus embodiments described above are merely illustrative; the units and modules described as separate components may or may not be physically separate. In addition, some or all of the units and modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The above are only specific implementations of the present application. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present application, and these improvements and refinements shall also fall within the scope of protection of the present application.

Claims (12)

  1. An image processing method, the method comprising:
    acquiring a to-be-processed image corresponding to the visual field range of a target object, wherein the visual field range includes a binocular visual field coincidence range and a non-binocular visual field coincidence range;
    performing image processing on the to-be-processed image based on a binocular gaze point within the binocular visual field coincidence range; and
    performing image processing on the to-be-processed image based on a monocular gaze point within the non-binocular visual field coincidence range.
  2. The method according to claim 1, wherein performing image processing on the to-be-processed image based on the binocular gaze point comprises:
    determining multiple rendering regions of the to-be-processed image according to position coordinates of the binocular gaze point, wherein different rendering regions correspond to different rendering degrees; and
    performing image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions;
    and/or,
    performing image processing on the to-be-processed image based on the monocular gaze point comprises:
    determining multiple rendering regions of the to-be-processed image according to position coordinates of the monocular gaze point, wherein different rendering regions correspond to different rendering degrees; and
    performing image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions.
  3. The method according to claim 2, wherein the multiple rendering regions include at least two or more of the following:
    a first rendering region, a second rendering region, a third rendering region, and a fourth rendering region;
    wherein the first rendering region is a circular region taking the position coordinates of the gaze point as its origin and spanning a first preset viewing angle;
    the second rendering region is the circular region of a second preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region;
    the third rendering region is the circular region of a third preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region and the second rendering region; and
    the fourth rendering region is the circular region of a fourth preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region, the second rendering region, and the third rendering region.
  4. The method according to claim 3, wherein
    the first preset viewing angle is a 1.5-degree viewing angle;
    the second preset viewing angle is a 35-degree viewing angle;
    the third preset viewing angle is a 60-degree viewing angle; and
    the fourth preset viewing angle is a 110-degree viewing angle.
  5. The method according to claim 3 or 4, wherein performing image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions comprises:
    performing 100% image rendering on the to-be-processed image corresponding to the first rendering region;
    performing 50% image rendering on the to-be-processed image corresponding to the second rendering region;
    performing 25% image rendering on the to-be-processed image corresponding to the third rendering region; and
    performing 2% image rendering on the to-be-processed image corresponding to the fourth rendering region.
  6. An image processing apparatus, the apparatus comprising:
    an acquiring unit, configured to acquire a to-be-processed image corresponding to a target object within a visual field range, the visual field range including a binocular visual field coincidence range and a non-binocular visual field coincidence range;
    a first processing unit, configured to perform image processing on the to-be-processed image based on a binocular gaze point within the binocular visual field coincidence range; and
    a second processing unit, configured to perform image processing on the to-be-processed image based on a monocular gaze point within the non-binocular visual field coincidence range.
  7. The apparatus according to claim 6, wherein performing image processing on the to-be-processed image based on the binocular gaze point comprises:
    determining multiple rendering regions of the to-be-processed image according to position coordinates of the binocular gaze point, wherein different rendering regions correspond to different rendering degrees; and
    performing image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions;
    and/or,
    performing image processing on the to-be-processed image based on the monocular gaze point comprises:
    determining multiple rendering regions of the to-be-processed image according to position coordinates of the monocular gaze point, wherein different rendering regions correspond to different rendering degrees; and
    performing image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions.
  8. The apparatus according to claim 7, wherein the multiple rendering regions include at least two or more of the following:
    a first rendering region, a second rendering region, a third rendering region, and a fourth rendering region;
    wherein the first rendering region is a circular region taking the position coordinates of the gaze point as its origin and spanning a first preset viewing angle;
    the second rendering region is the circular region of a second preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region;
    the third rendering region is the circular region of a third preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region and the second rendering region; and
    the fourth rendering region is the circular region of a fourth preset viewing angle centered on the position coordinates of the gaze point, excluding the first rendering region, the second rendering region, and the third rendering region.
  9. The apparatus according to claim 8, wherein
    the first preset viewing angle is a 1.5-degree viewing angle;
    the second preset viewing angle is a 35-degree viewing angle;
    the third preset viewing angle is a 60-degree viewing angle; and
    the fourth preset viewing angle is a 110-degree viewing angle.
  10. The apparatus according to claim 8 or 9, wherein performing image rendering on the to-be-processed image according to the rendering degrees corresponding to the multiple rendering regions comprises:
    performing 100% image rendering on the to-be-processed image corresponding to the first rendering region;
    performing 50% image rendering on the to-be-processed image corresponding to the second rendering region;
    performing 25% image rendering on the to-be-processed image corresponding to the third rendering region; and
    performing 2% image rendering on the to-be-processed image corresponding to the fourth rendering region.
  11. A storage medium, the storage medium comprising a stored program, wherein, when the program runs, a device on which the storage medium is located is controlled to execute the image processing method according to any one of claims 1 to 5.
  12. An image processing device, comprising a memory and a processor, wherein
    the memory stores a computer program; and
    the processor is configured to execute the computer program stored in the memory, the computer program, when running, executing the image processing method according to any one of claims 1 to 5.
PCT/CN2019/077304 2018-08-01 2019-03-07 Image processing method and apparatus WO2020024593A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810864706.0A CN109087260A (zh) 2018-08-01 2018-08-01 一种图像处理方法及装置
CN201810864706.0 2018-08-01

Publications (1)

Publication Number Publication Date
WO2020024593A1 true WO2020024593A1 (zh) 2020-02-06

Family

ID=64831274

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/077304 WO2020024593A1 (zh) Image processing method and apparatus

Country Status (2)

Country Link
CN (1) CN109087260A (zh)
WO (1) WO2020024593A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087260A (zh) * 2018-08-01 2018-12-25 北京七鑫易维信息技术有限公司 Image processing method and apparatus
CN109901290B (zh) * 2019-04-24 2021-05-14 京东方科技集团股份有限公司 Method and apparatus for determining gaze region, and wearable device
CN110378914A (zh) * 2019-07-22 2019-10-25 北京七鑫易维信息技术有限公司 Rendering method, apparatus, system and display device based on gaze point information
CN112465939B (zh) * 2020-11-25 2023-01-24 上海哔哩哔哩科技有限公司 Panoramic video rendering method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10298911B2 (en) * 2014-03-31 2019-05-21 Empire Technology Development Llc Visualization of spatial and other relationships
US10372205B2 (en) * 2016-03-31 2019-08-06 Sony Interactive Entertainment Inc. Reducing rendering computation and power consumption by detecting saccades and blinks
CN106327584B (zh) * 2016-08-24 2020-08-07 深圳市瑞云科技有限公司 Image processing method and device for virtual reality equipment
CN106570923A (zh) * 2016-09-27 2017-04-19 乐视控股(北京)有限公司 Picture rendering method and device
CN108287678B (zh) * 2018-03-06 2020-12-29 京东方科技集团股份有限公司 Virtual reality-based image processing method, apparatus, device and medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105282532A (zh) * 2014-06-03 2016-01-27 天津拓视科技有限公司 3D display method and device
US20160328884A1 (en) * 2014-11-27 2016-11-10 Magic Leap, Inc. Virtual/augmented reality system having dynamic region resolution
CN105425399A (zh) * 2016-01-15 2016-03-23 中意工业设计(湖南)有限责任公司 Method for presenting a head-mounted device user interface according to the visual characteristics of the human eye
CN106485790A (zh) * 2016-09-30 2017-03-08 珠海市魅族科技有限公司 Picture display method and device
CN109087260A (zh) * 2018-08-01 2018-12-25 北京七鑫易维信息技术有限公司 Image processing method and apparatus

Also Published As

Publication number Publication date
CN109087260A (zh) 2018-12-25

Similar Documents

Publication Publication Date Title
WO2020024593A1 (zh) Image processing method and apparatus
JP7094266B2 (ja) Single depth tracked accommodation-vergence solutions
US9807361B2 (en) Three-dimensional display device, three-dimensional image processing device, and three-dimensional display method
JP2024028736A (ja) Depth plane selection for multi-depth plane display systems by user categorization
US20140375531A1 (en) Method of providing to the user an image from the screen of the smartphone or tablet at a wide angle of view, and a method of providing to the user 3D sound in virtual reality
Leroy et al. Visual fatigue reduction for immersive stereoscopic displays by disparity, content, and focus-point adapted blur
Weier et al. Predicting the gaze depth in head-mounted displays using multiple feature regression
KR20160094190A (ko) Gaze tracking apparatus and method
KR101788452B1 (ko) Apparatus and method for playing content using gaze recognition
EP3001681B1 (en) Device, method and computer program for 3d rendering
JP2023113632A (ja) Holographic real-space refraction system
JP2022511571A (ja) Dynamic convergence adjustment for augmented reality headsets
Leroy et al. Real-time adaptive blur for reducing eye strain in stereoscopic displays
US20180165887A1 (en) Information processing method and program for executing the information processing method on a computer
CN112099622B (zh) Gaze tracking method and apparatus
WO2019136588A1 (zh) Cloud-computing-based calibration method, apparatus, electronic device, and computer program product
Wang et al. Control with vergence eye movement in augmented reality see-through vision
Wibirama et al. 3D gaze tracking on stereoscopic display using optimized geometric method
JP2023515205A (ja) Display method, apparatus, terminal device, and computer program
US11119571B2 (en) Method and device for displaying virtual image
Hussain et al. Modelling foveated depth-of-field blur for improving depth perception in virtual reality
CN111479104A (zh) Method for calculating a gaze convergence distance
WO2015034453A1 (en) Providing a wide angle view image
CN108471939B (zh) Panum's area measurement method and apparatus, and wearable display device
CN109031667A (zh) Method for locating the lateral boundary of the image display area of virtual reality glasses

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19843724

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.05.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19843724

Country of ref document: EP

Kind code of ref document: A1