WO2022007247A1 - Head-mounted device, rendering method therefor, and storage medium - Google Patents

Head-mounted device, rendering method therefor, and storage medium

Info

Publication number
WO2022007247A1
WO2022007247A1 (application PCT/CN2020/123467)
Authority
WO
WIPO (PCT)
Prior art keywords
eye
information
user
auxiliary
head
Prior art date
Application number
PCT/CN2020/123467
Other languages
English (en)
French (fr)
Inventor
王程龙 (Wang Chenglong)
Original Assignee
歌尔股份有限公司 (Goertek Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 歌尔股份有限公司 (Goertek Inc.)
Publication of WO2022007247A1 publication Critical patent/WO2022007247A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • the present application relates to the technical field of smart device interaction, and in particular, to a head-mounted device, a rendering method thereof, and a storage medium.
  • a head-mounted display device is a wearable virtual display product.
  • the technical principles of current head-mounted display devices are broadly divided into virtual reality (VR) display, augmented reality (AR) display, mixed reality (MR) display, and extended reality (XR) display.
  • in the prior art, the rendering method for the area displayed by the head-mounted device mainly adopts a unified, fixed resource allocation.
  • during viewing, the user's gaze range is limited; uniformly rendering the entire area wastes resources and increases the rendering burden on hardware resources.
  • where hardware rendering capability is limited, this causes blur and stutter in the images the user views in the display area, reducing the user's viewing experience.
  • the embodiments of the present application aim to solve the problem that the current resource configuration for rendering the display area of the head-mounted device is simplistic, which, when hardware resources are limited, causes problems such as blur and stutter when users view images in the display area.
  • one aspect of the present application provides a rendering method for a head-mounted device, the image rendering method including the following steps: acquiring the user identity information of the user currently using the head-mounted device; determining the dominant eye information and auxiliary eye information of the current user according to the user identity information; determining the user's main eye focus area and auxiliary eye focus area according to that information; and rendering the main eye focus area and the auxiliary eye focus area using different resource configurations.
  • the step of rendering the focal area of the main eye and the focal area of the auxiliary eye by using different resource configurations includes:
  • the primary eye focal area and the auxiliary eye focal area are rendered using different resource configurations according to the first missing ratio and the second missing ratio.
  • the step of rendering the main eye focal area and the auxiliary eye focal area by using different resource configurations according to the first missing ratio and the second missing ratio includes:
  • the primary eye focal area is rendered according to the first resource allocation ratio
  • the secondary eye focal area is rendered according to the second resource allocation ratio
  • the step of determining the dominant eye information and auxiliary eye information of the user currently using the head-mounted device according to the user identity information includes:
  • the data information stored in the database includes a number of pre-stored user identity information and the main eye information and auxiliary eye information associated with the pre-stored user identity information;
  • a message for obtaining the dominant eye information and auxiliary eye information corresponding to the user identity information is sent to the server, and the user's dominant eye information and auxiliary eye information are determined according to the returned information.
  • the step of determining the user's dominant eye information and auxiliary eye information according to the returned information includes:
  • the test operation of the dominant eye and the auxiliary eye is performed, and the user's dominant eye information and auxiliary eye information are determined according to the test operation result.
  • the step of performing the test operation of the dominant eye and the auxiliary eye, and determining the user's dominant eye information and the auxiliary eye information according to the test operation result includes:
  • the dominant eye information and the auxiliary eye information of the current user are determined according to the first missing ratio and the second missing ratio.
  • the step of determining the dominant eye information and auxiliary eye information of the current user according to the first missing ratio and the second missing ratio includes:
  • if the first missing ratio is smaller than the second missing ratio, determining that the current user's right eye is the dominant eye and the left eye is the auxiliary eye;
  • if the first missing ratio is greater than the second missing ratio, determining that the current user's left eye is the dominant eye and the right eye is the auxiliary eye;
  • if the first missing ratio is equal to the second missing ratio, determining that both the left eye and the right eye of the current user are dominant eyes.
  • the step of obtaining the user identity information of the currently used head-mounted device includes:
  • the identity information associated with the iris image is acquired as the user identity information.
  • another aspect of the present application further provides a head-mounted device, the head-mounted device comprising:
  • a first determining module for determining the dominant eye information and auxiliary eye information of the current user according to the user identity information
  • a second determining module configured to determine the user's main eye focus area and auxiliary eye focus area according to the current user's main eye information and auxiliary eye information;
  • a rendering module for rendering the main eye focus area and the auxiliary eye focus area using different resource configurations.
  • another aspect of the present application further provides a computer-readable storage medium on which an image rendering program is stored, and when the image rendering program is executed by a processor, implements any one of the above steps of the method.
  • the present application proposes a rendering method for a head-mounted device.
  • the method automatically detects the user's identity information, obtains the user's dominant eye information and auxiliary eye information from that identity information, then determines the main eye focus area and the auxiliary eye focus area when the user is viewing with the head-mounted device, and arranges different resource configurations to render the user's main eye focus area and auxiliary eye focus area.
  • in this way, hardware resources are reasonably allocated to the areas that need focused rendering, improving the smoothness of rendering the main eye focus area and the auxiliary eye focus area, mitigating image blur and stutter, and improving the user's viewing experience.
  • FIG. 1 is a schematic structural diagram of a device involved in an embodiment of the present application
  • FIG. 2 is a schematic flowchart of an embodiment of a rendering method for a head-mounted device of the present application
  • FIG. 3 is a schematic diagram of the focal area of the main eye and the focal area of the auxiliary eye in the rendering method of the head-mounted device of the application;
  • FIG. 4 is a schematic flowchart of another embodiment of a rendering method for a head-mounted device of the present application
  • FIG. 5 is a schematic flowchart of another embodiment of a rendering method for a head-mounted device of the present application
  • FIG. 6 is a schematic flowchart of another embodiment of a rendering method for a head-mounted device of the present application.
  • FIG. 7 is a schematic diagram of modules of a rendering method for a head-mounted device of the present application.
  • the main solution of the embodiments of the present application is: acquiring the user identity information of the user currently using the head-mounted device; determining the dominant eye information and auxiliary eye information of the current user according to the user identity information; determining the user's main eye focus area and auxiliary eye focus area according to the dominant eye information and the auxiliary eye information; and rendering the main eye focus area and the auxiliary eye focus area using different resource configurations.
  • the present application provides the above solution, which aims to improve the rationality of resource allocation when the hardware device renders the display area.
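The four-step flow above (acquire identity, look up eye information, determine focus areas, render with different resource configurations) can be sketched as a pipeline of pluggable stages. The patent does not specify an implementation; every name below is illustrative, and each stage stands in for the hardware-specific operation (iris camera, database, eye tracker, GPU) described later in the text.

```python
from dataclasses import dataclass

@dataclass
class EyeInfo:
    dominant: str          # "left", "right", or "both"
    first_missing: float   # missing ratio, right eye vs. both eyes
    second_missing: float  # missing ratio, left eye vs. both eyes

def render_pipeline(identify_user, lookup_eye_info, focus_areas, render):
    """Illustrative sketch of steps S10-S40; all stage names are hypothetical."""
    user_id = identify_user()                  # S10: e.g. iris or face recognition
    eye_info = lookup_eye_info(user_id)        # S20: local database or server query
    main_roi, aux_roi = focus_areas(eye_info)  # S30: per-eye focus areas (ROIs)
    return render(main_roi, aux_roi, eye_info) # S40: differently weighted rendering
```

Passing the stages in as callables simply makes the ordering of S10-S40 explicit; a real device would wire these to its camera, storage, and GPU drivers.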
  • An embodiment of the present application provides a head-mounted device, where the head-mounted device includes a display part, a support part, and a control circuit.
  • the display part is used for displaying image information
  • the support part supports the device on the user during use so that the display part is located within the user's visual range
  • the control circuit is used to control the head-mounted device to perform control operations such as rendering.
  • FIG. 1 is a schematic diagram of the terminal structure of the hardware operating environment of the device involved in the solution of the embodiment of the present application.
  • the terminal may include: a processor 1001 , such as a CPU, a network interface 1004 , a user interface 1003 , a memory 1005 , and a communication bus 1002 .
  • the communication bus 1002 is used to realize the connection and communication between these components.
  • the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may include a standard wired interface and a wireless interface (eg, a WI-FI interface).
  • the memory 1005 may be high-speed RAM memory, or may be non-volatile memory, such as disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001 .
  • the terminal may further include a camera, a fingerprint reader, a voiceprint reader, an iris reader, and the like.
  • the terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a temperature sensor, etc., which will not be repeated here.
  • the terminal structure shown in FIG. 1 does not constitute a limitation on the terminal device, which may include more or fewer components than shown, combine some components, or arrange the components differently.
  • the memory 1005 as a computer-readable storage medium may include an operating system, a network communication module, a user interface module, and a rendering program of the head-mounted device.
  • the present application also provides a rendering method for a head-mounted device.
  • the rendering method of the head-mounted device includes:
  • Step S10 obtaining the user identity information of the user currently using the head-mounted device
  • when the head-mounted device is turned on, the camera device of the head-mounted device is activated to collect the user's face information; the face information is then compared with the face information stored in the database of the head-mounted device to determine the identity of the user currently using the device.
  • the user identity information is information that uniquely identifies the identity of the user currently using the head-mounted device.
  • the step of obtaining the user identity information of the currently used head-mounted device includes:
  • Step S11 acquiring the user's binocular iris images
  • Step S12 matching the iris image of both eyes with the iris image in the database
  • Step S13 if the matching is successful, obtain the user identity information associated with the iris image as the user identity information of the currently used head-mounted device.
  • while the user wears the head-mounted device, iris images of both of the user's eyes can be collected.
  • specifically, an iris acquisition camera can be located on the side of the display part of the head-mounted device. When the user is using the device, the iris acquisition camera is controlled to acquire the user's eye image, the iris information is extracted from the eye image and passed to the database, and the binocular iris image is matched against the iris images stored in the database. If the matching succeeds, the user identity information associated with the iris image is obtained as the identity information of the user currently using the head-mounted device.
  • the database stores user identity information, together with data such as the dominant eye information and auxiliary eye information stored in correspondence with that identity information.
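The database lookup described above maps a matched identity to the eye data stored alongside it. The record layout and identifiers below are illustrative assumptions, not from the patent; a dictionary stands in for the device's local database.

```python
# Hypothetical local database: identity -> per-user eye data stored with it.
# Field names and values are illustrative only.
USER_DB = {
    "user-001": {
        "dominant_eye": "right",
        "first_missing": 1 / 5,   # right eye vs. both eyes
        "second_missing": 1 / 4,  # left eye vs. both eyes
    },
}

def lookup_eye_info(user_id):
    """Return the stored eye record for a matched identity, else None.

    A None result corresponds to the 'not in local database' branch:
    the device would then query the server or run the eye test.
    """
    return USER_DB.get(user_id)
```

A real device would replace the dictionary with persistent storage keyed by whatever identity token the iris (or face, voiceprint, fingerprint) matcher produces.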
  • the method of obtaining the identity information of the user currently using the head-mounted device in the present application can also be obtained through voiceprint recognition, fingerprint recognition, etc., and is not limited to the above-mentioned identification methods.
  • Step S20 determining the dominant eye information and auxiliary eye information of the current user according to the user identity information
  • the head-mounted device sends the obtained user identity information to the processor, and the processor compares it with the user identity information saved in the database in memory; if user identity information in the database matches the current user identity information, the dominant eye information and auxiliary eye information stored in correspondence with that identity information are acquired.
  • the dominant eye information and the auxiliary eye information are determined according to the user's eye-use habits.
  • Step S30 determining the user's main eye focus area and auxiliary eye focus area according to the current user's main eye information and auxiliary eye information;
  • once the user's dominant eye information and auxiliary eye information are acquired, the user's main eye focus area and auxiliary eye focus area in the display interface can be determined.
  • the photoreceptor cells in the retina convert incoming light into signals transmitted by the optic nerve to the brain, where the middle of the retina is the eye's sharpest center of vision and where most colors are perceived.
  • the focal region is the ROI (region of interest), as shown in FIG. 3.
  • the focus area is the main area for users to watch, and a large proportion of hardware resources should be arranged for rendering to ensure that the focus area can be rendered normally.
  • the rendering includes adjusting the color, resolution, pixels, and light and shadow effects of the currently playing image.
  • the area where the display area of the head mounted device falls in the middle of the retina of the user's main eye is called the focus area of the main eye; the area that falls in the middle of the retina of the user's auxiliary eye is called the focus area of the auxiliary eye.
  • the focus area will change with the change of the plot.
  • eye tracking technology can be used to track changes in the user's line of sight in real time, obtain the main eye focus area and the auxiliary eye focus area promptly, and allocate sufficient hardware resources to them, achieving timely rendering and improving the user's experience.
  • Step S40 using different resource configurations to render the focal area of the main eye and the focal area of the auxiliary eye.
  • the hardware resources of the head-mounted device consume a lot of computing power and time when calculating new pixels and displaying those pixels on the display. For example, when the display has 5 million pixels, if every displayed pixel needs to be rendered, the hardware resources such as the processor will have a heavy burden when rendering the display area.
  • the resource configuration is calculated from the missing ratio of the test image observed by the user's dominant eye compared to the test image observed by both eyes, and the missing ratio of the test image observed by the auxiliary eye compared to the test image observed by both eyes.
  • the user's main eye information and auxiliary eye information are determined, and then the user's main eye focus area and auxiliary eye focus area when viewing an image are determined.
  • configuring the hardware resources required for normal rendering of the main eye focus area and the auxiliary eye focus area according to the user's personal eye-use habits improves the accuracy with which the head-mounted device renders the image rendering area, and allows hardware resources to be allocated reasonably between the main eye focus area and the auxiliary eye focus area, making rendering smoother.
  • FIG. 4 is another embodiment of the present application.
  • the step of rendering the focal area of the main eye and the focal area of the auxiliary eye by using different resource configurations includes:
  • Step S41 obtaining the first missing ratio and the second missing ratio
  • Step S42 using different resource configurations to render the main eye focal area and the auxiliary eye focal area according to the first missing ratio and the second missing ratio.
  • the first missing ratio is the missing ratio of the user viewing the test image with the right eye compared to viewing the test image with both eyes;
  • the second missing ratio is the missing ratio of the user viewing the test image with the left eye compared to viewing the test image with both eyes.
  • the resource configuration of the main eye focal area and the auxiliary eye focal area of the display area of the rendering head-mounted device is determined according to the first missing ratio and the second missing ratio.
  • the step of rendering the focal area of the main eye and the focal area of the auxiliary eye by using different resource configurations according to the first missing ratio and the second missing ratio includes:
  • Step S421 calculating a first resource allocation ratio and a second resource allocation ratio according to the first missing ratio and the second missing ratio;
  • Step S422 Render the main eye focal area according to the first resource allocation ratio, and render the auxiliary eye focal area according to the second resource allocation ratio.
  • the dominant eye information and auxiliary eye information, the first missing ratio, and the second missing ratio associated with the user identity information are further obtained.
  • the main eye focus area and auxiliary eye focus area of the user wearing the head-mounted device are determined from the dominant eye information and auxiliary eye information, and the first resource allocation ratio for the main eye focus area and the second resource allocation ratio for the auxiliary eye focus area are obtained from the first missing ratio and the second missing ratio.
  • the missing ratio of the user viewing the test image with the right eye compared to viewing it with both eyes is the first missing ratio; the missing ratio of the user viewing the test image with the left eye compared to viewing it with both eyes is the second missing ratio. Comparing the magnitudes of the two, the missing ratio of the dominant eye (the smaller one) is denoted α, and the missing ratio of the auxiliary eye (the larger one) is denoted β.
  • for example, if the first missing ratio is 1/5 and the second missing ratio is 1/4, and the total allocation of GPU resources is M, then the resource configuration required by the main eye focus area (the first resource configuration) is 5/9*M, and the resource configuration required by the auxiliary eye focus area (the second resource configuration) is 4/9*M. Hardware resources are allocated according to these configurations to render the main eye focus area and the auxiliary eye focus area respectively, ensuring the rendered areas stay smooth without affecting the user's viewing.
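The patent gives only the single worked example above (missing ratios 1/5 and 1/4 yielding shares 5/9*M and 4/9*M) and does not state a general formula. That example is consistent with weighting each focus area by the reciprocal of its eye's missing ratio, which the sketch below assumes; treat the formula as an inference, not the patent's stated method.

```python
def resource_allocation(first_missing, second_missing, total=1.0):
    """Split total GPU resources between the two focus areas.

    Assumption (inferred from the worked example, not stated generally):
    each area's share is proportional to the reciprocal of its eye's
    missing ratio, so the eye that misses less gets more resources.
    """
    w1, w2 = 1.0 / first_missing, 1.0 / second_missing
    return total * w1 / (w1 + w2), total * w2 / (w1 + w2)

# Reproduces the example: 1/5 and 1/4 -> 5/9 and 4/9 of M.
main_share, aux_share = resource_allocation(1 / 5, 1 / 4)
```

The two return values follow the input order (right eye first, left eye second), matching how the first and second missing ratios are defined.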
  • FIG. 5 is another embodiment of the present application.
  • the step of determining the dominant eye information and auxiliary eye information of the user currently using the head-mounted device according to the user identity information includes:
  • Step S21 judging whether the user identity information exists in the database, and the data information stored in the database includes a number of pre-stored user identity information and the main eye information and auxiliary eye information associated with the pre-stored user identity information;
  • Step S22 if there is, acquiring the dominant eye information and auxiliary eye information corresponding to the user identity information in the database;
  • Step S23 if it does not exist, send to the server a message for obtaining the dominant eye information and auxiliary eye information corresponding to the user identity information, and determine the user's dominant eye information and auxiliary eye information according to the returned information .
  • by comparing the user identity information of the user currently using the head-mounted device with the pre-stored user identity information in the database, it is determined whether the user identity information exists in the local database of the head-mounted device; if so, the corresponding dominant eye information and auxiliary eye information are obtained according to the user identity information, the main eye focus area and auxiliary eye focus area in the display area viewed by the user are then determined, and resource allocation for the main eye focus area and auxiliary eye focus area is reasonably arranged to perform the rendering operation.
  • the main eye focal area and the auxiliary eye focal area are part of the area currently displayed by the head-mounted device.
  • the step of determining the user's dominant eye information and auxiliary eye information according to the returned information includes:
  • Step S231 if there is dominant eye information and auxiliary eye information in the returned information, determine the user's dominant eye information and auxiliary eye information from the returned information;
  • Step S232 if there is no dominant eye information and auxiliary eye information in the returned information, perform a test operation of the dominant eye and auxiliary eye, and determine the user's dominant eye information and auxiliary eye according to the test operation result. information.
  • the information of the user currently using the head-mounted device can be sent to the server to check whether corresponding user identity information exists on the server side.
  • the server side is connected to a plurality of head mounted devices.
  • the user can store the data of the head-mounted device on the server, so that when the data stored in the currently used head-mounted device is cleared, or when the user uses another head-mounted device, the user can access the server of the head-mounted device by logging in to a personal center (the head-mounted device can also be set to access it automatically) and download the corresponding personal data to the device.
  • the acquired user information can be sent to the server to check whether the user identity information exists, so as to facilitate the acquisition of the user's identity information.
  • FIG. 6 is another embodiment of the application, the steps of performing the test operation of the dominant eye and the auxiliary eye, and determining the main eye information and the auxiliary eye information of the user according to the test operation result, include:
  • Step S2321 displaying a test image for performing the test operation on the display interface of the head-mounted device
  • Step S2322 successively shrinking the test image until the user can see the outer edge of the test image
  • Step S2323 obtaining the missing ratio of the test image viewed by the user's right eye compared to the test image viewed by both eyes, denoted the first missing ratio, and the missing ratio of the test image viewed by the user's left eye compared to the test image viewed by both eyes, denoted the second missing ratio;
  • Step S2324 Determine the dominant eye information and the auxiliary eye information of the current user according to the first missing ratio and the second missing ratio.
  • if the server connected to the head-mounted device does not have the user identity information of the current user, the current user is considered to be using the head-mounted device for the first time, and a test determining the dominant eye information and auxiliary eye information needs to be performed for the current user.
  • the test process is: first start the camera or iris device of the head-mounted device to collect the user identity information for the test, then insert a test image for the test operation into the display interface of the head-mounted device and shrink it step by step until both of the user's eyes can see the outer edge of the test image at the same time; during this process, the head-mounted device can be controlled to issue a voice prompt asking the user whether to stop shrinking the displayed test image.
  • the user is then prompted by voice to close the left eye, and the missing ratio of viewing the image with the right eye compared to viewing the test image with both eyes is obtained and recorded as the first missing ratio; the user then closes the right eye and observes the test image with the left eye, and the missing ratio compared to viewing the image with both eyes is recorded as the second missing ratio. The missing ratio information is stored in correspondence with the user identity information.
  • the test image can be a brand logo image added in the middle of the LCD screen of the head mounted device.
  • the missing ratio of the logo image viewed by the user with the right eye compared to the logo image viewed with both eyes is recorded and called the first missing ratio; the missing ratio of the logo image viewed with the left eye compared to the logo image viewed with both eyes is called the second missing ratio.
  • the step of determining the user's dominant eye information and auxiliary eye information according to the missing ratio includes:
  • Step S2325 if the first missing ratio is smaller than the second missing ratio, determine that the current user's right eye is the dominant eye and the left eye is the auxiliary eye;
  • Step S2326 if the first missing ratio is greater than the second missing ratio, determine that the left eye of the current user is the dominant eye and the right eye is the auxiliary eye;
  • Step S2327 if the first missing ratio is equal to the second missing ratio, determine that both the left eye and the right eye of the current user are dominant eyes.
  • after receiving the first missing ratio and the second missing ratio input by the user, the processor compares their magnitudes: when the first missing ratio is smaller than the second, the user's right eye is determined to be the dominant eye and the left eye the auxiliary eye; when the first missing ratio is greater than the second, the left eye is determined to be the dominant eye and the right eye the auxiliary eye; when the two are equal, both the left eye and the right eye of the user are determined to be dominant eyes.
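The three-way comparison in steps S2325-S2327 can be written directly; the function below is a sketch of that decision, with the function name and string labels chosen for illustration.

```python
def classify_eyes(first_missing, second_missing):
    """Return (dominant eye, auxiliary eye) per steps S2325-S2327.

    first_missing:  missing ratio, right eye vs. both eyes
    second_missing: missing ratio, left eye vs. both eyes
    """
    if first_missing < second_missing:
        return ("right", "left")   # S2325: right eye dominant
    if first_missing > second_missing:
        return ("left", "right")   # S2326: left eye dominant
    return ("both", "both")        # S2327: both eyes dominant
```

Note that the eye with the *smaller* missing ratio is the dominant one, consistent with the α/β notation used earlier, where α is the smaller ratio.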
  • by determining the identity information of the user currently using the head-mounted device and acquiring that user's dominant eye information and auxiliary eye information, the main eye focus area and auxiliary eye focus area when the user uses the head-mounted device are determined, and resources are then allocated to render the main eye focus area and the auxiliary eye focus area.
  • the present application also provides a head-mounted device, the head-mounted device comprising:
  • a first determining module for determining the dominant eye information and auxiliary eye information of the current user according to the user identity information
  • a second determining module configured to determine the user's main eye focus area and auxiliary eye focus area according to the current user's main eye information and auxiliary eye information;
  • a rendering module for rendering the main eye focus area and the auxiliary eye focus area using different resource configurations.
  • the present application also provides a computer-readable storage medium storing a rendering program of a head-mounted device; when the rendering program of the head-mounted device is executed by a processor, the above rendering method of the head-mounted device is implemented.
  • a software module can reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)

Abstract

A rendering method for a head-mounted device, the method comprising: acquiring the identity information of the user currently using the head-mounted device (S10); determining the dominant-eye information and auxiliary-eye information of the current user according to the user identity information (S20); determining the user's dominant-eye focus area and auxiliary-eye focus area according to the current user's dominant-eye information and auxiliary-eye information (S30); and rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations (S40). The method aims to allocate resources for rendering the dominant-eye focus area and the auxiliary-eye focus area according to the user's dominant-eye and auxiliary-eye information, thereby improving rendering smoothness.

Description

Head-mounted device, rendering method therefor, and storage medium
This application claims priority to Chinese patent application No. 202010660813.9, entitled "Head-mounted device, rendering method therefor, and storage medium", filed with the China Patent Office on July 10, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of smart-device interaction, and in particular to a head-mounted device, a rendering method therefor, and a storage medium.
Background
A head-mounted display device is a wearable virtual display product. The technical principles of current head-mounted display devices can be roughly divided into Virtual Reality (VR) display, Augmented Reality (AR) display, Mixed Reality (MR) display, and Extended Reality (XR) display.
In the prior art, the display area of a head-mounted device is mainly rendered with a uniform, fixed resource allocation. During viewing, the user's gaze range is limited, so rendering the entire area uniformly in this way wastes resources and increases the rendering burden on the hardware. Moreover, when hardware rendering capability is limited, the images the user views in the display area may appear blurry or stutter, degrading the user's viewing experience.
Summary
Embodiments of the present application provide a rendering method for a head-mounted device, a head-mounted device, and a storage medium, aiming to solve the problem that current rendering of the display area of a head-mounted device uses a simplistic resource configuration which, under limited hardware resources, causes blurring and stuttering when the user views images in the display area.
To achieve the above objective, one aspect of the present application provides a rendering method for a head-mounted device, the image rendering method comprising the following steps:
acquiring the identity information of the user currently using the head-mounted device;
determining the dominant-eye information and auxiliary-eye information of the current user according to the user identity information;
determining the user's dominant-eye focus area and auxiliary-eye focus area according to the current user's dominant-eye information and auxiliary-eye information;
rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations.
Optionally, the step of rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations comprises:
acquiring a first missing proportion and a second missing proportion;
rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations according to the first missing proportion and the second missing proportion.
Optionally, the step of rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations according to the first missing proportion and the second missing proportion comprises:
calculating a first resource configuration proportion and a second resource configuration proportion according to the first missing proportion and the second missing proportion;
rendering the dominant-eye focus area according to the first resource configuration proportion, and rendering the auxiliary-eye focus area according to the second resource configuration proportion.
Optionally, the step of determining the dominant-eye information and auxiliary-eye information of the user currently using the head-mounted device according to the user identity information comprises:
determining whether the user identity information exists in a database, the data stored in the database including several pieces of pre-stored user identity information and the dominant-eye information and auxiliary-eye information associated with the pre-stored user identity information;
if it exists, acquiring the dominant-eye information and auxiliary-eye information corresponding to the user identity information from the database;
if it does not exist, sending to a server a message requesting the dominant-eye information and auxiliary-eye information corresponding to the user identity information, and determining the user's dominant-eye information and auxiliary-eye information from the returned information.
Optionally, the step of determining the user's dominant-eye information and auxiliary-eye information from the returned information comprises:
if the returned information contains dominant-eye information and auxiliary-eye information, determining the user's dominant-eye information and auxiliary-eye information from the returned information;
if the returned information does not contain dominant-eye information and auxiliary-eye information, performing a dominant-eye and auxiliary-eye test operation, and determining the user's dominant-eye information and auxiliary-eye information according to the result of the test operation.
Optionally, the step of performing a dominant-eye and auxiliary-eye test operation and determining the user's dominant-eye information and auxiliary-eye information according to the result of the test operation comprises:
displaying a test image for performing the test operation on the display interface of the head-mounted device;
progressively shrinking the test image until the user can see the outer edge of the test image;
acquiring the proportion of the test image missing when the user views it with the right eye compared with viewing it with both eyes, recorded as the first missing proportion, and acquiring the proportion of the test image missing when the user views it with the left eye compared with viewing it with both eyes, recorded as the second missing proportion;
determining the current user's dominant-eye information and auxiliary-eye information according to the first missing proportion and the second missing proportion.
Optionally, the step of determining the current user's dominant-eye information and auxiliary-eye information according to the first missing proportion and the second missing proportion comprises:
if the first missing proportion is smaller than the second missing proportion, determining that the current user's right eye is the dominant eye and the left eye is the auxiliary eye;
if the first missing proportion is larger than the second missing proportion, determining that the current user's left eye is the dominant eye and the right eye is the auxiliary eye;
if the first missing proportion equals the second missing proportion, determining that both the current user's left eye and right eye are dominant eyes.
Optionally, the step of acquiring the identity information of the user currently using the head-mounted device comprises:
acquiring iris images of both of the user's eyes;
matching the iris images against the iris images in a database;
if the match succeeds, acquiring the identity information associated with the matched iris images as the user identity information.
In addition, to achieve the above objective, another aspect of the present application provides a head-mounted device, the head-mounted device comprising:
an acquisition module, configured to acquire the identity information of the user currently using the head-mounted device;
a first determining module, configured to determine the dominant-eye information and auxiliary-eye information of the current user according to the user identity information;
a second determining module, configured to determine the user's dominant-eye focus area and auxiliary-eye focus area according to the current user's dominant-eye information and auxiliary-eye information;
a rendering module, configured to render the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations.
In addition, to achieve the above objective, another aspect of the present application also provides a computer-readable storage medium on which an image rendering program is stored; when the image rendering program is executed by a processor, the steps of any one of the methods described above are implemented.
The present application proposes a rendering method for a head-mounted device. When a user uses the head-mounted device, the method automatically detects the user's identity information, uses that identity information to acquire the user's dominant-eye information and auxiliary-eye information, then determines the dominant-eye focus area and auxiliary-eye focus area while the user views with the head-mounted device, and assigns different resource configurations to render the user's dominant-eye focus area and auxiliary-eye focus area. In this way, hardware resources are reasonably allocated to the areas that need focused rendering, improving the smoothness of rendering the dominant-eye and auxiliary-eye focus areas, reducing image blurring and stuttering, and improving the user's viewing experience.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of the apparatus involved in the embodiments of the present application;
FIG. 2 is a schematic flowchart of an embodiment of the rendering method for a head-mounted device of the present application;
FIG. 3 is a schematic diagram of the dominant-eye focus area and the auxiliary-eye focus area in the rendering method for a head-mounted device of the present application;
FIG. 4 is a schematic flowchart of another embodiment of the rendering method for a head-mounted device of the present application;
FIG. 5 is a schematic flowchart of yet another embodiment of the rendering method for a head-mounted device of the present application;
FIG. 6 is a schematic flowchart of another embodiment of the rendering method for a head-mounted device of the present application;
FIG. 7 is a schematic module diagram of the rendering method for a head-mounted device of the present application.
The realization of the objectives, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The main solution of the embodiments of the present application is: acquiring the identity information of the user currently using the head-mounted device; determining the dominant-eye information and auxiliary-eye information of the current user according to the user identity information; determining the user's dominant-eye focus area and auxiliary-eye focus area according to the current user's dominant-eye information and auxiliary-eye information; and rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations.
In the prior art, when the display area of a head-mounted device is rendered, the hardware resource allocation used is very simple. Rendering the entire display area uniformly in this fixed way does not suit the user's visual needs and consumes a large amount of hardware resources; moreover, when hardware resources are insufficient, rendering may cause stuttering in the played images, degrading the user's viewing experience.
The present application provides the above solution, aiming to make resource allocation more reasonable when the hardware renders the display area.
An embodiment of the present application proposes a head-mounted device comprising a display portion, a support portion, and a control circuit. The display portion displays image information; the support portion supports the device while the user is using it, so that the display portion is positioned within the user's field of view; and the control circuit controls the head-mounted device to perform rendering and other control operations.
In an embodiment of the present application, as shown in FIG. 1, FIG. 1 is a schematic structural diagram of the terminal in the hardware operating environment involved in the embodiments of the present application.
As shown in FIG. 1, the terminal may include a processor 1001 (e.g., a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 implements connection and communication among these components. The user interface 1003 may include a display and an input unit such as a keyboard, and may optionally also include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (e.g., a WI-FI interface). The memory 1005 may be high-speed RAM or stable non-volatile memory such as disk storage. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Optionally, the terminal may also include a camera, a fingerprint recognizer, a voiceprint recognizer, an iris recognizer, and the like. Of course, the terminal may also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, and a temperature sensor, which will not be described in detail here.
Those skilled in the art will understand that the terminal structure shown in FIG. 1 does not limit the terminal device; the terminal may include more or fewer components than shown, combine certain components, or arrange components differently.
As shown in FIG. 1, the memory 1005, as a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a rendering program for a head-mounted device.
The present application also provides a rendering method for a head-mounted device.
Referring to FIG. 2, an embodiment of the rendering method for a head-mounted device of the present application is proposed. In this embodiment, the rendering method for a head-mounted device comprises:
Step S10: acquiring the identity information of the user currently using the head-mounted device.
When the head-mounted device is turned on, its camera apparatus is started to collect the user's facial information, specifically by taking a photograph. The facial information is transmitted to the database area of the head-mounted device and compared with the facial information stored in the database, thereby determining the identity information of the user currently using the head-mounted device. The user identity information uniquely identifies the user currently using the head-mounted device.
The step of acquiring the identity information of the user currently using the head-mounted device comprises:
Step S11: acquiring iris images of both of the user's eyes;
Step S12: matching the iris images against the iris images in a database;
Step S13: if the match succeeds, acquiring the user identity information associated with the matched iris images as the identity information of the user currently using the head-mounted device.
In some embodiments of the present application, given how a head-mounted device is worn, the identity can be acquired by collecting iris images of both of the user's eyes. Specifically, a dedicated iris-capture camera may be located on the display side of the head-mounted device. When the user is using the device, the iris-capture camera is controlled to collect images of the user's eyes; iris information is then extracted from these eye images and passed to the database, and the iris images are matched against the iris images stored there. If the match succeeds, the user identity information associated with the matched iris images is used as the identity information of the user currently using the head-mounted device. The database stores user identity information together with correspondingly stored data such as dominant-eye information and auxiliary-eye information.
It can be understood that, in the present application, the identity information of the user currently using the head-mounted device may also be acquired by voiceprint recognition, fingerprint recognition, and the like, and is not limited to the recognition methods described above.
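The iris-based lookup described above can be sketched in a few lines. The patent does not specify a matching algorithm, so this is an illustrative sketch only: a hypothetical Hamming-distance comparison over binary iris codes, with the names `identify_user`, `hamming_distance`, and the threshold value all invented for the example.

```python
def hamming_distance(code_a: str, code_b: str) -> float:
    """Fraction of differing bits between two equal-length binary iris codes."""
    assert len(code_a) == len(code_b)
    diffs = sum(1 for a, b in zip(code_a, code_b) if a != b)
    return diffs / len(code_a)

def identify_user(captured_code: str, database: dict, threshold: float = 0.32):
    """Return the user id whose stored iris code is closest to the captured
    one, or None if no database entry is within the distance threshold."""
    best_id, best_dist = None, threshold
    for user_id, stored_code in database.items():
        dist = hamming_distance(captured_code, stored_code)
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id

# Toy 6-bit codes for illustration; real iris codes are far longer.
db = {"alice": "110100", "bob": "001011"}
print(identify_user("110101", db))  # → alice (only 1 of 6 bits differs)
print(identify_user("101010", db))  # no code within threshold → None
```

A failed match (`None`) corresponds to the "identity not in the database" branch handled later in the method, where the device falls back to the server or to a fresh test.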
Step S20: determining the dominant-eye information and auxiliary-eye information of the current user according to the user identity information.
The head-mounted device sends the acquired user identity information to the processor, which compares it with the user identity information saved in the database in memory. If the database contains identity information matching the current user's, the dominant-eye information and auxiliary-eye information stored for that identity are acquired. The dominant-eye information and auxiliary-eye information are determined according to the user's eye-use habits.
Step S30: determining the user's dominant-eye focus area and auxiliary-eye focus area according to the current user's dominant-eye information and auxiliary-eye information.
Once the user's dominant-eye information and auxiliary-eye information are acquired, the user's dominant-eye focus area and auxiliary-eye focus area on the display interface can be determined.
In the human eye, photoreceptor cells in the retina convert incoming light into signals carried to the brain by the optic nerve. The center of the retina is the eye's sharpest visual center and the site where most color is perceived; light focused on the center of the retina therefore produces the most vivid vision. Accordingly, when the user uses a head-mounted device, the part of the device's display area that falls onto the center of the user's retina can be called the focus area, i.e., the ROI (region of interest), as shown in FIG. 3. The focus area is the main area the user watches, and a larger proportion of hardware resources should be allocated to render it so that rendering in the focus area proceeds normally. The rendering includes adjusting the color, resolution, pixels, lighting effects, and shadow effects of the currently played images.
The region of the head-mounted device's display area falling onto the center of the dominant eye's retina is called the dominant-eye focus area; the region falling onto the center of the auxiliary eye's retina is called the auxiliary-eye focus area.
Specifically, when the user uses the head-mounted device to watch movies or other playback content whose images need rendering, the focus areas change as the content develops. In this case, eye-tracking technology can be used to track changes in the user's gaze in real time, promptly obtaining the dominant-eye focus area and auxiliary-eye focus area and allocating sufficient hardware resources to them, so that rendering is timely and the user's experience improves.
Step S40: rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations.
Computing new pixels and displaying them on the screen consumes a great deal of the head-mounted device's computing power and time. For example, with a 5-megapixel display, if every displayed pixel must be rendered, the processor and other hardware bear a heavy burden when rendering the display area. The resource configuration is calculated from the proportion of the test image missing when observed by the user's dominant eye compared with both eyes, and the proportion missing when observed by the auxiliary eye compared with both eyes.
In this embodiment, by acquiring the user information, determining the user's dominant-eye and auxiliary-eye information, then determining the dominant-eye and auxiliary-eye focus areas while the user views images, and calculating the hardware resource configurations needed to render those focus areas normally, the focus areas can be rendered according to the user's personal eye-use habits. This improves the accuracy with which the head-mounted device renders the image rendering areas, and hardware resources can be allocated reasonably between the dominant-eye and auxiliary-eye focus areas, making rendering smoother.
Referring to FIG. 4, which shows another embodiment of the present application, the step of rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations comprises:
Step S41: acquiring a first missing proportion and a second missing proportion;
Step S42: rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations according to the first missing proportion and the second missing proportion.
The first missing proportion is the proportion of the test image missing when the user views it with the right eye compared with viewing it with both eyes; the second missing proportion is the proportion missing when the user views the test image with the left eye compared with both eyes. The resource configurations for rendering the dominant-eye focus area and auxiliary-eye focus area of the head-mounted device's display area are determined from the first missing proportion and the second missing proportion.
The step of rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations according to the first missing proportion and the second missing proportion comprises:
Step S421: calculating a first resource configuration proportion and a second resource configuration proportion according to the first missing proportion and the second missing proportion;
Step S422: rendering the dominant-eye focus area according to the first resource configuration proportion, and rendering the auxiliary-eye focus area according to the second resource configuration proportion.
In this embodiment, after the user identity information is acquired, the dominant-eye information, auxiliary-eye information, first missing proportion, and second missing proportion associated with that identity are obtained. The user's dominant-eye focus area and auxiliary-eye focus area while using the head-mounted device are determined from the dominant-eye and auxiliary-eye information, and the first resource configuration proportion for the dominant-eye focus area and the second resource configuration proportion for the auxiliary-eye focus area are obtained from the first and second missing proportions.
Specifically, the test operation determines the first missing proportion (the proportion missing when the user views the test image with the right eye compared with both eyes) and the second missing proportion (the proportion missing when viewing with the left eye compared with both eyes). Comparing the two, denote the dominant eye's missing proportion (the smaller one) as α and the auxiliary eye's missing proportion (the larger one) as β. With total GPU resources M, the first resource configuration proportion (for the dominant-eye focus area) is γ1 = 1 − α/(α+β), and the dominant-eye focus area's resource configuration is M1, calculated as M1 = γ1 × M; the second resource configuration proportion (for the auxiliary-eye focus area) is γ2 = 1 − β/(α+β), and the auxiliary-eye focus area's resource configuration is M2, calculated as M2 = γ2 × M.
For example, when the first missing proportion is 1/5 and the second missing proportion is 1/4, the user's dominant eye is determined to be the right eye and the auxiliary eye the left eye, with α = 1/5 and β = 1/4. With allocated GPU resources M, the above formulas give the resource configuration needed for the dominant-eye focus area (the first resource configuration) as 5/9 × M, and that needed for the auxiliary-eye focus area (the second resource configuration) as 4/9 × M. Hardware resources are allocated according to these configurations to render the dominant-eye and auxiliary-eye focus areas respectively, ensuring smooth rendering of the rendered areas without affecting the user's viewing.
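The allocation arithmetic above can be checked with a short sketch. The function name `allocation_ratios` is invented for illustration; the formulas γ1 = 1 − α/(α+β) and γ2 = 1 − β/(α+β) are the ones stated in the text, and exact fractions are used so the worked example (α = 1/5, β = 1/4) reproduces 5/9 and 4/9 without rounding.

```python
from fractions import Fraction

def allocation_ratios(alpha: Fraction, beta: Fraction):
    """Given the dominant eye's missing proportion alpha (the smaller one)
    and the auxiliary eye's missing proportion beta (the larger one),
    return (gamma1, gamma2): the shares of the GPU budget M assigned to
    the dominant-eye and auxiliary-eye focus areas."""
    gamma1 = 1 - alpha / (alpha + beta)  # dominant eye gets the larger share
    gamma2 = 1 - beta / (alpha + beta)   # auxiliary eye gets the remainder
    return gamma1, gamma2

# Worked example from the text: first missing proportion 1/5 (right eye),
# second missing proportion 1/4 (left eye), so the right eye is dominant.
g1, g2 = allocation_ratios(Fraction(1, 5), Fraction(1, 4))
print(g1, g2)        # → 5/9 4/9
print(g1 + g2 == 1)  # the two shares always partition the whole budget M
```

Note the symmetry: γ1 simplifies to β/(α+β) and γ2 to α/(α+β), so the eye that misses less of the test image always receives the larger share.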
Referring to FIG. 5, which shows yet another embodiment of the present application, the step of determining the dominant-eye information and auxiliary-eye information of the user currently using the head-mounted device according to the user identity information comprises:
Step S21: determining whether the user identity information exists in a database, the data stored in the database including several pieces of pre-stored user identity information and the dominant-eye information and auxiliary-eye information associated with the pre-stored user identity information;
Step S22: if it exists, acquiring the dominant-eye information and auxiliary-eye information corresponding to the user identity information from the database;
Step S23: if it does not exist, sending to a server a message requesting the dominant-eye information and auxiliary-eye information corresponding to the user identity information, and determining the user's dominant-eye information and auxiliary-eye information from the returned information.
By comparing the identity information of the user currently using the head-mounted device with the user identity information pre-stored in the database, it is determined whether the head-mounted device's local database contains that identity information. If it does, the corresponding dominant-eye and auxiliary-eye information is acquired according to the user identity information, the dominant-eye focus area and auxiliary-eye focus area within the display area viewed by the user are determined, and resource configurations are arranged reasonably to render the dominant-eye and auxiliary-eye focus areas. The dominant-eye focus area and auxiliary-eye focus area are parts of the area currently displayed by the head-mounted device.
The step of determining the user's dominant-eye information and auxiliary-eye information from the returned information comprises:
Step S231: if the returned information contains dominant-eye information and auxiliary-eye information, determining the user's dominant-eye information and auxiliary-eye information from the returned information;
Step S232: if the returned information does not contain dominant-eye information and auxiliary-eye information, performing a dominant-eye and auxiliary-eye test operation, and determining the user's dominant-eye information and auxiliary-eye information according to the result of the test operation.
When the user information does not exist in the head-mounted device's database, the information of the user currently using the device can be sent to the server to check whether the corresponding user identity information exists there. The server is connected to multiple head-mounted devices. Users can store their head-mounted device data on the server so that, when the data stored on the device currently in use has been cleared or when they use another head-mounted device, they can access the head-mounted device server by logging into their personal center (or the device can be set to access it automatically) and download the corresponding personal data to the head-mounted device.
In this embodiment, when the head-mounted device's local database does not contain the identity information of the user currently using it, the acquired user information can be sent to the server to check whether that identity information exists there, making it convenient to obtain the user identity information and the correspondingly stored dominant-eye and auxiliary-eye information.
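The lookup order of this embodiment, local database first, then the server, then a fresh on-device test, can be sketched as below. This is an illustrative sketch only: the function and parameter names (`get_eye_profile`, `fetch_from_server`, `run_eye_test`) and the tuple data shape are assumptions, not interfaces defined by the patent.

```python
def get_eye_profile(user_id, local_db, fetch_from_server, run_eye_test):
    """Return (dominant_eye, auxiliary_eye) for user_id, trying the local
    database (step S22), then the server (step S23), then falling back to
    the on-device eye test (steps S2321-S2324)."""
    profile = local_db.get(user_id)
    if profile is not None:
        return profile                    # found in the local database
    profile = fetch_from_server(user_id)  # ask the server for this identity
    if profile is not None:
        local_db[user_id] = profile       # cache locally for next time
        return profile
    profile = run_eye_test()              # first-time user: measure directly
    local_db[user_id] = profile           # store with the identity, as described
    return profile

local = {"alice": ("right", "left")}
server = {"bob": ("left", "right")}
print(get_eye_profile("alice", local, server.get, lambda: ("both", None)))
print(get_eye_profile("bob", local, server.get, lambda: ("both", None)))
print(get_eye_profile("carol", local, server.get, lambda: ("both", None)))
```

After the second call, "bob" is cached locally, so a repeat lookup never touches the server, matching the download-to-device behavior described above.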
Referring to FIG. 6, which shows another embodiment of the present application, the step of performing a dominant-eye and auxiliary-eye test operation and determining the user's dominant-eye information and auxiliary-eye information according to the result of the test operation comprises:
Step S2321: displaying a test image for performing the test operation on the display interface of the head-mounted device;
Step S2322: progressively shrinking the test image until the user can see the outer edge of the test image;
Step S2323: acquiring the proportion of the test image missing when the user views it with the right eye compared with viewing it with both eyes, recorded as the first missing proportion, and acquiring the proportion of the test image missing when the user views it with the left eye compared with viewing it with both eyes, recorded as the second missing proportion;
Step S2324: determining the current user's dominant-eye information and auxiliary-eye information according to the first missing proportion and the second missing proportion.
If the server connected to the head-mounted device does not contain the identity information of the user currently using the device, the current user is regarded as using the device for the first time, and a test to determine the dominant-eye and auxiliary-eye information is required. The test process is as follows: first, the camera or iris apparatus of the head-mounted device is started to collect the identity information of the user being tested; then a test image for the test operation is inserted into the display interface of the head-mounted device and progressively shrunk until both of the user's eyes can simultaneously see its outer edge, during which the device can be controlled to issue voice prompts asking the user to confirm whether to stop shrinking the displayed test image. If the user's confirmation is received (e.g., via speech recognition), the user is prompted to close the left eye, and the proportion of the image missing when observed with the right eye compared with both eyes is obtained by voice and recorded as the first missing proportion; the user then closes the right eye, and the proportion missing when observing the test image with the left eye compared with both eyes is recorded as the second missing proportion. The missing proportion information is stored in association with the user identity information.
The test image may be a brand logo image added at the center of the head-mounted device's LCD screen. First, four external cameras are turned on to capture external images, which are displayed on the LCD screen; then a brand logo image is added at the center of the screen, the displayed area of the LCD screen is gradually shrunk, and a voice prompt is issued asking the user to keep both eyes open. When only the logo image is visible on the LCD screen and the other areas are blacked out, the user is prompted by voice to open the left eye and close the right eye to observe the logo image, and then to open the right eye and close the left eye to observe it. The proportion of the logo image missing when viewed with the right eye compared with both eyes is recorded as the first missing proportion; the proportion missing when viewed with the left eye is the second missing proportion. By comparing the first and second missing proportions, the user's dominant-eye and auxiliary-eye information is accurately obtained.
The step of determining the user's dominant-eye information and auxiliary-eye information according to the missing proportions comprises:
Step S2325: if the first missing proportion is smaller than the second missing proportion, determining that the current user's right eye is the dominant eye and the left eye is the auxiliary eye;
Step S2326: if the first missing proportion is larger than the second missing proportion, determining that the current user's left eye is the dominant eye and the right eye is the auxiliary eye;
Step S2327: if the first missing proportion equals the second missing proportion, determining that both the current user's left eye and right eye are dominant eyes.
After receiving the first and second missing proportions input by the user, the processor compares them to determine which is larger: when the first missing proportion is smaller than the second, the user's right eye is determined to be the dominant eye and the left eye the auxiliary eye; when the first is larger than the second, the left eye is the dominant eye and the right eye the auxiliary eye; and when the two are equal, both of the user's eyes are determined to be dominant.
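The three-way comparison of steps S2325 to S2327 maps directly to a small function. A minimal sketch; the name `classify_eyes` and the string labels are invented for illustration, and exact fractions avoid floating-point equality pitfalls in the "both proportions equal" branch.

```python
from fractions import Fraction

def classify_eyes(first_missing: Fraction, second_missing: Fraction):
    """Map the right-eye (first) and left-eye (second) missing proportions
    to a (dominant_eye, auxiliary_eye) pair, per steps S2325-S2327."""
    if first_missing < second_missing:
        return "right", "left"   # right eye sees more of the test image
    if first_missing > second_missing:
        return "left", "right"   # left eye sees more of the test image
    return "both", None          # equal proportions: both eyes dominant

print(classify_eyes(Fraction(1, 5), Fraction(1, 4)))  # → ('right', 'left')
print(classify_eyes(Fraction(1, 4), Fraction(1, 5)))  # → ('left', 'right')
print(classify_eyes(Fraction(1, 5), Fraction(1, 5)))  # → ('both', None)
```

The first call reproduces the worked example in the FIG. 4 embodiment, where missing proportions of 1/5 and 1/4 make the right eye dominant.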
In this embodiment, by determining the identity information of the user currently using the head-mounted device, acquiring that user's dominant-eye information and auxiliary-eye information, determining from them the dominant-eye focus area and auxiliary-eye focus area while the user uses the head-mounted device, and then allocating resources to render those focus areas, the dominant-eye and auxiliary-eye focus areas are rendered according to the user's eye-use habits. The rendered areas thus match the user's visual needs, and the images in the head-mounted device's display area appear more three-dimensional and clearer.
In addition, the present application also provides a head-mounted device, the head-mounted device comprising:
an acquisition module, configured to acquire the identity information of the user currently using the head-mounted device;
a first determining module, configured to determine the dominant-eye information and auxiliary-eye information of the current user according to the user identity information;
a second determining module, configured to determine the user's dominant-eye focus area and auxiliary-eye focus area according to the current user's dominant-eye information and auxiliary-eye information;
a rendering module, configured to render the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations.
In addition, the present application also provides a computer-readable storage medium storing a rendering program of a head-mounted device; when the rendering program of the head-mounted device is executed by a processor, the rendering method of the head-mounted device described above is implemented.
The embodiments in this specification are described in a parallel or progressive manner; each embodiment focuses on its differences from the others, and for the parts that are the same or similar the embodiments may be referred to one another. Since the apparatus disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief; for relevant details, refer to the description of the method.
Those of ordinary skill in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of their functions. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
It should also be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between them. Moreover, the terms "comprise", "include", or any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes that element.

Claims (10)

  1. A rendering method for a head-mounted device, characterized in that the method comprises:
    acquiring the identity information of the user currently using the head-mounted device;
    determining the dominant-eye information and auxiliary-eye information of the current user according to the user identity information;
    determining the user's dominant-eye focus area and auxiliary-eye focus area according to the current user's dominant-eye information and auxiliary-eye information; and
    rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations.
  2. The rendering method for a head-mounted device according to claim 1, characterized in that the step of rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations comprises:
    acquiring a first missing proportion and a second missing proportion; and
    rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations according to the first missing proportion and the second missing proportion.
  3. The rendering method for a head-mounted device according to claim 2, characterized in that the step of rendering the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations according to the first missing proportion and the second missing proportion comprises:
    calculating a first resource configuration proportion and a second resource configuration proportion according to the first missing proportion and the second missing proportion; and
    rendering the dominant-eye focus area according to the first resource configuration proportion, and rendering the auxiliary-eye focus area according to the second resource configuration proportion.
  4. The rendering method for a head-mounted device according to claim 1, characterized in that the step of determining the dominant-eye information and auxiliary-eye information of the user currently using the head-mounted device according to the user identity information comprises:
    determining whether the user identity information exists in a database, the data stored in the database including several pieces of pre-stored user identity information and the dominant-eye information and auxiliary-eye information associated with the pre-stored user identity information;
    if it exists, acquiring the dominant-eye information and auxiliary-eye information corresponding to the user identity information from the database; and
    if it does not exist, sending to a server a message requesting the dominant-eye information and auxiliary-eye information corresponding to the user identity information, and determining the user's dominant-eye information and auxiliary-eye information from the returned information.
  5. The rendering method for a head-mounted device according to claim 4, characterized in that the step of determining the user's dominant-eye information and auxiliary-eye information from the returned information comprises:
    if the returned information contains dominant-eye information and auxiliary-eye information, determining the user's dominant-eye information and auxiliary-eye information from the returned information; and
    if the returned information does not contain dominant-eye information and auxiliary-eye information, performing a dominant-eye and auxiliary-eye test operation, and determining the user's dominant-eye information and auxiliary-eye information according to the result of the test operation.
  6. The rendering method for a head-mounted device according to claim 5, characterized in that the step of performing a dominant-eye and auxiliary-eye test operation and determining the user's dominant-eye information and auxiliary-eye information according to the result of the test operation comprises:
    displaying a test image for performing the test operation on the display interface of the head-mounted device;
    progressively shrinking the test image until the user can see the outer edge of the test image;
    acquiring the proportion of the test image missing when the user views it with the right eye compared with viewing it with both eyes, recorded as the first missing proportion, and acquiring the proportion of the test image missing when the user views it with the left eye compared with viewing it with both eyes, recorded as the second missing proportion; and
    determining the current user's dominant-eye information and auxiliary-eye information according to the first missing proportion and the second missing proportion.
  7. The rendering method for a head-mounted device according to claim 6, characterized in that the step of determining the current user's dominant-eye information and auxiliary-eye information according to the first missing proportion and the second missing proportion comprises:
    if the first missing proportion is smaller than the second missing proportion, determining that the current user's right eye is the dominant eye and the left eye is the auxiliary eye;
    if the first missing proportion is larger than the second missing proportion, determining that the current user's left eye is the dominant eye and the right eye is the auxiliary eye; and
    if the first missing proportion equals the second missing proportion, determining that both the current user's left eye and right eye are dominant eyes.
  8. The rendering method for a head-mounted device according to claim 1, characterized in that the step of acquiring the identity information of the user currently using the head-mounted device comprises:
    acquiring iris images of both of the user's eyes;
    matching the iris images against the iris images in a database; and
    if the match succeeds, acquiring the user identity information associated with the matched iris images as the identity information of the user currently using the head-mounted device.
  9. A head-mounted device, characterized in that the head-mounted device comprises:
    an acquisition module, configured to acquire the identity information of the user currently using the head-mounted device;
    a first determining module, configured to determine the dominant-eye information and auxiliary-eye information of the current user according to the user identity information;
    a second determining module, configured to determine the user's dominant-eye focus area and auxiliary-eye focus area according to the current user's dominant-eye information and auxiliary-eye information; and
    a rendering module, configured to render the dominant-eye focus area and the auxiliary-eye focus area with different resource configurations.
  10. A computer-readable storage medium, characterized in that an image rendering program is stored thereon, and when the image rendering program is executed by a processor, the rendering method for a head-mounted device according to any one of claims 1 to 8 is implemented.
PCT/CN2020/123467 2020-07-10 2020-10-24 Head-mounted device, rendering method therefor, and storage medium WO2022007247A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010660813.9A CN111857336B (zh) 2020-07-10 2020-07-10 Head-mounted device, rendering method therefor, and storage medium
CN202010660813.9 2020-07-10

Publications (1)

Publication Number Publication Date
WO2022007247A1 true WO2022007247A1 (zh) 2022-01-13

Family

ID=73153583

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/123467 WO2022007247A1 (zh) 2020-07-10 2020-10-24 Head-mounted device, rendering method therefor, and storage medium

Country Status (2)

Country Link
CN (1) CN111857336B (zh)
WO (1) WO2022007247A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114578940A (zh) * 2020-11-30 2022-06-03 Huawei Technologies Co., Ltd. Control method, apparatus, and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105408838A (zh) * 2013-08-09 2016-03-16 NVIDIA Corporation Dynamically adjusting GPU features based on screen areas observed by the user
CN207654139U (zh) * 2017-05-16 2018-07-27 Yang Mingke Terminal for detecting and suppressing monocular disuse
CN109766011A (zh) * 2019-01-16 2019-05-17 Beijing 7invensun Technology Co., Ltd. Image rendering method and apparatus
US20190317599A1 (en) * 2016-09-16 2019-10-17 Intel Corporation Virtual reality/augmented reality apparatus and method
CN110830783A (zh) * 2019-11-28 2020-02-21 Goertek Technology Co., Ltd. VR image processing method and apparatus, VR glasses, and readable storage medium
CN111314687A (zh) * 2019-11-28 2020-06-19 Goertek Technology Co., Ltd. VR image processing method and apparatus, VR glasses, and readable storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130154913A1 (en) * 2010-12-16 2013-06-20 Siemens Corporation Systems and methods for a gaze and gesture interface
US9239661B2 (en) * 2013-03-15 2016-01-19 Qualcomm Incorporated Methods and apparatus for displaying images on a head mounted display
WO2016017144A1 (en) * 2014-07-31 2016-02-04 Seiko Epson Corporation Display device, control method for display device, and program
KR101870142B1 (ko) * 2016-08-12 2018-06-25 Lee Seong-jun Contact lens for presbyopia
US10642352B2 (en) * 2017-05-18 2020-05-05 Tectus Corporation Gaze calibration via motion detection for eye-mounted displays
CN107315470B (zh) * 2017-05-25 2018-08-17 Tencent Technology (Shenzhen) Co., Ltd. Graphics processing method, processor, and virtual reality system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105408838A (zh) * 2013-08-09 2016-03-16 NVIDIA Corporation Dynamically adjusting GPU features based on screen areas observed by the user
US20190317599A1 (en) * 2016-09-16 2019-10-17 Intel Corporation Virtual reality/augmented reality apparatus and method
CN207654139U (zh) * 2017-05-16 2018-07-27 Yang Mingke Terminal for detecting and suppressing monocular disuse
CN109766011A (zh) * 2019-01-16 2019-05-17 Beijing 7invensun Technology Co., Ltd. Image rendering method and apparatus
CN110830783A (zh) * 2019-11-28 2020-02-21 Goertek Technology Co., Ltd. VR image processing method and apparatus, VR glasses, and readable storage medium
CN111314687A (zh) * 2019-11-28 2020-06-19 Goertek Technology Co., Ltd. VR image processing method and apparatus, VR glasses, and readable storage medium

Also Published As

Publication number Publication date
CN111857336B (zh) 2022-03-25
CN111857336A (zh) 2020-10-30

Similar Documents

Publication Publication Date Title
US9720238B2 (en) Method and apparatus for a dynamic “region of interest” in a display system
US20190004600A1 (en) Method and electronic device for image display
US10284817B2 (en) Device for and method of corneal imaging
US9076033B1 (en) Hand-triggered head-mounted photography
CN109074681A (zh) 信息处理装置、信息处理方法和程序
US11354805B2 (en) Utilization of luminance changes to determine user characteristics
CN103190883A (zh) 一种头戴式显示装置和图像调节方法
CN111556305B (zh) 图像处理方法、vr设备、终端、显示系统和计算机可读存储介质
AU2017201463A1 (en) Methods and systems for authenticating users
US10929957B2 (en) Display method, display device, electronic equipment, and storage medium
CN110051319A (zh) 眼球追踪传感器的调节方法、装置、设备及存储介质
CN113467619A (zh) 画面显示方法、装置和存储介质及电子设备
US20180288333A1 (en) Displaying Images on a Smartglasses Device Based on Image Data Received from External Camera
CN105144704B (zh) 显示设备和显示方法
WO2018219290A1 (zh) 一种信息终端
JP2023515205A (ja) 表示方法、装置、端末機器及びコンピュータプログラム
WO2022007247A1 (zh) Head-mounted device, rendering method therefor, and storage medium
CN113495629A (zh) 笔记本电脑显示屏亮度调节系统及方法
CN106095375B (zh) 显示控制方法和装置
CN109917908B (zh) 一种ar眼镜的图像获取方法及系统
JP2023090721A (ja) 画像表示装置、画像表示用プログラム及び画像表示方法
JP5725159B2 (ja) 測定装置、立体画像表示装置及び測定方法
CN105786430B (zh) 信息处理方法及电子设备
US20230309824A1 (en) Accommodation tracking based on retinal-imaging
CN114356088B (zh) 一种观看者跟踪方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20944121

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20944121

Country of ref document: EP

Kind code of ref document: A1