CN115202475A - Display method, display device, electronic equipment and computer-readable storage medium - Google Patents

Display method, display device, electronic equipment and computer-readable storage medium

Info

Publication number
CN115202475A
CN115202475A
Authority
CN
China
Prior art keywords
value
focal length
multimedia information
user
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210763979.2A
Other languages
Chinese (zh)
Inventor
Li Wei (李巍)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Jinghao Optical Co Ltd
Original Assignee
Jiangxi Jinghao Optical Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Jinghao Optical Co Ltd filed Critical Jiangxi Jinghao Optical Co Ltd
Priority to CN202210763979.2A
Publication of CN115202475A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2016 Rotation, translation, scaling

Abstract

The embodiment of the invention discloses a display method, a display device, electronic equipment and a computer-readable storage medium. The method may include: acquiring the eye gaze area of a user; acquiring depth information of the eye gaze area; determining a focal length value according to the depth information and adjusting the focal length of the zoom lens group to that value; and finally zooming the multimedia information based on the focal length value and displaying the zoomed multimedia information. By dynamically adjusting the focal length of the zoom lens group, the embodiment of the invention can alleviate the vergence-accommodation conflict.

Description

Display method, display device, electronic equipment and computer-readable storage medium
Technical Field
The present invention relates to the field of display technologies, and in particular, to a display method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Virtual Reality (VR) technology is a multimedia technology emerging in recent years, and establishes a virtual reality environment by using computer hardware, software, sensors and the like, so that a user can experience and interact with a virtual world through a VR device.
The display principle of VR is based on binocular parallax: two-dimensional (2D) images of the same picture at different angles are displayed in front of the left and right eyes of an observer through a display screen, and a scene with a stereoscopic effect is produced through parallax fusion of the left and right eyes. When a VR device is used, the observer's eyes perform vergence adjustment as the gaze region changes: when looking at a near object the eyes converge, and when looking at a distant object they diverge. However, the image distance of the VR device's virtual image formed through the lens is fixed, and the light emitted by the display screen carries no depth information, so the focal length of the observer's eyes stays at a fixed value. In the real world, vergence adjustment and focal length adjustment of the human eyes are coordinated; if the focal length of the observer's eyes is fixed while only vergence adjustment is performed, a vergence-accommodation conflict arises.
Disclosure of Invention
The embodiment of the invention discloses a display method, a display device, electronic equipment and a computer-readable storage medium, which are used to alleviate the vergence-accommodation conflict.
A first aspect discloses a display method, which may be applied to a virtual reality/augmented reality (VR/AR) device, a module (e.g., a chip) in the VR/AR device, or a logic module or software that can implement all or part of the functions of the VR/AR device. The VR/AR device is provided with a zoom lens group, and the following description takes application to a VR device as an example. The method may include: acquiring a human eye gaze area of a user; acquiring depth information of the human eye gaze area; determining a focal length value according to the depth information; adjusting the focal length of the zoom lens group to the focal length value; zooming the multimedia information based on the focal length value; and displaying the zoomed multimedia information.
In the embodiment of the invention, the VR device may first acquire the user's eye gaze area and then acquire the depth information of that area. The VR device may then determine a suitable focal length value according to the depth information and adjust the focal length of the zoom lens group to that value, so that the virtual image of the VR device matches the depth information of the gaze area. This keeps the vergence adjustment and the focal length adjustment of the user's eyes consistent and avoids the vergence-accommodation conflict. Because the content the user sees changes when the focal length is adjusted, and an abrupt change may be uncomfortable to the eyes, the VR device may further zoom the multimedia information based on the focal length value and display the zoomed multimedia information, so that the content the user sees stays unchanged and the discomfort caused by rapidly switching pictures is avoided.
As a possible implementation, the acquiring a human eye gaze area of a user includes: acquiring eye movement data of the user based on an eye tracking technology; and determining the eye gaze area of the user based on the eye movement data.
In the embodiment of the invention, the VR device can acquire the user's eye movement data through eye tracking, and can then accurately determine the user's eye gaze area based on the eye movement data.
As a possible implementation, the acquiring the depth information of the human eye gaze area includes: acquiring the depth information of the human eye gaze area through a simultaneous localization and mapping (SLAM) technology or a depth sensor.
In the embodiment of the invention, the VR device can accurately acquire the depth information of the human eye gaze area through the SLAM technology or the depth sensor.
As a possible implementation, the zooming the multimedia information based on the focal length value includes: determining an image distance q based on the focal length value and an object distance p, wherein the VR/AR device is further provided with a display module, the object distance is the distance between the display module and the zoom lens group, and the image distance is the distance between the virtual imaging position and the zoom lens group; calculating a scaling value m according to the formula m = p/q; and scaling the multimedia information according to the scaling value m.
In the embodiment of the present invention, the VR device may determine the image distance based on the focal length value and the object distance, then determine the zoom value based on the image distance and the object distance, and then zoom the multimedia information according to the zoom value, so that the content seen by the user may be maintained unchanged, and the user experience may be improved.
As a possible implementation, the scaling the multimedia information according to the scaling value includes: under the condition that the multimedia information is two-dimensional information, performing layer scaling or rendering scaling on the multimedia information according to the scaling value; and under the condition that the multimedia information is three-dimensional information, performing field-of-view rendering scaling on the multimedia information according to the scaling value.
In the embodiment of the present invention, the VR device may perform different scaling adjustments for two-dimensional (2D) and three-dimensional (3D) information: for two-dimensional information, the VR device may perform layer scaling or rendering scaling on the multimedia information according to the scaling value; for three-dimensional information, the VR device may perform field-of-view rendering scaling on the multimedia information according to the scaling value.
As a possible implementation, the method further comprises: acquiring depth information of a non-gaze area; and blurring the multimedia information of the non-gaze area based on the depth information of the human eye gaze area and the non-gaze area.
In the embodiment of the invention, the VR device can also acquire the depth information of the non-gaze area and blur the multimedia information of the non-gaze area based on the depth information of the gaze area and the non-gaze area, so that the picture the user sees conforms to the visual effect of the real world and looks more realistic, improving user experience.
As a possible implementation, the blurring the multimedia information of the non-gaze area based on the depth information of the human eye gaze area and the non-gaze area includes: determining a depth difference based on the depth information of the human eye gaze area and the non-gaze area; determining a blurring value according to the depth difference; and blurring the multimedia information of the non-gaze area based on the blurring value.
In the embodiment of the invention, the VR device may determine the depth difference based on the depth information of the gaze area and the non-gaze area, and then determine a blurring value according to the depth difference, so that the VR device can blur the multimedia information of the non-gaze area more accurately based on the blurring value.
A second aspect discloses a display apparatus, which may be a VR/AR device provided with a zoom lens group, or a module (e.g., a chip) in the VR/AR device. The apparatus may include:
a first acquisition unit configured to acquire a human eye gaze region of a user;
the second acquisition unit is used for acquiring the depth information of the human eye gazing area;
a determining unit for determining a focal length value according to the depth information;
an adjusting unit for adjusting the focal length of the zoom lens group to the focal length value;
the zooming unit is used for zooming the multimedia information based on the focal length value;
and the display unit is used for displaying the zoomed multimedia information.
As a possible implementation manner, the first obtaining unit is specifically configured to:
acquiring eye movement data of the user based on an eyeball tracking technology;
a human eye gaze region of the user is determined based on the eye movement data.
As a possible implementation manner, the second obtaining unit is specifically configured to:
and acquiring the depth information of the human eye gaze area through a simultaneous localization and mapping (SLAM) technology or a depth sensor.
As a possible implementation, the scaling unit scaling the multimedia information based on the focal length value includes:
determining an image distance q based on the focal length value and an object distance p, wherein the VR/AR device is further provided with a display module, the object distance is a distance between the display module and the zoom lens group, and the image distance is a distance between a virtual imaging position and the zoom lens group;
calculating to obtain a scaling value m according to a formula m = p/q;
and scaling the multimedia information according to the scaling value m.
As a possible implementation manner, the scaling unit scaling the multimedia information according to the scaling value includes:
under the condition that the multimedia information is two-dimensional information, performing layer scaling or rendering scaling on the multimedia information according to the scaling value;
and under the condition that the multimedia information is three-dimensional information, performing field-of-view rendering scaling on the multimedia information according to the scaling value.
As a possible embodiment, the display device may further include:
a third acquisition unit configured to acquire depth information of a non-gazing region;
and the processing unit is used for blurring the multimedia information of the non-gazing area based on the depth information of the human eye gazing area and the non-gazing area.
As a possible implementation manner, the processing unit performs blurring processing on the multimedia information of the non-gaze region based on the depth information of the human eye gaze region and the non-gaze region, including:
determining a depth difference value based on the depth information of the human eye gaze region and the non-gaze region;
determining a blurring value according to the depth difference;
and performing blurring processing on the multimedia information of the non-gazing area based on the blurring value.
A third aspect discloses an electronic device, which may be a VR/AR device or a module (e.g., a chip) in the VR/AR device, and the electronic device may include: a display module, a processor, and a memory. The display module is used for displaying content, the memory is used for storing computer programs, and the processor is used for calling the computer programs. When the processor executes the computer program stored in the memory, the processor is caused to execute the display method disclosed in the first aspect or any embodiment of the first aspect.
A fourth aspect discloses an electronic device, which may be a VR/AR device or a module (e.g., a chip) in the VR/AR device, and which may include: the system comprises an eyeball tracking module, a depth information module, an adjusting module, a display module and a processor. The eyeball tracking module is used for acquiring a human eye watching area of a user; the depth information module is used for acquiring the depth information of the human eye gazing area; the processor is configured to determine a focal length value according to the depth information; the adjusting module is configured to adjust the focal length of the zoom lens group to the focal length value; the processor is further configured to scale the multimedia information based on the focal length value; the display module is used for displaying the zoomed multimedia information.
A fifth aspect discloses a computer-readable storage medium having stored thereon a computer program or computer instructions which, when executed, implement the display method disclosed in the above aspects.
A sixth aspect discloses a chip comprising a processor for executing a program stored in a memory, which program, when executed, causes the chip to perform the above method.
As a possible implementation, the memory is located off-chip.
A seventh aspect discloses a computer program product comprising computer program code which, when executed, causes the above-mentioned display method to be performed.
It is to be understood that the display apparatus provided in the second aspect, the electronic device provided in the third aspect, the electronic device provided in the fourth aspect, the computer-readable storage medium provided in the fifth aspect, the chip provided in the sixth aspect, and the computer program product provided in the seventh aspect are all configured to execute the display method provided in the first aspect of the present application and any possible implementation thereof. Therefore, for the beneficial effects they achieve, reference may be made to the beneficial effects of the corresponding method, and details are not repeated here.
Drawings
FIG. 1 is a schematic flow chart of a display method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of field-of-view rendering scaling according to an embodiment of the present disclosure;
FIGS. 3A-3C are schematic diagrams of multimedia information scaling according to an embodiment of the present invention;
FIGS. 4A-4D are diagrams illustrating another exemplary scaling of multimedia information according to embodiments of the present invention;
FIG. 5 is a schematic structural diagram of a display device according to an embodiment of the disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure;
fig. 7 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application.
Detailed Description
The embodiment of the invention discloses a display method, a display device, electronic equipment and a computer-readable storage medium, which are used for solving the problem of convergence regulation conflict. The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It is to be understood that the described embodiments are only some, not all, of the embodiments of the present application. Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without creative effort fall within the protection scope of the present application. The terms "first," "second," "third," and the like in the description, claims, and drawings of this application are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may include steps or elements that are not expressly listed or that are inherent to such process, method, article, or apparatus.
Only some, but not all, of the material relevant to the present application is shown in the drawings. Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but could have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
As used in this specification, the terms "component," "module," "system," "unit," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a unit may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, or a program, and/or may be distributed between two or more computers. In addition, these units can execute from various computer-readable media having various data structures stored thereon. The units may communicate by way of local and/or remote processes, such as in accordance with a signal having one or more data packets (e.g., data from one unit interacting with another unit in a local system, a distributed system, and/or across a network).
For a better understanding of the embodiments of the present invention, some terms and related technologies of the embodiments of the present invention will be described below.
The augmented reality technology is a technology for fusing virtual information with a real world, and relates to various technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, sensors and the like.
In the real world, in order to see a target clearly, the two eyeballs first rotate so that the lines of sight of both eyes move toward the target (i.e., vergence adjustment). When a user looks at a near target the eyeballs converge, and when the user looks at a distant object they diverge. Then, in order to see the target clearly, each eye adjusts to the correct focal distance (i.e., focal length adjustment). It can be seen that focal length adjustment and vergence adjustment of the eyes are coordinated while the user observes an object in the real world.
Current AR and VR display technologies mainly adopt the stereoscopic display principle, which is based on binocular parallax: a scene with a stereoscopic effect is produced through visual fusion of the left and right eyes. Based on this principle, VR and AR devices mainly comprise a display screen and lenses; virtual imaging through the lenses presents 2D images with parallax in front of the observer's left and right eyes, and a 3D visual sensation is formed through parallax fusion. However, since the positions of the display screen and the lenses are fixed and the focal lengths of the lenses are fixed, the image distances of the 2D images observed by the user's left and right eyes are also fixed, and the light emitted by the display screen carries no depth information, so the focal length of the user's eyes stays at a fixed value (i.e., the diopter of the user's eyes does not change). Therefore, no matter where the user looks in the VR world (i.e., the virtual world), no diopter adjustment is performed. In contrast, vergence adjustment does occur when the user looks at objects at different distances in the VR world. The user's eyes therefore suffer from the vergence-accommodation conflict (VAC): the diopter accommodation and the vergence accommodation of the user's eyes do not match. When a user wears a VR/AR device for a long time, this conflict easily causes eyestrain, nausea, dizziness, and the like.
At present, in order to alleviate the vergence-accommodation conflict, a zoom lens group can be adopted on the VR/AR device: the image distance of the VR virtual image is changed by dynamically adjusting the focal length of the zoom lens group, so that the focal length adjustment of the eyes can match the binocular vergence adjustment, improving user experience. However, when the focal length of the zoom lens group is adjusted dynamically, the fields of view observed by the user's left and right eyes no longer match naturally, and the observed picture changes. Specifically, a change in diopter (i.e., a change in focal length) of the zoom lens group changes both the image distance of the VR virtual image and the user's field of view. In the VR world, for a near picture (i.e., a close view), changing the focal length of the zoom lens group makes the close-range field of view smaller and the objects in it appear enlarged; for a distant picture (i.e., a far view), changing the focal length makes the field of view wider and the objects in it appear smaller. Meanwhile, a fast change of the focal length causes the picture the user sees to switch rapidly (i.e., the field of view and the size of targets in it change), which may make the user uncomfortable and degrade the experience.
The display method provided by the embodiment of the invention can be executed by a VR/AR device, which is generally referred to below as an electronic device. The electronic device may include an image processing device and a head-mounted display device (e.g., VR glasses or a VR helmet); the head-mounted display device may include a zoom lens group, a sensor module, and a display module. The sensor module is used to collect data, which may include, but is not limited to, image data, angle data, orientation data, and the like. The sensor module may include, but is not limited to, an image sensor, a camera, an accelerometer, a distance sensor, a gyroscope, a light sensor, a temperature sensor, a heart rate sensor, a pedometer, a microphone, and the like. The image processing device may be a smartphone, a tablet computer, a notebook computer, or the like. The image processing device and the head-mounted display device may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating a display method according to an embodiment of the present invention. As shown in fig. 1, the display method may include the following steps.
101. The human eye gaze area of the user is acquired.
In order to dynamically adjust the focal length of the zoom lens group so that the focal length adjustment (i.e., focus adjustment) of the user's eyes matches the binocular vergence adjustment (i.e., convergence adjustment), the electronic device can acquire the user's eye gaze area in real time while the electronic device is in use. The eye gaze area is the area on the display module that corresponds to the line of sight of the user's eyes.
The electronic device may obtain eye movement data of the user based on an eye tracking technique and determine the user's eye gaze area based on the eye movement data. Specifically, the electronic device may track and record the motion state of the user's eyes through eye tracking to obtain eye movement data, which may include the position coordinates of the gaze points of the user's eyes, dwell time, pupil diameter, iris angle change, and the like. The electronic device may identify a position on the display module where the dwell time of the user's eyes exceeds a certain threshold as a gaze point; for example, a position gazed at for more than 20 ms may be identified as a gaze point.
The electronic device may determine the user's eye gaze area based on the position coordinates of the gaze point, and may do so in different ways. One way is to directly determine the position corresponding to the coordinates of the gaze point as the eye gaze area. Another way is to determine an area around the gaze point; for example, a circle of fixed radius (e.g., 2 mm) may be drawn with the gaze point as its center, and the area of the circle determined as the eye gaze area. Yet another way is to divide the display module into a plurality of regions in advance and determine the region in which the gaze point falls as the user's eye gaze area.
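The three variants can be summarized in a short sketch (the function names, the 20 ms dwell threshold for gaze-point detection, and the 2 mm default radius are illustrative assumptions drawn from the examples above, not a prescribed implementation):

```python
import math

DWELL_THRESHOLD_MS = 20   # dwell time above which a position counts as a gaze point
REGION_RADIUS_MM = 2.0    # fixed radius used in the circular-region variant

def gaze_point(samples):
    """Return the first sampled position whose dwell time exceeds the
    threshold (a simplification: the text allows several gaze points).

    `samples` is a list of (x_mm, y_mm, dwell_ms) tuples from the eye tracker.
    """
    for x, y, dwell in samples:
        if dwell > DWELL_THRESHOLD_MS:
            return (x, y)
    return None

def circular_gaze_region(point, radius=REGION_RADIUS_MM):
    """Variant 2: a circle of fixed radius centered on the gaze point."""
    return {"center": point, "radius": radius}

def grid_gaze_region(point, cell_w, cell_h):
    """Variant 3: the pre-divided display cell that contains the gaze point."""
    col = math.floor(point[0] / cell_w)
    row = math.floor(point[1] / cell_h)
    return {"row": row, "col": col}
```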
It is to be understood that the electronic device may be provided with an eye tracking device (e.g. an eye tracker), by which the movement state of the user's eye may be acquired.
102. The depth information of the human eye gaze area is acquired.
After the electronic device acquires the user's eye gaze area, it can further acquire the depth information of that area. The depth information of the eye gaze area can be understood as the distance information of the multimedia information in the gaze area, i.e., the image distance that the VR image of that multimedia information should satisfy. In the VR world, targets at different distances have different depths: a distant mountain in the VR world has a relatively large depth (i.e., it is relatively far from the human eye), while a near object has a relatively small depth (i.e., it is relatively close to the human eye). The multimedia information may be an image, a video, etc., which is not limited in the embodiments of the present invention.
The electronic device may obtain the depth information of the eye gaze area through a simultaneous localization and mapping (SLAM) technique or a depth sensor. The electronic device can also determine the depth information of the gaze area from a depth information table. Specifically, the multimedia information displayed on the display module may correspond to a depth information table. For example, an image may include a plurality of pixels, where each pixel may correspond to a depth, or a plurality of adjacent pixels (e.g., a 2 × 2 block) may correspond to one depth. The electronic device can thus look up the depth of the eye gaze area in the depth information table corresponding to the currently displayed multimedia information. For example, the gaze area may include X pixels; assuming one depth per pixel, the electronic device may take the average depth of the X pixels as the depth information of the gaze area.
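As a minimal sketch of the depth-table lookup just described (assuming one depth per pixel; all names are illustrative):

```python
def region_depth(depth_table, region_pixels):
    """Average the per-pixel depths over the eye gaze area.

    depth_table: 2D structure indexed as depth_table[y][x], one depth per
    pixel (the text also allows one depth per small block of pixels).
    region_pixels: iterable of (x, y) coordinates inside the gaze area.
    """
    depths = [depth_table[y][x] for (x, y) in region_pixels]
    return sum(depths) / len(depths)
```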
103. A focal length value is determined according to the depth information.
After the electronic device acquires the depth information of the eye gaze area, in order to adjust the focal length of the zoom lens group so that the focal length adjustment of the user's eyes matches the binocular vergence adjustment, the electronic device can determine a focal length value according to the depth information of the gaze area.
Specifically, the image distance of the VR image formed through the zoom lens group should match the depth information of the eye gaze area (i.e., the image distance of the VR image should equal the image distance corresponding to the multimedia information in the gaze area). Therefore, the electronic device may determine the image distance that the VR image should satisfy according to the depth information of the gaze area: it can take the depth of the eye gaze area as the required image distance, and then determine, from the lens imaging formula (i.e., the Gaussian imaging formula), the focal length the zoom lens group should have to achieve that image distance.
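The focal-length computation can be sketched as follows, assuming the thin-lens (Gaussian) formula 1/f = 1/p + 1/q with the usual convention that a virtual image has a negative image distance; the sign handling is an assumption about the optical layout, since the display of a headset sits inside the focal length of the lens group:

```python
def required_focal_length(p_mm, gaze_depth_mm):
    """Focal length the zoom lens group should reach so that the virtual
    image lands at the depth of the gaze area.

    p_mm: fixed object distance (display module to zoom lens group).
    gaze_depth_mm: depth of the eye gaze area, used as the magnitude of the
    required image distance; a virtual image gives q = -gaze_depth_mm.
    """
    q = -gaze_depth_mm
    return 1.0 / (1.0 / p_mm + 1.0 / q)

# Example: display 40 mm from the lens group, gaze depth 1000 mm
# -> f = 1 / (1/40 - 1/1000) ~= 41.7 mm, slightly longer than p,
# as needed for a magnifier-style virtual image.
```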
104. The focal length of the zoom lens group is adjusted to the focal length value.
After the electronic device determines a focal length value that the zoom lens group should satisfy, the electronic device may adjust the focal length of the zoom lens group to the focal length value. It is to be understood that adjusting the focal length of the zoom lens group may be equivalent to adjusting the diopter of the zoom lens group.
It should be noted that the zoom lens group may include a main lens and a variable-focus lens; adjusting the focal length of the zoom lens group means adjusting the focal length of the variable-focus lens. The variable-focus lens may comprise any suitable lens, such as a glass lens, a polymer lens, a liquid crystal lens, an electro-deformable lens, or some combination thereof. The variable-focus lens may adjust the direction of the light emitted by the display module so that this light appears to come from a particular focal length/image plane relative to the user. In some embodiments, the variable-focus lens may be a liquid crystal lens, which can adjust its focal power quickly enough to keep up with the accommodation speed of the eyes, thereby alleviating the vergence-accommodation conflict and improving user experience. It should also be noted that the main lens and the variable-focus lens in the zoom lens group may be parallel and coaxial.
105. The multimedia information is zoomed based on the focal length value.
After the electronic device adjusts the focal length of the zoom lens group, in order to keep the picture content seen by the user unchanged, the electronic device may zoom the multimedia information based on the focal length value.
Specifically, the electronic device may determine a zoom value based on the focal length value and the object distance, and zoom the multimedia information according to the zoom value. The object distance is the distance between the display module and the zoom lens group. The zoom value m may be calculated as shown in formula (1) below:
m = p/q (1)
where p is the object distance (i.e., the distance between the display module and the zoom lens group) and q is the image distance (i.e., the distance between the virtual imaging position and the zoom lens group).
The electronic device may determine the image distance of the VR image, i.e., q in formula (1), based on the focal length of the zoom lens group and the object distance. It should be understood that the distance between the display module and the zoom lens group may be constant (i.e., p is constant); therefore, when the focal length of the zoom lens group changes, only the image distance of the VR image changes.
It should be noted that the zoom value determined by the electronic device based on the focal length value and the object distance is a zoom value for the original multimedia information (e.g., a zoom value applied to the length and width of an image); the original multimedia information is the multimedia information before zooming.
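Combining formula (1) with the Gaussian formula gives a small sketch of the zoom computation; treating the ratio of new to old zoom values as the factor applied to the picture's width and height is an interpretation of the examples in figs. 3A-3C below, not a formula stated in the text:

```python
def image_distance(p_mm, f_mm):
    """Image distance from 1/f = 1/p + 1/q; a negative q means a virtual image."""
    return 1.0 / (1.0 / f_mm - 1.0 / p_mm)

def zoom_value(p_mm, f_mm):
    """Formula (1): m = p/q, using the magnitude of the image distance."""
    return p_mm / abs(image_distance(p_mm, f_mm))

def rescale_picture(width, height, f_old_mm, f_new_mm, p_mm):
    """Scale the original picture so its content tracks the visible range
    (assumption: the visible range changes in proportion to the zoom value)."""
    k = zoom_value(p_mm, f_new_mm) / zoom_value(p_mm, f_old_mm)
    return (width * k, height * k)
```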
The multimedia information displayed by the display module can be two-dimensional or three-dimensional, and different types can be zoomed in different ways. When the multimedia information is two-dimensional, the electronic device can perform layer scaling or rendering scaling according to the zoom value. When the multimedia information is three-dimensional, the electronic device can perform field-of-view rendering scaling according to the zoom value.
Layer scaling directly changes the size of the layer to be output to the display module, so the size of the multimedia information changes accordingly. Layer scaling can be handled directly by the underlying layers of the electronic device's processor without application-level programming, and is fast. However, when the layer is enlarged, if the multimedia information exceeds the display-area threshold (i.e., its size exceeds the maximum size the display module can display), the electronic device still renders the area beyond the display range.
Rendering scaling treats the display area of the entire display module as a canvas. When the multimedia information is zoomed (e.g., an image is enlarged or reduced), if its size exceeds the display-area threshold, the corresponding multimedia information is rendered only for the user's eye gaze area, and the area beyond the display range is not rendered; if its size does not exceed the threshold, the complete multimedia information is displayed. Rendering scaling requires processing by programs at the application layer of the electronic device.
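The difference between the two 2D modes reduces to whether the overflow beyond the display-area threshold is rendered, as in this sketch (names and signatures are illustrative):

```python
def layer_scaling(layer_w, layer_h, k):
    """Layer scaling: the layer itself is resized; if the result exceeds the
    display area, the overflow is still rendered (then cropped on screen)."""
    return (layer_w * k, layer_h * k)

def rendering_scaling(layer_w, layer_h, k, max_w, max_h):
    """Rendering scaling: the display area is the canvas, so anything beyond
    the display-area threshold is simply not rendered."""
    w, h = layer_w * k, layer_h * k
    return (min(w, max_w), min(h, max_h))
```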
Field-of-view rendering scaling renders the picture according to different fields of view (i.e., it re-renders the 3D multimedia information). Because 3D multimedia information has a complete environment model, field-of-view rendering scaling can be performed on the basis of the original environment model. Referring to fig. 2, fig. 2 is a schematic diagram of field-of-view rendering scaling according to an embodiment of the present invention. As shown in fig. 2, since the 3D multimedia information has complete environment modeling, field-of-view rendering scaling may include scaling in the three dimensions X, Y, and Z, so that the scaled 3D multimedia information still has a complete environment model.
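For the 3D case, a uniform scale in X, Y, and Z applied to the modelled environment before re-rendering can be sketched as follows (a real engine would more likely adjust the camera's field of view; the matrix form is simply the most direct way to show scaling in all three dimensions):

```python
import numpy as np

def view_scale_matrix(m):
    """Homogeneous 4x4 matrix scaling X, Y, and Z by the zoom value m."""
    return np.diag([m, m, m, 1.0])

def scale_environment(vertices, m):
    """Apply the scale to an (N, 4) array of homogeneous vertex coordinates,
    so the complete environment model is re-rendered at the new scale."""
    return vertices @ view_scale_matrix(m).T
```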
It should be noted that the electronic device may also zoom the multimedia information based on the depth information directly. It can therefore be understood that adjusting the focal length of the zoom lens group and zooming the multimedia information can be performed simultaneously: after the electronic device obtains the depth information, it can zoom the multimedia information based on the depth information while performing steps 103 and 104.
106. The zoomed multimedia information is displayed.
After the electronic device zooms the multimedia information based on the focal length value, the zoomed multimedia information can be displayed through the display module. The display module may be an electronic display, which may comprise a single electronic display or multiple electronic displays (e.g., two electronic displays, one in front of each of the user's left and right eyes). The electronic display may include a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) display, a transparent OLED display, a projector, or a combination thereof.
Referring to fig. 3A to fig. 3C, fig. 3A to fig. 3C are schematic diagrams of multimedia information scaling according to an embodiment of the present invention. As shown in fig. 3A, the maximum width of the display screen is x and the maximum height is y. The original multimedia information may be an image or a frame of video, and the width and height of the original picture displayed on the display screen are x1 and y1, respectively, with x1 < x and y1 < y. Before the focal length of the zoom lens group is adjusted, its focal length may be f1, and the picture the user sees may be the original picture, i.e., the width and height of the user's visible range are x1 and y1, respectively. Assume the focal length of the zoom lens group is then adjusted to f2, so that the user's visible range shrinks to x2 × y2, with x2 < x1 and y2 < y1. If the picture displayed on the display screen is not adjusted, the picture the user sees changes: as shown in fig. 3B, the user can only see part of the original picture, and objects in the VR world appear correspondingly enlarged or reduced. Therefore, the electronic device may zoom the multimedia information according to the change of the user's visible range (i.e., determine a zoom value based on the focal length value of the zoom lens group and zoom the multimedia information accordingly). As shown in fig. 3C, the width and height of the original picture may be scaled to x2 and y2; the user's visible range is then also x2 × y2, which ensures that the picture content the user sees is unchanged before and after the focal length adjustment (i.e., the picture content within the user's visible range in fig. 3A and fig. 3C is the same), improving user experience. It should be understood that the scaling shown in fig. 3C may be layer scaling, i.e., scaling the original layer from x1 × y1 to x2 × y2, or rendering scaling.
Referring to fig. 4A to fig. 4D, fig. 4A to fig. 4D are schematic diagrams of another multimedia information scaling according to an embodiment of the invention. As shown in fig. 4A, the original picture may have width x and height y4, where the width equals the maximum width the display screen can display and the height y4 is smaller than the maximum height y. Before the focal length of the zoom lens group is adjusted, its focal length may be f3; the width and height of the user's visible range are x3 and y3, respectively (x3 < x, y3 < y4), and the picture the user sees may be part of the original picture. Assume the focal length is then adjusted to f4, so that the user's visible range becomes x5 × y5, with x5 > x3 and y5 > y3. If the picture displayed on the display screen is not adjusted, the picture the user sees changes: as shown in fig. 4B, the user sees a larger picture, and objects in the VR world appear correspondingly zoomed. Therefore, the electronic device may determine a zoom value based on the focal length of the zoom lens group, from which it may be determined that the original picture needs to be enlarged to x6 × y6. As shown in fig. 4C, the electronic device may enlarge the original multimedia information to x6 × y6 by layer scaling, i.e., zoom the original layer from x × y4 to x6 × y6; the picture content the user can see is then the same as before the focal length change (i.e., the picture content within the user's visible range in fig. 4A and fig. 4C is the same). As shown in fig. 4C, when the electronic device performs layer scaling, the layer size (x6 × y6) is larger than the maximum picture the display screen can show (x × y), and the electronic device renders the area beyond the display range even though it cannot be displayed. In contrast, as shown in fig. 4D, with rendering scaling the electronic device does not render the area beyond the display range and may render only an x × y picture.
It should be understood that the scaling of the multimedia information shown in fig. 3A-4D is only exemplary and not limiting.
Optionally, the electronic device may further obtain depth information of the non-gaze area and blur the multimedia information of the non-gaze area based on the depth information of the eye gaze area and the non-gaze area. By blurring the multimedia information in the non-gaze area, the electronic device can more realistically simulate the user's viewing experience in the real world and improve viewing comfort.
Specifically, the electronic device may determine a depth difference based on the depth information of the eye gaze area and the non-gaze area (i.e., the depth difference between the gaze area and the non-gaze area), then determine a blurring value according to the depth difference, and blur the multimedia information of the non-gaze area based on the blurring value. Each depth difference may correspond to a blurring value: the larger the depth difference, the larger the blurring value, and the smaller the depth difference, the smaller the blurring value. One way for the electronic device to blur the non-gaze area is as follows: the electronic device divides the non-gaze area into a plurality of blocks and calculates the depth difference between each block and the eye gaze area; it then determines a blurring value for each block from that block's depth difference and blurs each block according to its blurring value.
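A sketch of the block-wise blurring, using OpenCV's Gaussian blur as the blurring primitive; the linear mapping from depth difference to kernel size (including its gain and cap) is an illustrative assumption, since the text only requires that larger depth differences yield larger blurring values:

```python
import cv2

def blur_value(depth_diff, gain=0.05, max_kernel=31):
    """Map a depth difference to an odd Gaussian kernel size; the larger the
    difference, the stronger the blur."""
    k = min(max_kernel, max(1, int(depth_diff * gain)))
    return k if k % 2 == 1 else k + 1

def blur_non_gaze_blocks(frame, blocks, gaze_depth):
    """Blur each non-gaze block of `frame` (an H x W x 3 uint8 array).

    blocks: list of (x, y, w, h, block_depth) tuples covering the non-gaze area.
    """
    out = frame.copy()
    for x, y, w, h, d in blocks:
        k = blur_value(abs(d - gaze_depth))
        if k > 1:
            out[y:y + h, x:x + w] = cv2.GaussianBlur(
                out[y:y + h, x:x + w], (k, k), 0)
    return out
```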
It should be understood that when determining the blurring value, the electronic device may also take the depth information of the eye gaze area itself into account; that is, it may determine the blurring value of the non-gaze area according to both the depth information of the gaze area and the depth difference, which makes the blurring value more accurate.
In the embodiment of the invention, the electronic device may be provided with a zoom lens group, and the vergence-accommodation conflict can be alleviated by adjusting its focal length. On this basis, the electronic device can also zoom the multimedia information so that the picture the user sees stays unchanged before and after the focal length adjustment, which reduces eye fatigue and improves user experience. In addition, the electronic device can blur the non-gaze area of the human eyes to more realistically simulate the user's viewing experience in the real world.
It should be understood that fig. 1 illustrates the display method with the VR/AR device (i.e., the electronic device) as the execution subject, but the present application does not limit the execution subject of the display method. For example, the execution subject in fig. 1 may also be a chip, a chip system, or a processor that supports the VR/AR device in implementing the method, or a logic module or software capable of implementing all or part of the functionality of the VR/AR device.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a display device according to an embodiment of the disclosure. The display device may be a VR/AR device, or a module (e.g., a chip) in the VR/AR device, and the VR/AR device is provided with a zoom lens group. As shown in fig. 5, the apparatus may include:
a first obtaining unit 501, configured to obtain a human eye gazing area of a user;
a second obtaining unit 502, configured to obtain depth information of the eye gazing area;
a determining unit 503, configured to determine a focal length value according to the depth information;
an adjusting unit 504 for adjusting the focal length of the zoom lens group to the focal length value;
a scaling unit 505, configured to scale the multimedia information based on the focal length value;
a display unit 506, configured to display the scaled multimedia information.
In an embodiment, the first obtaining unit 501 is specifically configured to:
acquiring eye movement data of the user based on an eyeball tracking technology;
a human eye gaze region of the user is determined based on the eye movement data.
In an embodiment, the second obtaining unit 502 is specifically configured to:
and acquiring depth information of the human eye gaze area through a SLAM technique or a depth sensor.
In one embodiment, the scaling unit 505 scaling the multimedia information based on the focal length value comprises:
determining an image distance q based on the focal length value and an object distance p, wherein the VR/AR device is further provided with a display module, the object distance is a distance between the display module and the zoom lens group, and the image distance is a distance between a virtual imaging position and the zoom lens group;
calculating to obtain a scaling value m according to a formula m = p/q;
and scaling the multimedia information according to the scaling value m.
In one embodiment, the scaling unit 505 scaling the multimedia information according to the scaling value comprises:
under the condition that the multimedia information is two-dimensional information, performing layer scaling or rendering scaling on the multimedia information according to the scaling value;
and under the condition that the multimedia information is three-dimensional information, performing field-of-view rendering scaling on the multimedia information according to the scaling value.
In one embodiment, the display device may further include:
a third obtaining unit 507, configured to obtain depth information of a non-gazing area;
a processing unit 508, configured to perform blurring processing on the multimedia information in the non-gazing region based on the depth information of the human eye gazing region and the non-gazing region.
In one embodiment, the processing unit 508 performs blurring processing on the multimedia information of the non-gazing region based on the depth information of the human eye gazing region and the non-gazing region, including:
determining a depth difference value based on the depth information of the human eye gaze region and the non-gaze region;
determining a blurring value according to the depth difference;
and performing blurring processing on the multimedia information of the non-gazing area based on the blurring value.
More detailed descriptions of the first obtaining unit 501, the second obtaining unit 502, the determining unit 503, the adjusting unit 504, the scaling unit 505, the display unit 506, the third obtaining unit 507, and the processing unit 508 can be obtained by directly referring to the related descriptions in the method embodiment shown in fig. 1, and are not repeated here.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. The electronic device may be a VR/AR device, or a module (e.g., a chip) in the VR/AR device, and the VR/AR device is provided with a zoom lens group. As shown in fig. 6, the electronic device may include an eye tracking module, a depth information module, an adjustment module, a display module, and a processor. Meanwhile, the eyeball tracking module, the depth information module, the adjusting module and the display module can be respectively connected with the processor.
The eyeball tracking module may be configured to obtain eye movement data of the human eye of the user, and may obtain a human eye gazing area of the user through the eye movement data (i.e., perform step 101). The processor may obtain the depth information of the human eye gazing area through the depth information module (i.e., perform step 102). The processor may then also determine a focus value based on the depth information (i.e. perform step 103 above). Then, the processor may adjust the focal length of the zoom lens assembly to the focal length value through the adjusting module (i.e., perform step 104). The processor may then scale the multimedia information based on the depth information or the focal length value and display the scaled multimedia information via the display module (i.e., perform steps 105 and 106 described above).
In some embodiments, after the eye tracking module obtains the eye movement data of the human eyes of the user, the eye movement data may be sent to the processor, and then the processor may obtain the human eye fixation area of the user through the eye movement data.
Optionally, the eyeball tracking module may be an eye tracker, the depth information module may be a SLAM module or a depth sensor, and the display module may be an electronic display. In one embodiment, the electronic device may include a head-mounted display device (e.g., VR glasses or VR helmet) within which the eye tracking module, the depth information module, the adjustment module, and the display module may be disposed.
In some embodiments, the electronic device may further include a camera module (e.g., a camera) that may capture pictures or video of the surrounding environment in real-time. The electronic device may also include a position sensor, such as one or more accelerometers, one or more gyroscopes, one or more magnetometers, or a combination thereof. In one embodiment, the position sensor may include multiple accelerometers for measuring translational motion (forward/backward, up/down, left/right) and multiple gyroscopes for measuring rotational motion (e.g., pitch, yaw, roll). Optionally, the electronic device may further include at least one memory.
More detailed descriptions of the eyeball tracking module, the depth information module, the adjustment module, the display module, and the processor may be directly obtained by referring to the description of the electronic device in the embodiment of the method shown in fig. 1, which is not repeated herein.
Referring to fig. 7, fig. 7 is a schematic structural diagram of another electronic device disclosed in the embodiment of the present application; the electronic device may be a VR/AR device or a module (e.g., a chip) in the VR/AR device. As shown in fig. 7, the electronic device 700 may include: at least one processor 701, such as a central processing unit (CPU), at least one memory 705, and at least one communication bus 702. Optionally, the electronic device 700 may further include at least one of a network interface 704, a user interface 703, and a display module 706. The communication bus 702 is used to enable connection and communication between these components. Optionally, the user interface 703 may comprise a handheld device or a keyboard, and the network interface 704 may comprise a standard wired interface or a wireless interface (e.g., a Bluetooth interface or a Wi-Fi interface). The memory 705 may be a random access memory (RAM) or a non-volatile memory, such as at least one disk memory. Optionally, the memory 705 may also be at least one storage device located remotely from the processor 701. As shown in fig. 7, the memory 705, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the electronic device 700 shown in fig. 7, the network interface 704 may provide a network communication function, and the user interface 703 is primarily used to provide an input interface for the user.
In one embodiment, processor 701 may be configured to invoke a device control application stored in memory 705, which may implement:
acquiring a human eye gazing area of a user; acquiring depth information of the human eye gazing area; determining a focal length value according to the depth information; adjusting the focal length of the zoom lens group to the focal length value; zooming the multimedia information based on the focal length value; and displaying the zoomed multimedia information.
It should be understood that the electronic device 700 described in this embodiment of the present application may be used to execute the method performed by the electronic device in the embodiment of fig. 1; reference may be made directly to the relevant description, and details are not repeated herein.
The embodiment of the invention also discloses a computer-readable storage medium having instructions stored thereon, where the instructions, when executed, perform the method in the above method embodiments.
Embodiments of the present invention also disclose a computer program product comprising instructions that, when executed, perform the method in the above method embodiments.
The objects, technical solutions, and advantages of the present application have been described in further detail through the above embodiments. It should be understood that the above embodiments are only examples of the present application and are not intended to limit its scope; any modifications, equivalent substitutions, improvements, and the like made on the basis of the technical solutions of the present application shall fall within the scope of the present application.

Claims (10)

1. A display method, applied to a virtual reality (VR)/augmented reality (AR) device provided with a zoom lens group, the method comprising:
acquiring a human eye gazing area of a user;
acquiring depth information of the human eye gazing area;
determining a focal length value according to the depth information;
adjusting the focal length of the zoom lens group to the focal length value;
zooming the multimedia information based on the focal length value;
and displaying the zoomed multimedia information.
2. The method of claim 1, wherein the acquiring a human eye gazing area of a user comprises:
acquiring eye movement data of the user based on an eyeball tracking technology;
determining a human eye gazing area of the user based on the eye movement data.
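As one possible concretization of claim 2 (the claim itself only requires that the gazing area be determined from the eye movement data), a common eye-tracking heuristic is dispersion-threshold fixation detection: if recent gaze samples stay within a small spatial window, they are treated as a fixation whose centroid defines the gazing area. A minimal sketch, with an assumed threshold and normalized-coordinate input format:

```python
# Hypothetical dispersion-threshold fixation detector (I-DT style).
# Input: recent gaze samples in normalized display coordinates.

def gaze_region(samples, dispersion_threshold=0.05):
    xs = [x for x, _ in samples]
    ys = [y for _, y in samples]
    dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
    if dispersion > dispersion_threshold:
        return None  # samples too spread out: a saccade, not a fixation
    # The centroid of the samples serves as the center of the gazing area.
    return {"center": (sum(xs) / len(xs), sum(ys) / len(ys)),
            "radius": dispersion_threshold}

print(gaze_region([(0.40, 0.50), (0.41, 0.51), (0.40, 0.49)]))
```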
3. The method of claim 1, wherein the acquiring depth information of the human eye gazing area comprises:
acquiring the depth information of the human eye gazing area through a simultaneous localization and mapping (SLAM) technology or a depth sensor.
4. The method of claim 1, wherein the zooming the multimedia information based on the focal length value comprises:
determining an image distance q based on the focal length value and an object distance p, wherein the VR/AR device is further provided with a display module, the object distance is a distance between the display module and the zoom lens group, and the image distance is a distance between a virtual imaging position and the zoom lens group;
calculating a scaling value m according to the formula m = p/q;
and scaling the multimedia information according to the scaling value m.
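A worked example of the computation in claim 4. The claim does not state how the image distance q follows from the focal length value and the object distance p; the sketch below assumes the Gaussian thin-lens relation for a virtual image with the display panel inside the focal length, i.e. q = p·f/(f − p), which is one natural reading for a head-mounted display:

```python
# Worked example for claim 4 under the assumed thin-lens relation:
#     1/q = 1/p - 1/f   =>   q = p * f / (f - p)
# This derivation of q is an assumption; the claim only states that q
# is determined from the focal length value and the object distance.

def image_distance(p_mm, f_mm):
    if p_mm >= f_mm:
        raise ValueError("sketch assumes the display sits inside the focal length")
    return p_mm * f_mm / (f_mm - p_mm)

def scaling_value(p_mm, f_mm):
    q = image_distance(p_mm, f_mm)
    return p_mm / q  # m = p/q, the formula recited in claim 4

# Display panel 40 mm from the zoom lens group, focal length 50 mm:
# q = 40*50/(50-40) = 200 mm, so m = 40/200 = 0.2, i.e. the content is
# shrunk to offset the 5x lens magnification (q/p).
print(scaling_value(40.0, 50.0))  # 0.2
```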
5. The method of claim 4, wherein scaling the multimedia information according to the scaling value comprises:
under the condition that the multimedia information is two-dimensional information, carrying out layer scaling or rendering scaling on the multimedia information according to the scaling value;
and under the condition that the multimedia information is three-dimensional information, performing field-of-view rendering scaling on the multimedia information according to the scaling value.
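Claim 5's two branches might look as follows in a renderer. The function name and the dict-based content model are assumptions; the claim only distinguishes layer/rendering scaling for two-dimensional information from field-of-view rendering scaling for three-dimensional information:

```python
# Hypothetical dispatch for claim 5; the content representation is
# illustrative only.

def apply_scaling(content, m):
    if content["dimensions"] == 2:
        # Two-dimensional information: layer scaling or rendering scaling.
        content["layer_scale"] = m
    else:
        # Three-dimensional information: field-of-view rendering scaling.
        content["fov_scale"] = m
    return content

print(apply_scaling({"dimensions": 2}, 0.2))  # {'dimensions': 2, 'layer_scale': 0.2}
```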
6. The method according to any one of claims 1-5, further comprising:
acquiring depth information of a non-gazing area;
and blurring the multimedia information of the non-gazing area based on the depth information of the human eye gazing area and the non-gazing area.
7. The method of claim 6, wherein the blurring the multimedia information of the non-gazing area based on the depth information of the human eye gazing area and the non-gazing area comprises:
determining a depth difference value based on the depth information of the human eye gazing area and the non-gazing area;
determining a blurring value according to the depth difference value;
and blurring the multimedia information of the non-gazing area based on the blurring value.
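Claims 6-7 can be pictured with a simple monotone mapping from depth difference to blur strength. The specific saturating curve and the maximum blur radius below are assumptions; the claims only require that the blurring value be determined from the depth difference:

```python
# Illustrative mapping for claims 6-7: blur strength grows with the
# depth difference between the gazing area and a non-gazing area.

def blurring_value(gaze_depth_m, region_depth_m, max_blur_px=8.0):
    depth_diff = abs(region_depth_m - gaze_depth_m)
    # Monotone and bounded: 0 at equal depth, approaching max_blur_px
    # as the depth difference grows.
    return max_blur_px * depth_diff / (depth_diff + 1.0)

print(blurring_value(2.0, 2.5))  # ~2.67 px for a nearby region
print(blurring_value(2.0, 6.0))  # 6.4 px for a distant background
```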
8. A display apparatus, wherein the apparatus is a virtual reality (VR)/augmented reality (AR) device provided with a zoom lens group, or a module in such a VR/AR device, the apparatus comprising:
a first acquisition unit, configured to acquire a human eye gazing area of a user;
a second acquisition unit, configured to acquire depth information of the human eye gazing area;
a determining unit, configured to determine a focal length value according to the depth information;
an adjustment unit, configured to adjust a focal length of the zoom lens group to the focal length value;
a zooming unit, configured to zoom the multimedia information based on the focal length value;
and a display unit, configured to display the zoomed multimedia information.
9. An electronic device, comprising an eyeball tracking module, a depth information module, an adjusting module, a display module, and a processor, wherein:
the eyeball tracking module is used for acquiring a human eye gazing area of a user;
the depth information module is used for acquiring the depth information of the human eye gazing area;
the processor is used for determining a focal length value according to the depth information;
the adjusting module is used for adjusting the focal length of the zoom lens group to the focal length value;
the processor is further configured to scale the multimedia information based on the focal length value;
and the display module is used for displaying the zoomed multimedia information.
10. A computer-readable storage medium, in which a computer program or computer instructions are stored which, when executed, implement the method of any one of claims 1-7.
CN202210763979.2A 2022-06-30 2022-06-30 Display method, display device, electronic equipment and computer-readable storage medium Pending CN115202475A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210763979.2A CN115202475A (en) 2022-06-30 2022-06-30 Display method, display device, electronic equipment and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210763979.2A CN115202475A (en) 2022-06-30 2022-06-30 Display method, display device, electronic equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN115202475A true CN115202475A (en) 2022-10-18

Family

ID=83578076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210763979.2A Pending CN115202475A (en) 2022-06-30 2022-06-30 Display method, display device, electronic equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN115202475A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115562497A (en) * 2022-11-04 2023-01-03 浙江舜为科技有限公司 Augmented reality information interaction method, augmented reality device, and storage medium
CN115562497B (en) * 2022-11-04 2024-04-05 浙江舜为科技有限公司 Augmented reality information interaction method, augmented reality device, and storage medium

Similar Documents

Publication Publication Date Title
JP6353214B2 (en) Image generating apparatus and image generating method
US10999412B2 (en) Sharing mediated reality content
US11184597B2 (en) Information processing device, image generation method, and head-mounted display
US20180160048A1 (en) Imaging system and method of producing images for display apparatus
JP2017142569A (en) Method and program for providing head-mounted display with virtual space image
US11244496B2 (en) Information processing device and information processing method
CN113568170A (en) Virtual image generation system and method of operating the same
WO2020006519A1 (en) Synthesizing an image from a virtual perspective using pixels from a physical imager array
CN112655202B (en) Reduced bandwidth stereoscopic distortion correction for fisheye lenses of head-mounted displays
CN106168855B (en) Portable MR glasses, mobile phone and MR glasses system
JP2020024417A (en) Information processing apparatus
US11956415B2 (en) Head mounted display apparatus
US20230239457A1 (en) System and method for corrected video-see-through for head mounted displays
CN114371779B (en) Visual enhancement method for sight depth guidance
JP2019125215A (en) Information processing apparatus, information processing method, and recording medium
US11212502B2 (en) Method of modifying an image on a computational device
CN115202475A (en) Display method, display device, electronic equipment and computer-readable storage medium
CN106851249A (en) Image processing method and display device
RU2020126876A (en) Device and method for forming images of the view
CN111506188A (en) Method and HMD for dynamically adjusting HUD
KR101947372B1 (en) Method of providing position corrected images to a head mount display and method of displaying position corrected images to a head mount display, and a head mount display for displaying the position corrected images
CN117043722A (en) Apparatus, method and graphical user interface for map
CN114581514A (en) Method for determining fixation point of eyes and electronic equipment
CN114020150A (en) Image display method, image display device, electronic apparatus, and medium
JPWO2017191703A1 (en) Image processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination