CN113325947A - Display method, display device, terminal equipment and storage medium - Google Patents
- Publication number: CN113325947A
- Application number: CN202010130618.5A
- Authority
- CN
- China
- Prior art keywords
- depth
- information
- depth plane
- target
- plane
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/28—Systems for automatic generation of focusing signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Optics & Photonics (AREA)
- Processing Or Creating Images (AREA)
- Position Input By Displaying (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a display method, a display device, terminal equipment and a storage medium. The method comprises the following steps: acquiring gaze information of a user on a current image; determining a corresponding target depth plane based on the gaze information; and adjusting display parameters of other depth planes, wherein the display parameters of the other depth planes are determined according to the distances between the other depth planes and the target depth plane, and the other depth planes are the depth planes on the current image other than the target depth plane. By this method, the sense of depth of the current image viewed by the user can be enhanced.
Description
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a display method, a display device, terminal equipment and a storage medium.
Background
Virtual Reality (VR) technology is a computer simulation technology that creates and lets users experience a virtual world. It uses a computer to generate a virtual environment, a system-level simulation of interactive three-dimensional dynamic views and physical behaviors with multi-source information fusion, and immerses the user in that environment.
Augmented Reality (AR) technology skillfully fuses virtual information with the real world. It makes wide use of technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction and sensing, applies computer-generated virtual information such as text, images, three-dimensional models, music and video to the real world after simulation, and lets the two kinds of information complement each other, thereby augmenting the real world.
When developing virtual scenes with VR and AR technologies, the depth perception of the user's visual system needs to be fully exploited to create a stronger sense of stereoscopy and depth. However, in some scenes (for example, scenes with too small a field of view or scenes containing distant objects), the user's visual system alone cannot produce a sense of stereoscopy and depth in the virtual scene.
Disclosure of Invention
The embodiment of the invention provides a display method, a display device, terminal equipment and a storage medium, so as to enhance the sense of depth when a user views a current image.
In a first aspect, an embodiment of the present invention provides a display method, including:
acquiring gaze information of a user on a current image;
determining a corresponding target depth plane based on the gaze information;
and adjusting display parameters of other depth planes, wherein the display parameters of the other depth planes are determined according to the distances between the other depth planes and the target depth plane, and the other depth planes are depth planes on the current image except the target depth plane.
Further, the gazing information comprises gazing point information; correspondingly, the determining a corresponding target depth plane based on the gaze information includes:
determining a target object corresponding to the gazing point information on the current image;
and determining the depth plane where the target object is located as a target depth plane.
Further, the display parameter includes a blur radius.
Further, in the case that the number of the remaining depth planes is at least two, the blur radius of each of the remaining depth planes is proportional to the distance of the remaining depth plane from the target depth plane.
Further, the distance between the remaining depth plane and the target depth plane is determined by a difference between the distance information of the remaining depth plane and the distance information of the target depth plane.
Further, the method further comprises:
determining a depth plane and corresponding distance information contained in each frame of image in a virtual reality or augmented reality video, wherein the current image is the image currently displayed in the virtual reality or augmented reality video, and the distance information is the absolute distance information between the corresponding depth plane and the user.
Further, the determining a depth plane included in each frame of image in the virtual reality or augmented reality video includes:
acquiring a target image in the virtual reality or augmented reality video frame by frame;
acquiring depth information of an object included in the target image;
and segmenting the target image based on the depth information to obtain at least one depth plane, and determining the distance information of each depth plane obtained by segmentation according to the depth information.
Further, the depth information of the objects included in the same depth plane is the same.
In a second aspect, an embodiment of the present invention further provides a display device, including:
the acquisition module is used for acquiring gaze information of the user on the current image;
a determination module for determining a corresponding target depth plane based on the gaze information;
and the adjusting module is used for adjusting display parameters of other depth planes, the display parameters of the other depth planes are determined according to the distances between the other depth planes and the target depth plane, and the other depth planes are depth planes on the current image except the target depth plane.
In a third aspect, an embodiment of the present invention further provides a terminal device, including:
one or more processors;
storage means for storing one or more programs;
the one or more programs are executed by the one or more processors, so that the one or more processors implement the method provided by the embodiment of the invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method provided by the embodiment of the present invention.
The embodiment of the invention provides a display method, a display device, terminal equipment and a storage medium, wherein the method comprises the steps of firstly acquiring gaze information of a user on a current image; then determining a corresponding target depth plane based on the gaze information; and finally, adjusting display parameters of other depth planes, wherein the display parameters of the other depth planes are determined according to the distances between the other depth planes and the target depth plane, and the other depth planes are depth planes on the current image except the target depth plane. By means of this technical scheme, the sense of depth of the current image viewed by the user can be enhanced.
Drawings
Fig. 1 is a schematic flowchart of a display method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a display method according to a second embodiment of the present invention;
fig. 2a is a schematic diagram illustrating an effect of image preprocessing according to a second embodiment of the present invention;
fig. 2b is a schematic view of a scene including a plurality of depth planes according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a display device according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like. In addition, the embodiments and features of the embodiments in the present invention may be combined with each other without conflict.
The term "include" and variations thereof as used herein are intended to be open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment".
Example one
Fig. 1 is a schematic flowchart of a display method according to an embodiment of the present invention, where the method is applicable to a situation of improving a depth sense of an image, and the method may be executed by a display device, where the device may be implemented by software and/or hardware and is generally integrated on a terminal device, where the terminal device in this embodiment includes but is not limited to: devices capable of enabling virtual reality scene display, such as VR devices; or a device capable of augmented reality scene display, such as an AR device.
The display method can be regarded as a depth perception enhancement method for a three-dimensional virtual scene, where depth perception refers to the process by which the human visual system judges the distance of different objects. Generally, the sources of cues from which the visual system perceives depth can be divided into two broad categories. One is monocular cues, which are obtained from the visual information of a single eye. The other is binocular cues, which require the cooperation of both eyes.
Focusing and defocusing are one of the main monocular cues by which the visual system perceives depth. When an observer gazes at an object, the picture in the same depth plane as that object is relatively clear (in focus), while pictures in other depth planes are relatively blurred (out of focus), and the degree of blurring is affected by the absolute distance difference between the depth planes.
Binocular disparity is one of the main binocular cues for the visual system to perceive depth. The closer the object is to the observer, the greater the difference in the objects seen by the two eyes, which creates binocular parallax. The brain can estimate the distance of an object to the eye using a measurement of this parallax.
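For illustration only, the standard pinhole-stereo relation (a textbook formula, not something prescribed by this disclosure) can be sketched as follows to show why disparity stops being informative for distant objects:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole-stereo relation Z = f * B / d.

    As the disparity d of a distant object approaches zero, the estimated
    distance diverges, which is why binocular parallax becomes an
    ineffective depth cue for far-away objects.
    """
    if disparity_px <= 0:
        return float("inf")  # effectively too far to judge by disparity
    return focal_px * baseline_m / disparity_px
```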
The display method of the invention uses eye tracking technology to enhance the user's sense of depth when viewing an image; eye tracking technology can estimate the fixation point by means of an eye tracker and an image recognition algorithm. Eye tracking, which may also be referred to as gaze tracking, estimates the line of sight and/or fixation point of the eye by measuring eye movement. The line of sight may be understood as a three-dimensional vector, and the fixation point may be understood as the two-dimensional coordinates of that three-dimensional vector on a plane, such as the plane being gazed at.
The invention can realize eye tracking by the pupil-cornea reflection method, which belongs to the optical recording methods, and can also estimate eye movement by methods that are not based on eye images, such as contact or non-contact sensors (for example, electrodes or capacitive sensors).
The optical recording method uses a camera or video camera to record the eye movement of a subject, that is, to acquire eye images reflecting the eye movement, and extracts eye features from the acquired eye images to establish a model for line-of-sight/fixation-point estimation. The eye features may include: pupil location, pupil shape, iris location, iris shape, eyelid location, canthus location, light spot (also known as Purkinje spot) location, and the like.
The working principle of the pupil-cornea reflection method can be summarized as follows: acquire an eye image; estimate the line of sight/fixation point from the eye image.
The hardware requirements of the pupil-cornea reflection method are as follows:
(1) light source: generally an infrared light source, because infrared light does not affect the vision of the eyes; there may be a plurality of infrared light sources arranged in a predetermined pattern, such as a triangular arrangement and/or an in-line arrangement;
(2) an image acquisition device: such as an infrared camera device, an infrared image sensor, a camera or a video camera, etc.
The pupil-cornea reflection method can be implemented as follows:
1. Eye image acquisition: the light source illuminates the eye, the eye is photographed by the image acquisition equipment, and the reflection point of the light source on the cornea, namely the light spot (also called the Purkinje spot), is captured accordingly, so that an eye image with the light spot is obtained.
2. Gaze/fixation point estimation: when the eyeball rotates, the relative position between the pupil center and the light spot changes, and the successively acquired eye images with light spots reflect this positional change.
3. The line of sight/fixation point is then estimated from this positional change relationship (a minimal sketch is given below).
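A minimal sketch of steps 1-3, assuming a calibrated polynomial mapping from the pupil-glint vector to screen coordinates (the regression form below is an illustrative choice, not a method required by this disclosure):

```python
import numpy as np

def pupil_glint_vector(pupil_center, glint_center):
    # Relative position of the pupil center and the corneal glint; this vector
    # changes as the eyeball rotates, which is the signal used for estimation.
    return np.asarray(pupil_center, float) - np.asarray(glint_center, float)

def fit_gaze_mapping(vectors, screen_points):
    # Calibration: least-squares fit of a second-order polynomial that maps
    # pupil-glint vectors to known on-screen calibration targets.
    v = np.asarray(vectors, float)
    a = np.column_stack([np.ones(len(v)), v[:, 0], v[:, 1],
                         v[:, 0] * v[:, 1], v[:, 0] ** 2, v[:, 1] ** 2])
    coeffs, *_ = np.linalg.lstsq(a, np.asarray(screen_points, float), rcond=None)
    return coeffs  # shape (6, 2)

def estimate_gaze_point(vector, coeffs):
    x, y = vector
    features = np.array([1.0, x, y, x * y, x ** 2, y ** 2])
    return features @ coeffs  # (gaze_x, gaze_y) on the display plane
```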
As shown in fig. 1, a display method according to a first embodiment of the present invention includes the following steps:
and S110, acquiring the gazing information of the user on the current image.
The scene of the invention can be a scene for watching the augmented reality image by the user or a scene for watching the virtual reality video by the user. Such as a scene in which a user is viewing virtual reality video through a VR device.
In this embodiment, the user may be a person who is currently viewing the image. The current image may be the image at which the user is currently gazing. Gaze information may be understood as information representing the state of the user's eyes when gazing at the current image. Gaze information includes, but is not limited to, line-of-sight information and gaze point information. The line-of-sight information may be information representing the user's line of sight, such as a direction. The gaze point information may be information representing the user's gaze point, such as coordinates. The gaze information may be obtained by a gaze tracking device, which may be mounted on the device displaying the current image, such as a VR or AR device.
The invention can obtain the user's gaze information on the current image by the pupil-cornea reflection method, and can also obtain the gaze information by other means. For example, the eye tracking device can be a MEMS device, including a MEMS infrared scanning reflector, an infrared light source and an infrared receiver; in another embodiment, the eye tracking device may be a capacitive sensor that detects eye movement through the capacitance between the eyes and a capacitive plate; in yet another embodiment, the eye tracking device may be a myoelectric current detector that detects eye movement from the detected myoelectric signal pattern, for example by placing electrodes at the bridge of the nose, the forehead, the ears or the earlobes. This is not limited herein.
S120, determining a corresponding target depth plane based on the gaze information.
The target depth plane may be understood as a depth plane to which the gaze information corresponds in the current image. For example, in the case that the gaze information is gaze point information, the target depth plane may be considered as a depth plane where a target object corresponding to the gaze point information on the current image is located.
It will be appreciated that a plurality of objects may be included in the current image, each object being pre-populated with object information that may be used to identify the object. The object information includes position information and depth information, which can be regarded as information indicating the depth of the object in the current image. Each depth information may correspond to a depth plane, and thus, each object may correspond to a depth plane. The target depth plane may be considered to be the depth plane of the object at which the user is currently gazing. The target object may be considered to be the object at which the user is currently gazing.
When determining the target depth plane, the gaze information can be matched against the position information in the object information of the objects included in the current image, the object information corresponding to the gaze information is determined, and the target depth plane is determined based on the depth information in that object information.
In one embodiment, in the case that the gaze information is gaze point information, the present invention may compare the gaze point information with the position information in the object information of the objects included in the current image, for example by coordinate comparison. The object in the current image whose position information equals the gaze point information, or deviates from it within a set range, is taken as the target object, and the depth plane of the target object is taken as the target depth plane.
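A minimal sketch of this coordinate comparison, assuming each object in the current image carries pre-set position and depth information (the field names and the tolerance value are illustrative assumptions):

```python
def find_target_object(gaze_point, objects, tolerance=0.05):
    """Return the object whose position matches the gaze point within the set
    deviation range; the depth plane of that object is the target depth plane."""
    gx, gy = gaze_point
    best, best_err = None, tolerance
    for obj in objects:  # each obj is e.g. {"position": (x, y), "depth": 2.0}
        ox, oy = obj["position"]
        err = max(abs(gx - ox), abs(gy - oy))
        if err <= best_err:
            best, best_err = obj, err
    return best  # None if no object lies within the set range
```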
S130, adjusting display parameters of the remaining depth planes, wherein the display parameters of the remaining depth planes are determined according to the distances between the remaining depth planes and the target depth plane, and the remaining depth planes are the depth planes on the current image other than the target depth plane.
In order to improve the sense of depth of the current image, after the target depth plane is determined, the display parameters of the remaining depth planes can be adjusted. The number of the remaining depth planes can be at least one, and in the case of at least two remaining depth planes, the adjusted display parameters of the remaining depth planes may be the same as or different from one another.
The display parameter may be considered a parameter that determines the display effect. Display parameters include, but are not limited to, pixel values and blur radius. Different display parameters may be adjusted by different means, which are not limited here as long as the clarity of the remaining depth planes is lower than that of the target depth plane.
Specifically, the display parameters of the remaining depth planes can be determined based on the distances between the remaining depth planes and the target depth plane. Taking the blur radius as an example, the larger the distance between a remaining depth plane and the target depth plane, the larger the blur radius of that remaining depth plane may be; the smaller the distance, the smaller the blur radius may be. The specific value of the blur radius of the remaining depth planes is not limited, as long as the distance between a remaining depth plane and the target depth plane is proportional to its blur radius. When the display parameter is a pixel value, the distances of the remaining depth planes from the target depth plane are inversely proportional to the pixel value.
The distances between the remaining depth planes and the target depth plane can be determined directly by depth analysis of the current image, or determined based on the absolute distance information between the remaining depth planes and the user and the absolute distance information between the target depth plane and the user.
After the display parameters of the other depth planes are adjusted, the display parameters of the other depth planes are different from the display parameters of the target depth plane, so that the depth sense of the current image is improved.
In one embodiment, the display parameter includes a blur radius. The blur radius is proportional to the degree of blurring of the image. The adjustment of the blur radius can be realized by a Gaussian blur algorithm.
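A minimal sketch of this adjustment using OpenCV's Gaussian blur, assuming the current image has already been split into per-plane layers; the proportionality constant k is an illustrative choice, not a value specified by this disclosure:

```python
import cv2

def render_with_depth_blur(plane_layers, target_distance, k=2.0):
    """Blur each remaining depth plane with a radius proportional to its distance
    from the target (gazed-at) depth plane; the target plane itself stays sharp.

    plane_layers: list of (layer_image, absolute_distance) tuples.
    """
    rendered = []
    for image, distance in plane_layers:
        radius = int(round(k * abs(distance - target_distance)))
        if radius > 0:
            ksize = 2 * radius + 1  # Gaussian kernel size must be odd
            image = cv2.GaussianBlur(image, (ksize, ksize), 0)
        rendered.append(image)
    return rendered
```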
The display method provided by the embodiment of the invention comprises: firstly, acquiring gaze information of a user on a current image; then determining a corresponding target depth plane based on the gaze information; and finally, adjusting display parameters of the remaining depth planes, wherein the display parameters of the remaining depth planes are determined according to the distances between the remaining depth planes and the target depth plane, and the remaining depth planes are the depth planes on the current image other than the target depth plane. By this method, the sense of depth of the current image viewed by the user can be enhanced.
On the basis of the above-described embodiment, a modified embodiment of the above-described embodiment is proposed, and it is to be noted herein that, in order to make the description brief, only the differences from the above-described embodiment are described in the modified embodiment.
In one embodiment, in case the number of the remaining depth planes is at least two, the blur radius of each remaining depth plane is proportional to the distance of the remaining depth plane from the target depth plane.
When the blur radius of each remaining depth plane is proportional to the distance between that depth plane and the target depth plane, the remaining depth planes that are farther from the target depth plane are guaranteed to be blurrier, which improves the stereoscopy and the sense of depth of the current image.
In one embodiment, the distance of the remaining depth plane from the target depth plane is determined by a difference of the distance information of the remaining depth plane and the distance information of the target depth plane.
The distance information of the remaining depth planes may be understood as absolute distance information of the remaining depth planes from the user. The distance information of the target depth plane may be understood as absolute distance information of the target depth plane from the user.
When the display parameters of the remaining depth planes are adjusted, the difference between the distance information of the remaining depth planes and the distance information of the target depth plane may be used as the distance between the remaining depth planes and the target depth plane.
Example two
Fig. 2 is a schematic flow chart of a display method according to a second embodiment of the present invention, and the second embodiment is optimized based on the above embodiments. In this embodiment, the gazing information specifically includes gazing point information, and correspondingly, determining a corresponding target depth plane based on the gazing information includes:
determining a target object corresponding to the gazing point information on the current image;
and determining the depth plane where the target object is located as the target depth plane. Since an object is not necessarily planar but may also be three-dimensional, the depth plane of a three-dimensional object may be taken as the plane passing through the point of the object closest to the user, or the plane passing through the center of the object, or the plane of any surface of the three-dimensional object, which is not limited herein.
Further, the present embodiment further includes determining a depth plane and corresponding distance information included in each frame of image in the virtual reality or augmented reality video, where the current image is an image currently displayed in the virtual reality or augmented reality video, and the distance information is absolute distance information between the corresponding depth plane and the user.
Please refer to the first embodiment for a detailed description of the present embodiment.
As shown in fig. 2, a display method provided in the second embodiment of the present invention includes the following steps:
S210, determining a depth plane and corresponding distance information contained in each frame of image in a virtual reality or augmented reality video, wherein the current image is an image currently displayed in the virtual reality or augmented reality video, and the distance information is absolute distance information between the corresponding depth plane and the user.
The current image may be a frame image in a virtual reality or augmented reality video. Before displaying the current image, the present invention may process each frame of image in the virtual reality or augmented reality video, and determine the object information included in each image, where the object information may be information preset in the image, such as the depth plane and the corresponding distance information included in each image.
Virtual reality video may be thought of as video presented in virtual reality technology. Augmented reality video may be thought of as video presented in augmented reality technology. The depth plane comprised by the image may be determined from depth information of objects comprised in the image. The depth information of each object may be obtained by processing the image, or directly reading the depth information of each object in the image acquired by the depth camera, which is not limited herein as long as the depth information of each object in the image can be read. The invention can take the plane corresponding to each piece of different depth information as a depth plane, thereby splitting the image to include a plurality of depth planes.
After determining the depth planes included in the image, a corresponding distance information may be determined for each depth plane, and the distance information may be absolute distance information between the depth plane and the user, and how to determine the absolute distance information between the depth plane and the user is not limited herein, such as may be determined according to the depth information of each depth plane and the size of the display device. The distance information for each depth plane is determined, e.g., based on the distance of the plane of the current image displayed by the display device from the user's eyes and the depth information for each depth plane.
S220, acquiring the gaze information of the user on the current image.
S230, determining a target object corresponding to the gazing point information on the current image.
When the target depth plane is determined, the target object corresponding to the fixation point information on the current image can be determined in a coordinate comparison mode. For example, each object in the current image is traversed, and an object with the same coordinate as the gaze information or a deviation within a certain range is taken as the target object.
S240, determining the depth plane where the target object is located as a target depth plane.
After the target object is determined, the depth plane where the target object is located can be used as a target depth plane, namely the depth plane which is watched by the user at present.
S250, adjusting the display parameters of the remaining depth planes.
The invention is described below by way of example:
In the development of three-dimensional virtual scenes (such as VR videos), the depth perception of the visual system needs to be fully exploited to create a stronger sense of stereoscopy and depth. In existing three-dimensional virtual scenes, the user's visual system mainly relies on binocular parallax to perceive depth; when observing a distant object in the scene, the visual axes are close to parallel and the binocular parallax is nearly zero, so this source of depth cues becomes ineffective. At that point, the user can only perceive depth from experience, through image information such as the relative size and perspective of objects, which greatly affects the stereoscopy and sense of depth of the three-dimensional virtual scene.
Meanwhile, in the practical application of the three-dimensional virtual scene (such as VR head display), the problem that the field angle is too small often exists, so that few things are displayed in the monocular vision range. At this point, if an object in the scene is seen by the user with only one eye and not the other, it is likely difficult for the user to determine the depth of the object, which in turn affects the experience in the scene.
In existing three-dimensional virtual scenes, because the scene presents a fixed-focus picture, the user cannot obtain the depth cues of focusing and defocusing across different depth planes. In this case, if the user also cannot perceive depth through binocular parallax, due to problems such as a large absolute distance to an object or a small field of view, the experience of games, interaction and the like in the three-dimensional virtual scene is seriously affected.
In this method, the scene picture is preprocessed by marking the absolute distance information of the different depth planes in the three-dimensional virtual scene. The user's gaze point information is then obtained based on eye tracking technology, and the absolute distance information of the depth plane where the gaze point is located is obtained from the gaze point position. In this way, focusing and defocusing depth cues can be provided to the user, the shortcomings of the existing depth cues are effectively compensated, and the stereoscopy and sense of depth experienced by the user in the three-dimensional virtual scene are greatly enhanced.
The display method provided by the invention can be used for the depth perception of a three-dimensional virtual scene, and the method can comprise the following steps:
step one, preprocessing three-dimensional virtual scene image
In the three-dimensional virtual scene, image regions located at different depth planes are segmented frame by frame. Absolute distance information is then marked on each image region according to the depth of the plane in which that region lies.
The depth information of the specific object in each region may be contained in the image in advance.
Fig. 2a is a schematic diagram illustrating an effect of image preprocessing according to a second embodiment of the present invention, and referring to fig. 2a, after an image is segmented, a first object 1, a second object 2, and a third object 3 located at different depth planes are obtained. The terms "first", "second", and "third" are used merely to distinguish the corresponding contents, and are not used to limit the order or interdependence relationship.
Depth information of each object may be included in the image in advance, and absolute distance information of each object with respect to the user may be determined based on the depth information of each object. Fig. 2b is a schematic view of a scene including a plurality of depth planes according to a second embodiment of the present invention, and referring to fig. 2b, distance information of a depth plane corresponding to a first object 1 is absolute distance information a between the first object 1 and a user 4, distance information of a depth plane corresponding to a second object 2 is absolute distance information b between the second object 2 and the user 4, and distance information of a depth plane corresponding to a third object 3 is absolute distance information c between the third object 3 and the user 4. As can be seen from fig. 2b, c > b > a, i.e. the absolute distance of the third object 3 from the user 4 is the farthest and the absolute distance of the first object 1 from the user 4 is the closest.
Taking fig. 2b as an example, when the user 4 gazes at the first object 1, the depth plane of the first object 1 is the target depth plane, and the display parameters of the depth plane of the second object 2 can be adjusted according to the distance of the second object 2 from the target depth plane. The display parameters of the depth plane of the third object 3 may be adjusted depending on the distance of the third object 3 from the target depth plane. Since the distance of the depth plane of the second object 2 from the target depth plane is smaller than the distance of the depth plane of the third object 3 from the target depth plane, the size of the adjustment of the display parameter of the second object 2 is smaller than the size of the adjustment of the display parameter of the third object 3, so that the second object 2 is clearer compared to the third object 3 when the user looks at the first object 1.
Referring to fig. 2b, the sharpness is represented by the density of the fill pattern: the denser the fill in fig. 2b, the higher the sharpness, and the sparser the fill, the lower the sharpness. The distance between the depth plane of the first object 1 and the depth plane of the second object 2 is smaller than the distance between the depth plane of the first object 1 and the depth plane of the third object 3, so the first object 1 appears sharper when the user gazes at the second object 2 than when the user gazes at the third object 3.
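Continuing the illustration of fig. 2b with hypothetical numeric distances (the values and image sizes below are made up for illustration; render_with_depth_blur refers to the sketch given earlier):

```python
import numpy as np

# Hypothetical absolute distances for fig. 2b, with a < b < c (in metres).
a, b, c = 1.0, 2.0, 4.0
layers = [(np.zeros((64, 64, 3), np.uint8), d) for d in (a, b, c)]

# The user gazes at the first object, so its plane is the target depth plane.
rendered = render_with_depth_blur(layers, target_distance=a)
# The second plane is blurred with a radius proportional to (b - a) = 1.0 and the
# third with a radius proportional to (c - a) = 3.0, so the third object appears
# blurrier than the second while the first stays sharp, as described above.
```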
Step two, obtaining the information of the fixation point
When a user experiences a three-dimensional virtual scene, the real-time fixation point information of the user can be obtained through the eye tracker, and then the depth plane where the gazed image area is located is judged.
The eye tracker may be located on the VR device.
Step three, presenting focusing and defocusing effects of planes with different depths
The real-time image of the three-dimensional virtual scene is focused on the depth plane where the user's fixation point is located, and image regions located on other depth planes present different out-of-focus states according to the difference in absolute distance. At this time, in the three-dimensional virtual scene in front of the user, only the depth plane corresponding to the gazed-at object is sharp, and objects in other depth planes present blurred states of different degrees according to the absolute distance difference from the gazed-at depth plane.
In one embodiment, the smaller the absolute distance from the depth plane where the fixation point is located, the sharper the display, and the larger that distance, the blurrier the display.
According to the invention, through the segmentation and marking of the three-dimensional virtual scene images combined with eye tracking technology, the human visual system obtains the depth cues of focusing and defocusing when observing the three-dimensional virtual scene. The method provides focusing and defocusing depth cues to the user in the three-dimensional virtual scene, effectively compensates for the lack of such depth cues caused by the fixed-focus picture used in existing scenes, and greatly enhances the stereoscopy and sense of depth experienced by the user in the three-dimensional virtual scene.
The display method provided by the second embodiment of the invention embodies the operation of determining the target depth plane and the operation of determining the depth plane and the corresponding distance information. By the method, the stereoscopic impression and depth impression of the virtual reality or augmented reality video can be improved.
The embodiments of the present invention provide several specific implementation manners based on the technical solutions of the above embodiments.
As a specific embodiment of this embodiment, the determining a depth plane included in each frame of image in the virtual reality or augmented reality video includes:
acquiring a target image in the virtual reality or augmented reality video frame by frame;
acquiring depth information of an object included in the target image;
and segmenting the target image based on the depth information to obtain at least one depth plane, and determining the distance information of each depth plane obtained by segmentation according to the depth information.
When determining the depth planes, the method can acquire the images in the virtual reality or augmented reality video frame by frame as target images, and acquire, for each target image, the depth information of the objects included in that target image, where each object can correspond to one piece of depth information. After the depth information is determined, the target image can be segmented based on each piece of depth information to obtain at least one depth plane, and the number of depth planes can be determined based on the number of distinct pieces of depth information. When a plurality of pieces of depth information are the same, they may be counted as one, i.e., they correspond to a single depth plane.
The invention can divide the target image to comprise a plurality of depth planes according to the depth information, and the distance information of each depth plane can be determined by the depth information. For example, the distance information of the depth plane is determined by the difference of the depth information corresponding to the depth plane.
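A minimal sketch of this frame-by-frame segmentation, assuming a per-pixel depth map is available for each frame; the quantization step and the way absolute distance is derived from the display distance are illustrative assumptions, not values fixed by this disclosure:

```python
import numpy as np

def split_into_depth_planes(frame, depth_map, step=0.5, display_distance=1.0):
    """Group pixels whose (quantized) depth information is the same into one depth
    plane and attach absolute distance information to each resulting plane."""
    quantized = np.round(depth_map / step) * step
    planes = []
    for depth_value in np.unique(quantized):
        mask = quantized == depth_value
        layer = np.zeros_like(frame)
        layer[mask] = frame[mask]
        planes.append({
            "layer": layer,
            "mask": mask,
            # Illustrative assumption: absolute distance to the user is the
            # eye-to-display distance plus the depth encoded for this plane.
            "distance": display_distance + float(depth_value),
        })
    return planes
```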
In one embodiment, the depth information of objects included in the same depth plane is the same.
Example three
Fig. 3 is a schematic structural diagram of a display device according to a third embodiment of the present invention, where the display device is suitable for a situation of improving a sense of depth of an image, where the display device may be implemented by software and/or hardware and is generally integrated on a terminal device.
As shown in fig. 3, the apparatus includes: an acquisition module 31, a determination module 32 and an adjustment module 33;
an obtaining module 31, configured to obtain gazing information of a user on a current image;
a determining module 32 for determining a corresponding target depth plane based on the gaze information;
and an adjusting module 33, configured to adjust display parameters of the remaining depth planes, where the display parameters of the remaining depth planes are determined according to distances between the remaining depth planes and the target depth plane, and the remaining depth planes are depth planes on the current image except for the target depth plane.
In this embodiment, the apparatus first acquires the user's gaze information on the current image through the acquisition module 31; then determines a corresponding target depth plane based on the gaze information through the determining module 32; and finally adjusts the display parameters of the remaining depth planes through the adjusting module 33, wherein the display parameters of the remaining depth planes are determined according to the distances between the remaining depth planes and the target depth plane, and the remaining depth planes are the depth planes on the current image other than the target depth plane.
The present embodiment provides a display device capable of enhancing a sense of depth of a current image viewed by a user.
Further, the gazing information comprises gazing point information; correspondingly, the determining a corresponding target depth plane based on the gaze information includes:
determining a target object corresponding to the gazing point information on the current image;
and determining the depth plane where the target object is located as a target depth plane.
Further, the display parameter includes a blur radius.
Further, in the case that the number of the remaining depth planes is at least two, the blur radius of each of the remaining depth planes is proportional to the distance of the remaining depth plane from the target depth plane.
Further, the distance between the remaining depth plane and the target depth plane is determined by a difference between the distance information of the remaining depth plane and the distance information of the target depth plane.
Further, the apparatus further comprises: an information determination module to:
determining a depth plane and corresponding distance information contained in each frame of image in a virtual reality or augmented reality video, wherein the current image is the image currently displayed in the virtual reality or augmented reality video, and the distance information is the absolute distance information between the corresponding depth plane and the user.
Further, the information determining module is configured to:
acquiring a target image in the virtual reality or augmented reality video frame by frame;
acquiring depth information of an object included in the target image;
and segmenting the target image based on the depth information to obtain at least one depth plane, and determining the distance information of each depth plane obtained by segmentation according to the depth information.
Further, the depth information of the objects included in the same depth plane is the same.
The display device can execute the display method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present invention. As shown in fig. 4, a terminal device provided in the fourth embodiment of the present invention includes: one or more processors 41 and storage 42; the processor 41 in the terminal device may be one or more, and one processor 41 is taken as an example in fig. 4; storage 42 is used to store one or more programs; the one or more programs are executed by the one or more processors 41 such that the one or more processors 41 implement a method according to any one of the embodiments of the present invention.
The terminal device may further include: an input device 43 and an output device 44.
The processor 41, the storage device 42, the input device 43 and the output device 44 in the terminal equipment may be connected by a bus or other means, and the connection by the bus is exemplified in fig. 4.
The storage device 42 in the terminal device is used as a computer-readable storage medium for storing one or more programs, which may be software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the methods provided in the first or second embodiment of the present invention (for example, the modules in the display device shown in fig. 3 include the obtaining module 31, the determining module 32, and the adjusting module 33). The processor 41 executes various functional applications and data processing of the terminal device by executing software programs, instructions and modules stored in the storage device 42, that is, implements the display method in the above-described method embodiment.
The storage device 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal device, and the like. Further, the storage 42 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, storage 42 may further include memory located remotely from processor 41, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 43 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. The output device 44 may include a display device such as a display screen.
And, when the one or more programs included in the above-mentioned terminal device are executed by the one or more processors 41, the programs perform the following operations:
acquiring gaze information of a user on a current image;
determining a corresponding target depth plane based on the gaze information;
and adjusting display parameters of other depth planes, wherein the display parameters of the other depth planes are determined according to the distances between the other depth planes and the target depth plane, and the other depth planes are depth planes on the current image except the target depth plane.
Example five
An embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program is used, when executed by a processor, to execute a display method provided by the present invention, where the method includes:
acquiring gaze information of a user on a current image;
determining a corresponding target depth plane based on the gaze information;
and adjusting display parameters of other depth planes, wherein the display parameters of the other depth planes are determined according to the distances between the other depth planes and the target depth plane, and the other depth planes are depth planes on the current image except the target depth plane.
Optionally, the program may be further configured to perform a display method provided in any of the embodiments of the present invention when executed by a processor.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take a variety of forms, including, but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (11)
1. A display method, comprising:
acquiring gaze information of a user on a current image;
determining a corresponding target depth plane based on the gaze information;
and adjusting display parameters of other depth planes, wherein the display parameters of the other depth planes are determined according to the distances between the other depth planes and the target depth plane, and the other depth planes are depth planes on the current image except the target depth plane.
2. The method of claim 1, wherein the gaze information comprises gaze point information; correspondingly, the determining a corresponding target depth plane based on the gaze information includes:
determining a target object corresponding to the gaze point information on the current image;
and determining the depth plane where the target object is located as a target depth plane.
3. The method of claim 1, wherein the display parameter comprises a blur radius.
4. The method of claim 3, wherein, in the case that the number of the remaining depth planes is at least two, the blur radius of each remaining depth plane is proportional to the distance of the remaining depth plane from the target depth plane.
5. The method of claim 1, wherein the distance of the remaining depth plane from the target depth plane is determined by a difference between the distance information of the remaining depth plane and the distance information of the target depth plane.
6. The method of claim 1, further comprising:
determining depth planes contained in each frame of image in a virtual reality or augmented reality video and corresponding distance information, wherein the current image is the image currently displayed in the virtual reality or augmented reality video, and the distance information is the absolute distance information between the corresponding depth plane and the user.
7. The method of claim 6, wherein the determining the depth plane included in each frame of image in the virtual reality or augmented reality video comprises:
acquiring a target image in the virtual reality or augmented reality video frame by frame;
acquiring depth information of an object included in the target image;
and segmenting the target image based on the depth information to obtain at least one depth plane, and determining the distance information of each depth plane obtained by segmentation according to the depth information.
8. The method of claim 7, wherein the depth information of objects included in the same depth plane is the same.
9. A display device, comprising:
an acquisition module, configured to acquire gaze information of a user on a current image;
a determination module, configured to determine a corresponding target depth plane based on the gaze information;
and an adjustment module, configured to adjust display parameters of remaining depth planes, wherein the display parameters of the remaining depth planes are determined according to distances between the remaining depth planes and the target depth plane, and the remaining depth planes are depth planes on the current image other than the target depth plane.
10. A terminal device, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the method according to any one of claims 1 to 8.
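Read together, method claims 1-8 describe a small pipeline: segment each frame into depth planes according to depth information, determine the plane the user is gazing at, and blur every remaining plane in proportion to its distance from that target plane. The Python sketch below is only a minimal illustration of that pipeline under simplifying assumptions, not the patented implementation: the `DepthPlane` structure, the depth-binning rule in `segment_into_planes`, the proportionality constant `k` and the sample scene are all illustrative choices introduced here.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class DepthPlane:
    distance: float                                    # absolute distance between this plane and the user
    objects: List[str] = field(default_factory=list)   # objects assigned to this plane


def segment_into_planes(object_depths: Dict[str, float], bin_size: float = 1.0) -> List[DepthPlane]:
    """Group objects whose depths fall into the same bin onto one depth plane,
    so that objects on the same plane share (approximately) the same depth."""
    planes: Dict[float, DepthPlane] = {}
    for name, depth in object_depths.items():
        key = round(depth / bin_size) * bin_size       # quantise the depth to a plane distance
        planes.setdefault(key, DepthPlane(distance=key)).objects.append(name)
    return sorted(planes.values(), key=lambda p: p.distance)


def find_target_plane(planes: List[DepthPlane], gazed_object: str) -> Optional[DepthPlane]:
    """Return the depth plane containing the object at the user's gaze point, if any."""
    for plane in planes:
        if gazed_object in plane.objects:
            return plane
    return None


def blur_radii(planes: List[DepthPlane], target: DepthPlane, k: float = 1.0) -> Dict[float, float]:
    """Keep the target plane sharp (blur radius 0) and give every remaining plane a blur
    radius proportional to its distance from the target plane, taken as the difference
    between the two planes' distance values."""
    return {
        plane.distance: 0.0 if plane is target else k * abs(plane.distance - target.distance)
        for plane in planes
    }


if __name__ == "__main__":
    depths = {"cup": 0.9, "table": 1.1, "sofa": 3.0, "window": 6.2}
    planes = segment_into_planes(depths)               # planes at distances 1.0, 3.0 and 6.0
    target = find_target_plane(planes, gazed_object="sofa")
    print(blur_radii(planes, target))                  # {1.0: 2.0, 3.0: 0.0, 6.0: 3.0}
```

In an actual renderer the per-plane blur radius would drive a depth-of-field or defocus pass; here it is simply returned as a dictionary keyed by plane distance so that the proportionality to the distance difference is easy to inspect.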
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010130618.5A CN113325947A (en) | 2020-02-28 | 2020-02-28 | Display method, display device, terminal equipment and storage medium |
JP2022551634A JP2023515205A (en) | 2020-02-28 | 2021-02-19 | Display method, device, terminal device and computer program |
PCT/CN2021/076919 WO2021169853A1 (en) | 2020-02-28 | 2021-02-19 | Display method and apparatus, and terminal device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010130618.5A CN113325947A (en) | 2020-02-28 | 2020-02-28 | Display method, display device, terminal equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113325947A true CN113325947A (en) | 2021-08-31 |
Family
ID=77412782
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010130618.5A Pending CN113325947A (en) | 2020-02-28 | 2020-02-28 | Display method, display device, terminal equipment and storage medium |
Country Status (3)
Country | Link |
---|---|
JP (1) | JP2023515205A (en) |
CN (1) | CN113325947A (en) |
WO (1) | WO2021169853A1 (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001125944A (en) * | 1999-10-29 | 2001-05-11 | Hitachi Information & Control Systems Inc | Method and device for simulating fitting |
JP4443083B2 (en) * | 2001-10-09 | 2010-03-31 | 株式会社バンダイナムコゲームス | Image generation system and information storage medium |
EP3149528B1 (en) * | 2014-05-30 | 2023-06-07 | Magic Leap, Inc. | Methods and system for creating focal planes in virtual and augmented reality |
US10025060B2 (en) * | 2015-12-08 | 2018-07-17 | Oculus Vr, Llc | Focus adjusting virtual reality headset |
WO2018214067A1 (en) * | 2017-05-24 | 2018-11-29 | SZ DJI Technology Co., Ltd. | Methods and systems for processing an image |
CN111580671A (en) * | 2020-05-12 | 2020-08-25 | Oppo广东移动通信有限公司 | Video image processing method and related device |
- 2020-02-28: CN application CN202010130618.5A filed (published as CN113325947A), status pending
- 2021-02-19: JP application JP2022551634A filed (published as JP2023515205A), status pending
- 2021-02-19: WO application PCT/CN2021/076919 filed (published as WO2021169853A1), application filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105812778A (en) * | 2015-01-21 | 2016-07-27 | 成都理想境界科技有限公司 | Binocular AR head-mounted display device and information display method therefor |
CN109154723A (en) * | 2016-03-25 | 2019-01-04 | 奇跃公司 | Virtual and augmented reality system and method |
CN110679147A (en) * | 2017-03-22 | 2020-01-10 | 奇跃公司 | Depth-based foveated rendering for display systems |
CN110555873A (en) * | 2018-05-30 | 2019-12-10 | Oppo广东移动通信有限公司 | Control method, control device, terminal, computer device, and storage medium |
CN110727111A (en) * | 2019-10-23 | 2020-01-24 | 深圳惠牛科技有限公司 | Head-mounted display optical system and head-mounted display equipment |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115562497A (en) * | 2022-11-04 | 2023-01-03 | 浙江舜为科技有限公司 | Augmented reality information interaction method, augmented reality device, and storage medium |
CN115562497B (en) * | 2022-11-04 | 2024-04-05 | 浙江舜为科技有限公司 | Augmented reality information interaction method, augmented reality device, and storage medium |
CN116850012A (en) * | 2023-06-30 | 2023-10-10 | 广州视景医疗软件有限公司 | Visual training method and system based on binocular vision |
CN116850012B (en) * | 2023-06-30 | 2024-03-12 | 广州视景医疗软件有限公司 | Visual training method and system based on binocular vision |
CN117880630A (en) * | 2024-03-13 | 2024-04-12 | 杭州星犀科技有限公司 | Focusing depth acquisition method, focusing depth acquisition system and terminal |
CN117880630B (en) * | 2024-03-13 | 2024-06-07 | 杭州星犀科技有限公司 | Focusing depth acquisition method, focusing depth acquisition system and terminal |
Also Published As
Publication number | Publication date |
---|---|
WO2021169853A1 (en) | 2021-09-02 |
JP2023515205A (en) | 2023-04-12 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN110187855B (en) | Intelligent adjusting method for near-eye display equipment for avoiding blocking sight line by holographic image | |
CN109086726B (en) | Local image identification method and system based on AR intelligent glasses | |
CN108427503B (en) | Human eye tracking method and human eye tracking device | |
WO2020015468A1 (en) | Image transmission method and apparatus, terminal device, and storage medium | |
US10241329B2 (en) | Varifocal aberration compensation for near-eye displays | |
US11675432B2 (en) | Systems and techniques for estimating eye pose | |
WO2021169853A1 (en) | Display method and apparatus, and terminal device and storage medium | |
WO2017183346A1 (en) | Information processing device, information processing method, and program | |
JP6454851B2 (en) | 3D gaze point location algorithm | |
CN112805659A (en) | Selecting depth planes for a multi-depth plane display system by user classification | |
US10936059B2 (en) | Systems and methods for gaze tracking | |
KR101788452B1 (en) | Apparatus and method for replaying contents using eye tracking of users | |
JP7081599B2 (en) | Information processing equipment, information processing methods, and programs | |
CN111710050A (en) | Image processing method and device for virtual reality equipment | |
CN111886564A (en) | Information processing apparatus, information processing method, and program | |
CN109901290B (en) | Method and device for determining gazing area and wearable device | |
CN111880654A (en) | Image display method and device, wearable device and storage medium | |
EP3945401A1 (en) | Gaze tracking system and method | |
CN111654688B (en) | Method and equipment for acquiring target control parameters | |
CN106708249A (en) | Interactive method, interactive apparatus and user equipment | |
CN117372475A (en) | Eyeball tracking method and electronic equipment | |
RU2815753C1 (en) | Display method and device, terminal device and data storage medium | |
CN115914603A (en) | Image rendering method, head-mounted display device and readable storage medium | |
CN113780414B (en) | Eye movement behavior analysis method, image rendering method, component, device and medium | |
US11579690B2 (en) | Gaze tracking apparatus and systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210831 ||