WO2022226224A1 - Immersive viewing experience - Google Patents
Immersive viewing experience
- Publication number
- WO2022226224A1 (PCT/US2022/025818)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- imagery
- specific display
- illustrates
- image
- Prior art date
Links
- 238000000034 method Methods 0.000 claims abstract description 35
- 239000002131 composite material Substances 0.000 claims description 26
- 230000008921 facial expression Effects 0.000 claims description 4
- 210000003128 head Anatomy 0.000 description 13
- 230000006641 stabilisation Effects 0.000 description 9
- 238000011105 stabilization Methods 0.000 description 9
- 239000011521 glass Substances 0.000 description 6
- 230000003190 augmentative effect Effects 0.000 description 4
- 238000003384 imaging method Methods 0.000 description 4
- 238000002271 resection Methods 0.000 description 4
- 238000010408 sweeping Methods 0.000 description 4
- 241000282994 Cervidae Species 0.000 description 3
- 241000086550 Dinosauria Species 0.000 description 3
- 239000003086 colorant Substances 0.000 description 3
- 238000010586 diagram Methods 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 238000009432 framing Methods 0.000 description 2
- 238000012544 monitoring process Methods 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000002093 peripheral effect Effects 0.000 description 2
- 238000000926 separation method Methods 0.000 description 2
- 208000004350 Strabismus Diseases 0.000 description 1
- 230000004308 accommodation Effects 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 230000004424 eye movement Effects 0.000 description 1
- 230000004418 eye rotation Effects 0.000 description 1
- 210000001747 pupil Anatomy 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
Definitions
- Movies are a form of entertainment.
- This patent discloses a system, a method, an apparatus, and software to achieve an improved immersive viewing experience.
- First, a user's viewing parameter is uploaded to a cloud, wherein said cloud stores imagery (which, in the preferred embodiments, comprises extremely large datasets).
- Viewing parameters can include any action, gesture, body position, eye look angle, eye convergence/vergence or input (e.g., via a graphical user interface).
- A user's viewing parameters are characterized (e.g., by a variety of devices, such as eye-facing cameras or cameras that record gestures) and sent to the cloud.
- Second, a set of user-specific imagery is optimized from said imagery, wherein said user-specific imagery is based on at least said viewing parameter.
- The field of view of the user-specific imagery is smaller than that of the imagery.
- the location where a user is looking would have high resolution and the location where the user is not looking would have low resolution. For example, if a user is looking at an object on the left, then the user-specific imagery would be high resolution on the left side. In some embodiments, a user-specific imagery would be streamed in near-real time.
- the user-specific imagery comprises a first portion with a first spatial resolution and a second portion with a second spatial resolution, and wherein said first spatial resolution is higher than said second spatial resolution.
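- As an illustrative, non-limiting sketch of how a user-specific image with two spatial resolutions might be assembled from a gaze location, consider the following Python fragment (the function name, fovea radius, and downsampling factor are assumptions for illustration, not part of the disclosure):

```python
import numpy as np

def make_user_specific_image(full_image, gaze_xy, fovea_radius=256, downsample=4):
    """Return an image that is full resolution near the gaze point (first portion)
    and coarser everywhere else (second portion)."""
    h, w = full_image.shape[:2]
    # Low-resolution base: subsample, then repeat pixels back up to full size.
    low = full_image[::downsample, ::downsample]
    base = np.repeat(np.repeat(low, downsample, axis=0), downsample, axis=1)[:h, :w]
    # Restore full-resolution pixels in a window around the gaze location.
    gx, gy = gaze_xy
    x0, x1 = max(0, gx - fovea_radius), min(w, gx + fovea_radius)
    y0, y1 = max(0, gy - fovea_radius), min(h, gy + fovea_radius)
    base[y0:y1, x0:x1] = full_image[y0:y1, x0:x1]
    return base

# Example: a 4K-class frame with the user looking left of center.
frame = np.random.randint(0, 255, (2160, 3840, 3), dtype=np.uint8)
user_view = make_user_specific_image(frame, gaze_xy=(960, 1080))
```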
- said viewing parameter comprises a viewing location and wherein said viewing location corresponds to said first portion.
- user-specific imagery comprises a first portion with a first zoom setting and a second portion with a second zoom setting, and wherein said first zoom setting is higher than said second zoom setting.
- a first portion is determined by said viewing parameter wherein said viewing parameter comprises at least one of the group consisting of: a position of said user's body; an orientation of said user's body; a gesture of said user's hand; a facial expression of said user; a position of said user's head; and an orientation of said user's head.
- a first portion is determined by a graphical user interface, such as a mouse or controller.
- Some embodiments comprise wherein the imagery comprises a first field of view (FOV) and wherein said user-specific imagery comprises a second field of view, and wherein said first FOV is larger than said second FOV.
- imagery comprises stereoscopic imagery and wherein said stereoscopic imagery is obtained via stereoscopic cameras or stereoscopic camera clusters.
- imagery comprises stitched imagery wherein said stitched imagery is generated by at least two cameras.
- imagery comprises composite imagery, wherein said composite imagery is generated by: taking a first image of a scene with a first set of camera settings wherein said first set of camera settings causes a first object to be in focus and a second object to be out of focus; and taking a second image of the scene with a second set of camera settings wherein said second set of camera settings causes said second object to be in focus and said first object to be out of focus.
- Some embodiments comprise wherein when user looks at said first object, said first image would be presented to said user and when user looks at said second object, said second image would be presented to said user.
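- A minimal sketch of how this focus-dependent presentation could be driven by eye tracking is shown below; the object bounding boxes, file names, and the gaze interface are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class FocusImage:
    name: str
    image_path: str       # frame captured with a particular focus setting
    in_focus_bbox: tuple  # (x0, y0, x1, y1) of the object that is sharp in this frame

def pick_image_for_gaze(gaze_xy, focus_images, default):
    """Return the pre-captured image whose in-focus object contains the gaze point."""
    gx, gy = gaze_xy
    for fi in focus_images:
        x0, y0, x1, y1 = fi.in_focus_bbox
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return fi
    return default

# Two captures of the same scene: one with the near object sharp, one with the far object sharp.
near_sharp = FocusImage("near", "frame_near_focus.png", (100, 700, 600, 1000))
far_sharp = FocusImage("far", "frame_far_focus.png", (1500, 100, 2200, 500))
chosen = pick_image_for_gaze((1800, 300), [near_sharp, far_sharp], default=near_sharp)
print(chosen.name)  # -> "far": the user is looking at the distant object
```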
- Some embodiments
- Some embodiments comprise wherein image stabilization is performed. Some embodiments comprise wherein said viewing parameter comprises convergence. Some embodiments comprise wherein user-specific imagery is 3D imagery wherein said 3D imagery is presented on a HDU, a set of anaglyph glasses or a set of polarized glasses.
- Some embodiments comprise wherein said user-specific imagery is presented to said user on a display wherein said user has at least a 0.5π steradian field of view.
- Some embodiments comprise wherein user-specific imagery is presented on a display.
- the display is a screen (e.g., TV, reflective screen coupled with a projector system, an extended reality head display unit including an augmented reality display, a virtual reality display or a mixed reality display).
- Figure 1 illustrates retrospective display of stereoscopic images.
- Figure 2 illustrates methods to determine which stereo pair to display to a user for a given time point.
- Figure 3 illustrates displaying a video recording on a HDU.
- Figure 4 illustrates a pre-recorded stereo viewing performed by user 1.
- Figure 5 illustrates performing long range stereoscopic imaging of a distant object using stereoscopic camera clusters.
- Figure 6 illustrates a capability of adjusting the images post-acquisition, based on user eye tracking, to bring them into the best possible picture by generating a stereoscopic composite image.
- Figure 7A illustrates an image with motion and the application of image stabilization processing.
- Figure 7B illustrates an image with motion displayed in a HDU.
- Figure 7C illustrates an image stabilization applied to the image using stereoscopic imagery.
- Figure 8A illustrates a left image and a right image with a first camera setting.
- Figure 8B illustrates a left image and a right image with a second camera setting.
- Figure 9A illustrates a top down view of all data gathered of a scene at a time point.
- Figure 9B illustrates a displayed wide angle 2D image frame of the video recording.
- Figure 9C illustrates a top down view of User A’s viewing angle of -70° and 55° FOV.
- Figure 9D illustrates what User A would see given User A's viewing angle of -70° and 55° FOV.
- Figure 9E illustrates a top down view of User B’s viewing angle of +50° and 85° FOV.
- Figure 9F illustrates what User B would see given User B’s viewing angle of +50° and 85° FOV.
- Figure 10A illustrates the field of view captured at a first time point by the left camera.
- Figure 10B illustrates the field of view captured at a first time point by the right camera.
- Figure 10C illustrates a first user’s personalized field of view (FOV) at a given time point.
- Figure 10D illustrates a second user’s personalized field of view (FOV) at a given time point.
- Figure 10E illustrates a third user’s personalized field of view (FOV) at a given time point.
- Figure 10F illustrates a fourth user’s personalized field of view (FOV) at a given time point.
- Figure 11A illustrates a top down view of the first user’s left eye view.
- Figure 11B illustrates a top down view of the first user’s left eye view wherein a convergence point is in close proximity to the left eye and right eye.
- Figure 11C illustrates a left eye view at time point 1 without convergence.
- Figure 11D illustrates a left eye view at time point 2 with convergence.
- Figure 12 illustrates the reconstruction of various stereoscopic images from previously acquired wide angle stereo images.
- Figure 13A illustrates a top down view of a home theater.
- Figure 13B illustrates a side view of the home theater as shown in Figure 13A.
- Figure 14A illustrates a top down view of a home theater.
- Figure 14B illustrates a side view of the home theater as shown in Figure 14A.
- Figure 15A illustrates a near-spherical TV with a user looking straight ahead at time point #1.
- Figure 15B shows the portion of the TV and the field of view being observed by the user at time point #1.
- Figure 15C illustrates a near-spherical TV with a user looking straight ahead at time point #2.
- Figure 15D shows the portion of the TV and the field of view being observed by the user at time point #2.
- Figure 15E illustrates a near-spherical TV with a user looking straight ahead at time point #3.
- Figure 15F shows the portion of the TV and the field of view being observed by the user at time point #3.
- Figure 16A illustrates an un-zoomed image.
- Figure 16B illustrates a digital-type zooming in on a portion of an image.
- Figure 17A illustrates an un-zoomed image.
- Figure 17B illustrates the optical-type zooming in on a portion of an image.
- Figure 18A illustrates a single resolution image.
- Figure 18B illustrates a multi-resolution image.
- Figure 19A illustrates a large field of view wherein a first user is looking at a first portion of the image and a second user is looking at a second portion of the image.
- Figure 19B illustrates that only the first portion of the image in Figure 19A and that the second portion of the image in Figure 19A are high resolution and the remainder of the image is lower resolution.
- Figure 20A illustrates a low resolution image.
- Figure 20B illustrates a high resolution image.
- Figure 20C illustrates a composite image.
- Figure 21 illustrates a method and a process for performing near-real-time streaming of customized images.
- Figure 22A illustrates using resection in conjunction with stereoscopic cameras wherein a first camera location is unknown.
- Figure 22B illustrates using resection in conjunction with stereoscopic cameras wherein an object location is unknown.
- Figure 23A illustrates a top down view of a person looking forward to the center of the screen of the home theater.
- Figure 23B illustrates a top down view of a person looking forward to the right side of the screen of the home theater.
- Figure 24 illustrates a method, system and apparatus for optimizing stereoscopic camera settings during image acquisition during movement.
- step A is to determine a location (e.g., an (α_n, β_n, r_n) coordinate) where a viewer is looking at time point n.
- This location could be a near, medium or far convergence point.
- Note #2 A collection of stereoscopic imagery has been collected and recorded. Step A follows the collection process and takes place at some subsequent time period during viewing by a user.
- 101 illustrates step B, which is to determine a FOV_n corresponding to the location (e.g., the (α_n, β_n, r_n) coordinate) for time point n. (Note: the user has the option to select the FOV.)
- step C, which is to select camera(s) that correspond to the FOV for the left eye, with the option to perform additional image processing (e.g., use a composite image, use a vergence zone), to generate a personalized left eye image at time point n (PLEI_n).
- step D, which is to select camera(s) that correspond to the FOV for the right eye, with the option to perform additional image processing (e.g., use a composite image, use a vergence zone), to generate a personalized right eye image at time point n (PREI_n).
- step E, which is to display PLEI_n on a left eye display of a HDU.
- step F, which is to display PREI_n on a right eye display of a HDU.
- step G, which is to increment the time step to n+1 and go to step A, above.
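- The per-time-point loop of steps A through G could be organized roughly as follows; the tracker, camera-store, and display objects are placeholders assumed for illustration, not an interface defined by this disclosure:

```python
def run_viewing_loop(tracker, camera_store, hdu, start_time=0):
    """Sketch of steps A-G: per time point, build and display personalized
    left/right eye images from previously recorded wide-angle imagery."""
    n = start_time
    while hdu.is_active():
        # Step A: where is the viewer looking (alpha, beta, r) at time point n?
        alpha, beta, r = tracker.gaze_location(n)
        # Step B: field of view for this time point (the user may override it).
        fov = tracker.selected_fov(n)
        # Steps C and D: select the recorded camera(s) covering that FOV for each eye
        # and apply optional processing (composite image, vergence zone).
        plei_n = camera_store.personalized_image("left", alpha, beta, r, fov, n)
        prei_n = camera_store.personalized_image("right", alpha, beta, r, fov, n)
        # Steps E and F: display the personalized pair on the head display unit.
        hdu.show(left=plei_n, right=prei_n)
        # Step G: increment the time step and repeat from step A.
        n += 1
```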
- Figure 2 illustrates methods to determine which stereo pair to display to a user for a given time point.
- 200 illustrates a text box of analyzing the user’s parameters to determine which stereoscopic image to display to the user.
- First, use the viewing direction of a user’s head. For example, if the user’s head is pointed in a forward direction, a first stereo pair could be used, and if the user’s head is pointed toward the left, a second stereo pair could be used.
- Second, use the convergence of the user’s eyes, e.g., convergence corresponding to a near object (e.g., a leaf on a tree) or to a distant object (e.g., a mountain in the distance).
- There is also an option to use a combination of convergence and viewing angle, e.g., a viewing direction toward a near object (e.g., a leaf on a tree) or toward a distant object (e.g., a mountain in the distance).
- Third, use accommodation of the user’s eyes. For example, monitor a user’s pupil size and use the change in size to indicate where (near / far) the user is looking.
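- One non-limiting way to combine the head-direction, convergence, and pupil-size cues above into a single stereo-pair selection is sketched below; the angle and distance thresholds are illustrative assumptions:

```python
def choose_stereo_pair(head_yaw_deg, convergence_m=None, pupil_trend=None):
    """Pick a recorded stereo pair from simple viewing parameters.
    head_yaw_deg: 0 = forward, negative = left, positive = right.
    convergence_m: estimated fixation distance from eye convergence, if available.
    pupil_trend: 'constricting' (near) or 'dilating' (far), as a fallback cue."""
    direction = "left" if head_yaw_deg < -15 else "right" if head_yaw_deg > 15 else "forward"
    if convergence_m is not None:
        depth = "near" if convergence_m < 2.0 else "far"
    elif pupil_trend is not None:
        depth = "near" if pupil_trend == "constricting" else "far"
    else:
        depth = "far"
    return f"{direction}_{depth}_pair"

print(choose_stereo_pair(-30, convergence_m=0.8))  # -> "left_near_pair"
```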
- Figure 3 illustrates displaying a video recording on a HDU.
- 300 illustrates establishing a coordinate system. For example, use camera coordinate as the origin and use pointing direction of camera as an axis. This is discussed in more detail in US Patent Application 17/225,610, which is incorporated by reference in its entirety.
- 301 illustrates performing wide angle recording of a scene (e.g., record data with a FOV larger than the FOV shown to a user).
- 302 illustrates performing an analysis of a user, as discussed in Figure 2, to determine where in the scene the user is looking.
- 303 illustrates optimizing the display based on the analysis in 302.
- a feature (e.g., position, size, shape, orientation, color, brightness, texture, classification by AI algorithm) of a physical object determines a feature (e.g., position, size, shape, orientation, color, brightness, texture) of a virtual object.
- a user is using a mixed reality display in a room in a house wherein some of the areas in the room (e.g., a window during the daytime) are bright and some of the areas in the room are dark (e.g., a dark blue wall).
- the position of placement of virtual objects is based on the location of objects within the room. For example, a virtual object could be colored white if the background is a dark blue wall, so that it stands out.
- FIG. 4 illustrates a pre-recorded stereo viewing performed by user 1.
- 400 illustrates user 1 performing a stereo recording using a stereo camera system (e.g., smart phone, etc.). This is discussed in more detail in US Patent Application 17/225,610, which is incorporated by reference in its entirety.
- 401 illustrates the stereo recording being stored on a memory device.
- 402 illustrates a user (e.g., User 1 or other user(s)) retrieving the stored stereo recording.
- the stereo recording may be transmitted to the other user(s) and the other user(s) would receive the stored stereo recording.
- 403 illustrates a user (e.g., User 1 or other user(s)) viewing the stored stereo recording on a stereo display unit (e.g., augmented reality, mixed reality, virtual reality display).
- Figure 5 illustrates performing long range stereoscopic imaging of a distant object using stereoscopic camera clusters.
- 500 illustrates positioning two camera clusters at least 50 feet apart.
- 501 illustrates selecting a target at least 1 mile away.
- 502 illustrates precisely aiming each camera cluster such that the centerline of focus intersects at the target.
- 503 illustrates acquiring stereoscopic imagery of the target.
- 504 illustrates viewing and/or analyzing the acquired stereoscopic imagery.
- Some embodiments use cameras with telephoto lenses rather than camera clusters. Also, some embodiments have a stereo separation of 50 feet or less, optimized for viewing targets less than 1 mile away.
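- As a rough, illustrative check on these numbers, the parallax angle subtended at the target by the two camera positions can be estimated from the baseline and the range; the snippet below is only a sanity calculation and is not part of the claimed method:

```python
import math

def parallax_deg(baseline_ft, range_ft):
    # Angle subtended at the target by the two camera positions.
    return math.degrees(2 * math.atan((baseline_ft / 2) / range_ft))

print(round(parallax_deg(50, 5280), 3))    # ~0.543 deg: 50 ft baseline at 1 mile
print(round(parallax_deg(0.21, 5280), 4))  # ~0.0023 deg: ~2.5 inch eye spacing at 1 mile
```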
- Figure 6 illustrates a capability of adjusting the images post-acquisition, based on user eye tracking, to bring them into the best possible picture by generating a stereoscopic composite image.
- the stereoscopic images displayed at this time point have several objects that might be of interest to a person observing the scene.
- a stereoscopic composite image will be generated to match at least one user’s input. For example, if a user is viewing (eye tracking determines viewing location) the mountains 600 or cloud 601 at a first time point, then the stereoscopic composite image pair delivered to a HDU would be generated such that the distant objects of the mountains 600 or cloud 601 were in focus and the nearby objects including the deer 603 and the flower 602 were out of focus.
- the stereoscopic composite images presented at this frame would be optimized for medium range.
- the stereoscopic composite images would be optimized for closer range (e.g., implement convergence, and blur out distant items, such as the deer 603, the mountains 600 and the cloud 601).
- a variety of user inputs could be used to indicate to a software suite how to optimize the stereoscopic composite images. Gestures such as a squint could be used to optimize the stereoscopic composite image for more distant objects. Gestures such as leaning forward could be used to zoom in on a distant object.
- a GUI could also be used to improve the immersive viewing experience.
- Figure 7A illustrates an image with motion and the application of image stabilization processing.
- 700A illustrates a left eye image of an object wherein there is motion blurring the edges of the object.
- 701A illustrates a left eye image of an object wherein image stabilization processing has been applied.
- Figure 7B illustrates an image with motion displayed in a HDU.
- 702 illustrates the HDU.
- 700A illustrates a left eye image of an object wherein there is motion blurring the edges of the object.
- 700B illustrates a right eye image of an object wherein there is motion blurring the edges of the object.
- 701A illustrates a left eye display, which is aligned with a left eye of a user.
- 701B illustrates a right eye display, which is aligned with a right eye of a user.
- Figure 7C illustrates image stabilization applied to the image using stereoscopic imagery. A key task of image processing is image stabilization using stereoscopic imagery.
- 700A illustrates a left eye image of an object wherein image stabilization processing has been applied.
- 700B illustrates a right eye image of an object wherein image stabilization processing has been applied.
- 701A illustrates a left eye display, which is aligned with a left eye of a user.
- 701B illustrates a right eye display, which is aligned with a right eye of a user.
- 702 illustrates the HDU.
- Figure 8A illustrates a left image and a right image with a first camera setting. Note that the text on the monitor is in focus and the distant object of the knob on the cabinet is out of focus.
- Figure 8B illustrates a left image and a right image with a second camera setting. Note that the text on the monitor is out of focus and the distant object of the knob on the cabinet is in focus.
- a point of novelty is using at least two cameras. A first image from a first camera is obtained. A second image from a second camera is obtained. The first camera and the second camera have the same viewing perspective. Also, both images are of the same scene (e.g., a still scene, or the same time point of a scene with movement/changes).
- a composite image is generated wherein a first portion of the composite image is obtained from the first image and a second portion of the composite image is obtained from the second image.
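- A minimal sketch of that compositing step is given below: two frames from the same viewing perspective and time point, taken with different focus settings, are combined so that each region comes from the frame in which it is sharp. The Boolean mask used here is an assumption; in practice it could come from the user's gaze or from a focus measure.

```python
import numpy as np

def composite_from_two(first_image, second_image, use_first_mask):
    """Build a composite: pixels where use_first_mask is True come from the first
    image (e.g., near object in focus); all other pixels come from the second."""
    assert first_image.shape == second_image.shape
    mask = use_first_mask[..., None] if first_image.ndim == 3 else use_first_mask
    return np.where(mask, first_image, second_image)

h, w = 1080, 1920
near_focus = np.zeros((h, w, 3), dtype=np.uint8)      # stand-in for the first image
far_focus = np.full((h, w, 3), 255, dtype=np.uint8)   # stand-in for the second image
mask = np.zeros((h, w), dtype=bool)
mask[600:, :800] = True                               # region containing the near object
composite = composite_from_two(near_focus, far_focus, mask)
```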
- Figure 9A illustrates a top down view of all data gathered of a scene at a time point.
- Figure 9B illustrates a displayed wide angle 2D image frame of the video recording. Note that displaying this whole field of view to a user would be distorted given the mismatch between the user’s intrinsic FOV (human eye FOV) and the camera system FOV.
- Figure 9C illustrates a top down view of User A’s viewing angle of -70° and 55° FOV.
- a key point of novelty is the user’s ability to select the portion of the stereoscopic imagery with the viewing angle. Note that the selected portion could realistically be up to approximately 180°, but not more.
- Figure 9D illustrates what User A would see given User A’s viewing angle of -70° and 55° FOV. This improves over the prior art because it allows different viewers to see different portions of the field of view. While a human has a horizontal field of view of slightly more than 180 degrees, a human can only read text over approximately 10 degrees of the field of view, can only assess shape over approximately 30 degrees of the field of view and can only assess colors over approximately 60 degrees of the field of view. In some embodiments, filtering (subtracting) is performed. A human has a vertical field of view of approximately 120 degrees, with an upward (above the horizontal) field of view of 50 degrees and a downward (below the horizontal) field of view of approximately 70 degrees. Maximum eye rotation, however, is limited to approximately 25 degrees above the horizontal and approximately 30 degrees below the horizontal. Typically, the normal line of sight from the seated position is approximately 15 degrees below the horizontal.
- Figure 9E illustrates a top down view of User B’s viewing angle of +50° and 85° FOV.
- a key point of novelty is the user’s ability to select the portion of the stereoscopic imagery with the viewing angle.
- the FOV of User B is larger than the FOV of User A. Note that the selected portion could realistically be up to approximately 180°, but not more because of the limitations of the human eye.
- Figure 9F illustrates what User B would see given User B’s viewing angle of +50° and 85° FOV. This improves over the prior art because it allows different viewers to see different portions of the field of view.
- In some embodiments, multiple cameras are recording simultaneously for a 240° film (e.g., 4 cameras, each with a 60° sector).
- In some embodiments, the sectors are filmed sequentially - one at a time.
- Some scenes of a film could be filmed sequentially and other scenes could be filmed simultaneously.
- a camera set up with overlap could be used for image stitching.
- Some embodiments comprise using a camera ball system described in US Patent Application 17/225,610, which is incorporated by reference in its entirety. After the imagery is recorded, imagery from the cameras is edited to sync the scenes and stitch them together.
- LIDAR devices can be integrated into the camera systems for precise camera direction pointing.
- Figure 10A illustrates the field of view captured at a first time point by the left camera.
- the left camera 1000 and right camera 1001 are shown.
- the left FOV 1002 is shown by the white region and is approximately 215° and would have an α ranging from +90° to -135° (sweeping from +90° to -135° in a counterclockwise direction).
- the area not imaged within the left FOV 1003 would be approximately 135° and would have an α ranging from +90° to -135° (sweeping from +90° to -135° in a clockwise direction).
- Figure 10B illustrates the field of view captured at a first time point by the right camera.
- the left camera 1000 and right camera 1001 are shown.
- the right FOV 1004 is shown by the white region and is approximately 215° and would have an α ranging from +135° to -90° (sweeping from +135° to -90° in a counterclockwise direction).
- the area not imaged within the right FOV 1005 would be approximately 135° and would have an α ranging from +135° to -90° (sweeping from +135° to -90° in a counterclockwise direction).
- Figure 10C illustrates a first user’s personalized field of view (FOV) at a given time point.
- 1000 illustrates the left camera.
- 1001 illustrates the right camera.
- 1006a illustrates the left boundary of the left eye FOV for the first user, which is shown in light gray.
- 1007a illustrates the right side boundary of the left eye FOV for the first user, which is shown in light gray.
- 1008a illustrates the left boundary of the right eye FOV for the first user, which is shown in light gray.
- 1009a illustrates the right side boundary of the right eye FOV for the first user, which is shown in light gray.
- 1010a illustrates the center line of the left eye FOV for the first user.
- 1011a illustrates the center line of the right eye FOV for the first user.
- center line of the left eye FOV 1010a for the first user and the center line of the right eye FOV 1011a for the first user are parallel, which is equivalent to a convergence point at infinity.
- the first user is looking in the forward direction. It is suggested that, during filming of a movie, most of the action in the scene occur in this forward-looking direction.
- Figure 10D illustrates a second user’s personalized field of view (FOV) at a given time point.
- 1000 illustrates the left camera.
- 1001 illustrates the right camera.
- 1006b illustrates the left boundary of the left eye FOV for the second user, which is shown in light gray.
- 1007b illustrates the right side boundary of the left eye FOV for the second user, which is shown in light gray.
- 1008b illustrates the left boundary of the right eye FOV for the second user, which is shown in light gray.
- 1009b illustrates the right side boundary of the right eye FOV for the second user, which is shown in light gray.
- 1010b illustrates the center line of the left eye FOV for the second user.
- 1011b illustrates the center line of the right eye FOV for the second user.
- center line of the left eye FOV 1010b for the second user and the center line of the right eye FOV 1011b for the second user meet at a convergence point 1012. This allows the second user to view a small object with greater detail. Note that the second user is looking in the forward direction. It is suggested that, during filming of a movie, most of the action in the scene occur in this forward-looking direction.
- Figure 10E illustrates a third user’s personalized field of view (FOV) at a given time point.
- 1000 illustrates the left camera.
- 1001 illustrates the right camera.
- 1006c illustrates the left boundary of the left eye FOV for the third user, which is shown in light gray.
- 1007c illustrates the right side boundary of the left eye FOV for the third user, which is shown in light gray.
- 1008c illustrates the left boundary of the right eye FOV for the third user, which is shown in light gray.
- 1009c illustrates the right side boundary of the right eye FOV for the third user, which is shown in light gray.
- 1010c illustrates the center line of the left eye FOV for the third user.
- 1011c illustrates the center line of the right eye FOV for the third user.
- center line of the left eye FOV 1010c for the third user and the center line of the right eye FOV 1011c for the third user are approximately parallel, which is equivalent to looking at a very far distance.
- the third user is looking in a moderately leftward direction.
- the overlap of the left eye FOV and right eye FOV provide stereoscopic viewing to the third viewer.
- Figure 10F illustrates a fourth user’s personalized field of view (FOV) at a given time point.
- 1000 illustrates the left camera.
- 1001 illustrates the right camera.
- 1006d illustrates the left boundary of the left eye FOV for the fourth user, which is shown in light gray.
- 1007d illustrates the right side boundary of the left eye FOV for the fourth user, which is shown in light gray.
- 1008d illustrates the left boundary of the right eye FOV for the fourth user, which is shown in light gray.
- 1009d illustrates the right side boundary of the right eye FOV for the fourth user, which is shown in light gray.
- 1010d illustrates the center line of the left eye FOV for the fourth user.
- 1011d illustrates the center line of the right eye FOV for the fourth user.
- center line of the left eye FOV 1010d for the fourth user and the center line of the right eye FOV 1011d for the fourth user are approximately parallel, which is equivalent to looking at a very far distance. Note that the fourth user is looking in a far leftward direction. Note that the first user, second user, third user and fourth user are all seeing different views of the movie at the same time point. It should be noted that some of the designs, such as the camera cluster or ball system described in US Patent Application 17/225,610 (which is incorporated by reference in its entirety), can be used for this purpose.
- Figure 11A illustrates a top down view of the first user’s left eye view at time point 1.
- 1100 illustrates the left eye viewpoint.
- 1101 illustrates the right eye viewpoint.
- 1102 illustrates the portion of the field of view (FOV) not covered by either camera.
- 1103 illustrates the portion of the FOV that is covered by at least one camera.
- Figure 11B illustrates a top down view of the first user’s left eye view wherein a convergence point is in close proximity to the left eye and right eye. 1100 illustrates the left eye viewpoint.
- 1101 illustrates the right eye viewpoint.
- 1102 illustrates the portion of the field of view (FOV) not covered by either camera.
- 1103 illustrates the portion of the FOV that is covered by at least one camera.
- Figure 12 illustrates the reconstruction of various stereoscopic images from previously acquired wide angle stereo images.
- 1200 illustrates acquiring imagery from a stereoscopic camera system. This camera system is discussed in more detail in US Patent Application 17/225,610, which is incorporated by reference in its entirety.
- 1201 illustrates wherein a first camera is utilized for a left eye viewing perspective and a second camera for a right eye viewing perspective.
- 1202 illustrates selecting the field of view of the first camera based on the left eye look angle and the field of view for the second camera based on the right eye look angle. In the preferred embodiment, the selection would be performed by a computer (e.g., integrated into a head display unit) based on an eye tracking system tracking eye movements of a user.
- left eye image is generated from at least two lenses
- right eye image is generated from at least two lenses
- First, when the user is looking at a nearby object, present a stereoscopic image pair with the nearby object in focus and distant objects out of focus. When the user is looking at a distant object, present a stereoscopic image pair with the nearby object out of focus and the distant object in focus.
- Second, use a variety of display devices (e.g., Augmented Reality, Virtual Reality, Mixed Reality displays).
- FIG. 13A illustrates a top down view of a home theater.
- 1300 illustrates the user.
- 1301 illustrates the projector.
- 1302 illustrates the screen.
- this immersive home theater displays a field of view larger than a user’s 1300 field of view. For example, if a user 1300 was looking straight forward, the home theater would display a horizontal FOV of greater than 180 degrees. Thus, the home theater’s FOV would completely cover the user’s horizontal FOV. Similarly, if the user was looking straight forward, the home theater would display a vertical FOV of greater than 120 degrees. Thus, the home theater’s FOV would completely cover the user’s vertical FOV.
- An AR / VR / MR headset could be used in conjunction with this system, but would not be required.
- a conventional IMAX polarized projector could be utilized with IMAX-type polarized disposable glasses.
- the size of the home theater could vary.
- the home theater walls could be built with white, reflective panels and framing.
- the projector would have multiple heads to cover the larger field of view.
- Figure 13B illustrates a side view of the home theater as shown in Figure 13A.
- 1300 illustrates the user.
- 1301 illustrates the projector.
- 1302 illustrates the screen.
- this immersive home theater displays a field of view larger than a user’s 1300 field of view. For example, if a user 1300 was looking forward while on a recliner, the home theater would display a vertical FOV of greater than 120 degrees. Thus, the home theater’s FOV would completely cover the user’s FOV. Similarly, if the user was looking straight forward, the home theater would display a horizontal FOV of greater than 120 degrees. Thus, the home theater’s FOV would completely cover the user’s FOV.
- Figure 14A illustrates a top down view of a home theater.
- 1400 A illustrates a first user.
- 1400B illustrates a second user.
- 1401 illustrates the projector.
- 1402 illustrates the screen.
- this immersive home theater displays a field of view larger than the FOV of the first user 1400A or the second user 1400B.
- If the first user 1400A was looking straight forward, the first user 1400A would see a horizontal FOV of greater than 180 degrees.
- the home theater’s FOV would completely cover the user’s horizontal FOV.
- the home theater would display a vertical FOV of greater than 120 degrees, as shown in Figure 14B.
- the home theater’s FOV would completely cover the user’s vertical FOV.
- An AR / VR / MR headset could be used in conjunction with this system, but would not be required.
- Cheap anaglyph or polarized glasses could also be used.
- a conventional IMAX polarized projector could be utilized with IMAX-type polarized disposable glasses.
- the size of the home theater could vary.
- the home theater walls could be built with white, reflective panels and framing. The projector would have multiple heads to cover the larger field of view.
- Figure 14B illustrates a side view of the home theater as shown in Figure 14A.
- 1400A illustrates the first user.
- 1401 illustrates the projector.
- 1402 illustrates the screen.
- this immersive home theater displays a field of view larger than the first user’s 1400A field of view. For example, if the first user 1400A was looking forward while on a recliner, the user would see a vertical FOV of greater than 120 degrees. Thus, the home theater’s FOV would completely cover the FOV of the first user 1400A. Similarly, if the first user 1400A was looking straight forward, the home theater would display a horizontal FOV of greater than 120 degrees. Thus, the home theater’s FOV would completely cover the FOV of the first user 1400A.
- a typical high resolution display has 4000 pixels over a 1.37 m distance. This would be equivalent to 16 × 10⁶ pixels per 1.87 m².
- the surface area of a sphere is 4 × π × r², which, for a 2 m radius, is equal to (4)(3.14)(2²) or 50.24 m².
- If a spatial resolution equal to that of a typical high resolution display were desired, this would equal (50.24 m²)(16 × 10⁶ pixels per 1.87 m²), or approximately 429 million pixels.
- the frame rate is 60 frames per second. This is 26 times the amount of data as compared to a standard 4K monitor.
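- The arithmetic above can be reproduced as follows; the 3 bytes per pixel used for the bandwidth line is an added assumption for a raw RGB estimate and is not stated in the text:

```python
panel_pixels = 4000 ** 2           # 16 x 10^6 pixels per 1.37 m x 1.37 m (~1.87 m^2) panel
panel_area_m2 = 1.87
sphere_area_m2 = 4 * 3.14 * 2 ** 2  # 50.24 m^2 for a 2 m radius, as in the text

total_pixels = sphere_area_m2 * panel_pixels / panel_area_m2
print(f"{total_pixels / 1e6:.0f} million pixels")            # ~430 million (the text quotes ~429)
print(f"{total_pixels / panel_pixels:.0f}x a 4K-class panel")  # ~27x as modeled (the text states 26x)

fps = 60
bytes_per_pixel = 3                # assumption: uncompressed RGB, not stated in the text
print(f"{total_pixels * fps * bytes_per_pixel / 1e9:.0f} GB/s uncompressed")
```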
- a field of view comprises a spherical coverage with 4π steradians. This can be accomplished via a HDU.
- a field of view comprises sub-spherical coverage with at least 3π steradians.
- a field of view comprises sub-spherical coverage with at least 2π steradians.
- a field of view comprises sub-spherical coverage with at least 1π steradians. In some embodiments, a field of view comprises sub-spherical coverage with at least 0.5π steradians. In some embodiments, a field of view comprises sub-spherical coverage with at least 0.25π steradians. In some embodiments, a field of view comprises sub-spherical coverage with at least 0.05π steradians. In some embodiments, a sub-spherical IMAX system is created for an improved movie theater experience with many viewers. The chairs would be positioned in a similar position as standard movie theaters, but the screen would be sub-spherical. In some embodiments, non-spherical shapes could also be used.
- Figure 15A illustrates time point #1 wherein a user looks straight ahead and sees a field of view of approximately 60 degrees horizontal and 40 degrees vertical with reasonable precision (e.g., the user can see shapes and colors in the peripheral FOV).
- Figure 15B shows the center portion of the TV and the field of view being observed by the user at time point #1.
- data would be streamed (e.g., via the internet).
- a novel feature of this patent is called “viewing-parameter directed streaming”.
- a viewing parameter is used to direct the data streamed. For example, if the user 1500 were looking straight forward, then a first set of data would be streamed to correspond with the straight-forward viewing angle of the user 1500. If, however, the user were looking at the side of the screen, a second set of data would be streamed to correspond with the looking-to-the-side viewing angle of the user 1500.
- viewing parameters that could control viewing angles include, but are not limited to, the following: user’s vergence; user’s head position; user’s head orientation.
- any feature (age, gender, preference) or action of a user (viewing angle, positions, etc.) could be used to direct streaming.
- another novel feature is the streaming of at least two image qualities. For example, a first image quality (e.g., high quality) would be streamed in accordance with a first parameter (e.g., within the user’s 30° horizontal FOV and 30° vertical FOV). And, a second image quality (e.g., lower quality) would also be streamed for imagery that did not meet this criterion (e.g., not within the user’s 30° horizontal FOV and 30° vertical FOV). Surround sound would be implemented in this system.
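- A server-side sketch of viewing-parameter directed streaming with two image qualities is shown below: tiles whose centers fall inside the reported 30° x 30° gaze window are marked for high quality, the rest for low quality. The tile layout, window size, and message format are assumptions for illustration.

```python
def select_tile_qualities(gaze_yaw_deg, gaze_pitch_deg, tiles, half_window_deg=15.0):
    """tiles: list of dicts with each tile's id and center angles in degrees.
    Returns a per-tile quality decision for the next streaming interval."""
    plan = []
    for t in tiles:
        inside = (abs(t["yaw"] - gaze_yaw_deg) <= half_window_deg and
                  abs(t["pitch"] - gaze_pitch_deg) <= half_window_deg)
        plan.append({"tile_id": t["id"], "quality": "high" if inside else "low"})
    return plan

# A coarse tiling of a wide screen: tile centers every 30 degrees of yaw at pitch 0.
tiles = [{"id": i, "yaw": yaw, "pitch": 0} for i, yaw in enumerate(range(-90, 91, 30))]
print(select_tile_qualities(gaze_yaw_deg=-60, gaze_pitch_deg=0, tiles=tiles))
```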
- Figure 15C illustrates time point #2 wherein a user looks to the user’s left side of the screen and sees a field of view of approximately 60 degrees horizontal and 40 degrees vertical with reasonable precision (e.g., the user can see shapes and colors in the peripheral FOV).
- Figure 15D illustrates time point #2 with the field of view being observed by the user at time point #2, which is different as compared to Figure 15B.
- the area of interest is half that of time point #1.
- greater detail and higher resolution of objects within a small FOV within the scene is provided to the user. Outside of this high resolution field of view zone, a lower resolution image quality could be presented on the screen.
- Figure 15E illustrates time point #3 wherein a user looking to the user’s right side of the screen.
- Figure 15F illustrates time point #3, wherein the user sees a circularly shaped high-resolution FOV.
- Figure 16A illustrates an un-zoomed image.
- 1600 illustrates the image.
- 1601A illustrates a box denoting the area within image 1600 that is set to be zoomed in on.
- Figure 16B illustrates a digital-type zooming in on a portion of an image. This can be accomplished via methods described in US Patent 8,384,771 (e.g., 1 pixel turns into 4), which is incorporated by reference in its entirety.
- Selection of the area to be zoomed in on can be accomplished through a variety of user inputs including: gesture tracking systems; eye tracking systems; and graphical user interfaces (GUIs).
- Figure 17A illustrates an un-zoomed image.
- 1700 illustrates the image.
- 1701A illustrates a box denoting the area within image 1700 that is set to be zoomed in on.
- Figure 17B illustrates the optical -type zooming in on a portion of an image.
- Selection of the area to be zoomed in on can be accomplished through a variety of user inputs including: gesture tracking systems; eye tracking systems; and graphical user interfaces (GUIs).
- the area within the image 1701A that was denoted in Figure 17A is now zoomed in on, as shown in 1701B; also note that the image inside of 1701B appears to be of higher image quality. This can be done by selectively displaying the maximum quality imagery in region 1701B and enlarging region 1701B. Not only is the cloud bigger, the resolution of the cloud is also better.
- Figure 18A illustrates a single resolution image.
- 1800 A illustrates the image.
- 1801A illustrates a box denoting the area within image 1800A where resolution is to be improved.
- Figure 18B illustrates a multi-resolution image. Note that the area where resolution is improved can be selected through a variety of user inputs including: gesture tracking systems; eye tracking systems; and graphical user interfaces (GUIs), including a joystick or controller. Note that the area within the image 1801A that was denoted in Figure 18A is now displayed with higher resolution, as shown in 1801B. In some embodiments, the image inside of 1801B can be changed in other ways as well (e.g., different color scheme, different brightness settings, etc.). This can be done by selectively displaying a higher (e.g., maximum) quality imagery in region 1801B without enlarging region 1801B.
- Figure 19A illustrates a large field of view wherein a first user is looking at a first portion of the image and a second user is looking at a second portion of the image.
- 1900A is the large field of view, which is of a first resolution.
- 1900B is the location where a first user is looking which is set to become high resolution, as shown in Figure 19B.
- 1900C is the location where a second user is looking which is set to become high resolution, as shown in Figure 19B.
- Figure 19B illustrates that only the first portion of the image in Figure 19A and the second portion of the image in Figure 19A are high resolution, and the remainder of the image is low resolution.
- 1900A is the large field of view, which is of a first resolution (low resolution).
- 1900B is the location of the high resolution zone of a first user, which is of a second resolution (high resolution in this example).
- 1900C is the location of the high resolution zone of a second user, which is of a second resolution (high resolution in this example).
- a first high resolution zone can be used for a first user.
- a second high resolution zone can be used for a second user. This system could be useful for the home theater display as shown in Figures 14A and 14B.
- Figure 20A illustrates a low resolution image.
- Figure 20B illustrates a high resolution image.
- Figure 20C illustrates a composite image. Note that this composite image has a first portion 2000 that is of low resolution and a second portion 2001 that is of high resolution. This was described in US Patent 16/893,291, which is incorporated by reference in its entirety. The first portion is determined by the user’s viewing parameter (e.g., viewing angle). A point of novelty is near-real time streaming of the first portion 2000 with the first image quality and the second portion with the second image quality. Note that the first portion could be displayed differently from the second portion. For example, the first portion and second portion could differ in visual presentation parameters including: brightness; color scheme; or other. Thus, in some embodiments, a first portion of the image can be compressed and a second portion of the image is not compressed.
- a composite image is generated with the arranging of some high resolution images and some low resolution images stitched together for display to a user.
- some portions of a large (e.g., 429 million pixel) image are high resolution and some portions of the large image are low resolution.
- the portions of the large image that are high resolution will be streamed in accordance with the user’s viewing parameters (e.g., convergence point, viewing angle, head angle, etc.).
- Figure 21 illustrates a method and a process for performing near-real-time streaming of customized images.
- the displays include, but are not limited to the following: a large TV; an extended reality (e.g., Augmented Reality, Virtual Reality, or Mixed Reality display); a projector system on a screen; a computer monitor, or the like.
- a key component of the display is the ability to track where in the image a user is looking and what the viewing parameters are.
- the viewing parameters include, but are not limited to the following: viewing angle; vergence/convergence; user preferences (e.g., objects of particular interest, filtering - some objects rated “R” can be filtered for a particular user, etc.).
- each frame in the movie or video would be of extremely large data (especially if the home theater shown in Figures 14A and 14B is used in combination with the camera cluster as described in US Patent Application 17/225,610, which is incorporated by reference in its entirety).
- the cloud refers to storage, databases, etc.
- the cloud is capable of cloud computing.
- a point of novelty in this patent is the sending of the viewing parameters of user(s) to the cloud, processing of the viewing parameters in the cloud (e.g., selecting a field of view or composite stereoscopic image pair as discussed in Figure 12) and determining which portions of extremely large data to stream to optimize the individual user’s experience. For example, multiple users could have their movie synchronized.
- a user named Kathy could be looking at the chandelier and Kathy’s images would be optimized (e.g., images with maximum resolution and optimized color of the chandelier are streamed to Kathy’s mobile device and displayed on Kathy’s HDU).
- a user named Bob could be looking at the old man and Bob’s images would be optimized (e.g., images with maximum resolution and optimized color of the old man are streamed to Bob’s mobile device and displayed on Bob’s HDU).
- the cloud would store a tremendous dataset at each time point, but only portions of it would be streamed, and those portions are determined by the user’s viewing parameters and/or preferences. So, the book case, long table, carpet and wall art may all be within the field of view for Dave, Kathy and Bob, but these objects would not be optimized for display (e.g., the highest possible resolution of these images stored in the cloud was not streamed).
- In some embodiments, pre-emptive streaming is introduced. If it is predicted that an upcoming scene may cause a specific user viewing parameter to change (e.g., a user head turn), then pre-emptive streaming of those additional image frames can be performed. For example, suppose the current time of a movie is 1:43:05 and a dinosaur is going to make a noise and pop out from the left side of the screen at 1:43:30. The whole scene could be downloaded in a low resolution format, and additional sets of data of selective portions of the FOV could be downloaded as needed (e.g., based on the user's viewing parameter, or based on the upcoming dinosaur scene where the user is predicted to look). Thus, the dinosaur popping out will always be in its maximum resolution. Such a technique creates a more immersive and improved viewing experience.
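- On the client side, the pre-emptive idea above might look like the following sketch: a low-resolution version of the whole frame is always requested, and high-resolution regions are prefetched both for the current gaze and for regions a hypothetical attention schedule predicts will matter soon. The schedule format and region names are assumptions for illustration.

```python
# Hypothetical attention schedule authored with the film: at 1:43:30 the dinosaur
# appears on the left, so that region is prefetched ahead of time.
ATTENTION_SCHEDULE = [
    {"time_s": 1 * 3600 + 43 * 60 + 30, "region": "left_third", "lead_s": 25},
]

def plan_prefetch(current_time_s, gaze_region, schedule=ATTENTION_SCHEDULE):
    """Return which regions to request at high resolution for the next interval."""
    regions = {gaze_region}                      # always cover where the user looks now
    for event in schedule:
        if 0 <= event["time_s"] - current_time_s <= event["lead_s"]:
            regions.add(event["region"])         # pre-emptively add the predicted region
    return {"low_res": "full_frame", "high_res": sorted(regions)}

# At 1:43:05, 25 seconds before the event, the left third is already being prefetched.
print(plan_prefetch(current_time_s=1 * 3600 + 43 * 60 + 5, gaze_region="center"))
```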
- Figure 22A illustrates using resection in conjunction with stereoscopic cameras.
- Camera #1 has a known location (e.g., latitude and longitude from a GPS). From Camera #1, a range (2 miles) and direction (330° North Northwest) to an object 2200 are known. The location of the object 2200 can be computed.
- Camera #2 has an unknown location, but the range (1 mile) and direction (30° North Northeast) to the object 2200 is known. Since the object 2200’s location can be computed, the geometry can be solved and the location of camera #2 determined.
- Figure 22B illustrates using resection in conjunction with stereoscopic cameras wherein an object location is unknown.
- Camera #1 has a known location (e.g., latitude and longitude from a GPS).
- Camera #1 and Camera #2 have known locations. From Camera #1, a direction (330° North Northwest) to an object 2200B is known. From Camera #2, a direction (30° North Northeast) to an object 2200B is known. The location of the object 2200B can be computed.
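- Both resection cases reduce to simple plane geometry. The sketch below works in a flat local east/north frame and treats the stated directions as compass bearings measured clockwise from north, which is an assumption about the convention used in the figures.

```python
import math

def offset(range_miles, bearing_deg):
    # East/north displacement for a range along a bearing measured clockwise from north.
    rad = math.radians(bearing_deg)
    return (range_miles * math.sin(rad), range_miles * math.cos(rad))

# Figure 22A case: camera #1 is known; the object is located from camera #1's
# range and bearing, and camera #2 is then located by working back from the object.
cam1 = (0.0, 0.0)
e, n = offset(2.0, 330.0)                       # 2 miles at 330 deg (north-northwest)
obj = (cam1[0] + e, cam1[1] + n)
e2, n2 = offset(1.0, 30.0)                      # camera #2 sees the object 1 mile at 30 deg
cam2 = (obj[0] - e2, obj[1] - n2)

# Figure 22B case: both cameras known; the object would instead be found by
# intersecting the two bearing rays from cam1 and cam2.
print([round(v, 3) for v in obj], [round(v, 3) for v in cam2])
```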
- Figure 23A illustrates a top down view of a person looking forward to the center of the screen of the home theater.
- the person 2300 is looking forward toward the center section 2302B of the screen 2301 of the home theater.
- the streaming is customized to have the center section 2302B optimized (e.g., highest possible resolution), the left section 2302A non-optimized (e.g., low resolution or black), and the right section 2302C non-optimized (e.g., low resolution or black).
- A monitoring system to detect the user's viewing direction and other viewing parameters (such as gestures or facial expressions) is used.
- A controller to receive commands from the user must also be in place so that these inputs can drive the appropriate streaming.
- FIG. 23B illustrates a top down view of a person looking forward to the right side of the screen of the home theater.
- the person 2300 is looking toward the right side of section 2302C of the screen 2301 of the home theater.
- the streaming is customized to have the right section 2302C optimized (e.g., highest possible resolution), the left section 2302A non-optimized (e.g., low resolution or black), and the center section 2302B non-optimized (e.g., low resolution or black).
- A monitoring system to detect the user's viewing direction and other viewing parameters (such as gestures or facial expressions) is used.
- A controller to receive commands from the user must also be in place so that these inputs can drive the appropriate streaming.
- Figure 24 illustrates a method, system and apparatus for optimizing stereoscopic camera settings during image acquisition during movement.
- 2400 illustrates determining a distance of an object (e.g., use laser range finder) at a time point.
- An object tracking / target tracking system can be implemented.
- 2401 illustrates adjusting a zoom setting of a stereoscopic camera system to be optimized for said distance as determined in step 2400. In the preferred embodiment, this would be performed when using a zoom lens, as opposed to performing digital zooming.
- 2402 illustrates adjusting the distance of separation (stereo distance) between stereoscopic cameras to be optimized for said distance as determined in step 2400. Note that there is also an option to adjust the orientation of the cameras to be optimized for said distance as determined in step 2400.
- 2403 illustrates acquiring stereoscopic imagery of the target at time point in step 2400.
- 2404 illustrates recording, viewing, and/or analyzing the acquired stereoscopic imagery.
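- An illustrative sketch of the adjustment loop in Figure 24 is given below; the baseline-to-distance ratio, target width, and zoom mapping are assumptions chosen for the example and are not values taken from this disclosure.

```python
import math

def camera_settings_for_distance(distance_m, sensor_fov_deg=60.0,
                                 target_width_m=2.0, baseline_ratio=1 / 30):
    """Given a rangefinder distance, pick a stereo baseline and a zoom factor so the
    target subtends a useful share of the frame."""
    baseline_m = distance_m * baseline_ratio      # wider separation for farther targets
    # Zoom factor: ratio of the sensor's native FOV to ~3x the angle the target subtends.
    target_angle_deg = math.degrees(2 * math.atan(target_width_m / (2 * distance_m)))
    zoom = max(1.0, sensor_fov_deg / (3 * target_angle_deg))
    return {"baseline_m": round(baseline_m, 2), "zoom": round(zoom, 1)}

# Re-run at each time point as the measured distance to the target changes.
for d in (10, 100, 1000):
    print(d, camera_settings_for_distance(d))
```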
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Stereoscopic And Panoramic Photography (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023558524A JP2024518243A (en) | 2021-04-22 | 2022-04-21 | Immersive Viewing Experience |
CN202280030471.XA CN117321987A (en) | 2021-04-22 | 2022-04-21 | Immersive viewing experience |
EP22792523.7A EP4327552A1 (en) | 2021-04-22 | 2022-04-21 | Immersive viewing experience |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/237,152 US11589033B1 (en) | 2021-02-28 | 2021-04-22 | Immersive viewing experience |
US17/237,152 | 2021-04-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022226224A1 true WO2022226224A1 (en) | 2022-10-27 |
Family
ID=83723167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2022/025818 WO2022226224A1 (en) | 2021-04-22 | 2022-04-21 | Immersive viewing experience |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP4327552A1 (en) |
JP (1) | JP2024518243A (en) |
CN (1) | CN117321987A (en) |
WO (1) | WO2022226224A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090219283A1 (en) * | 2008-02-29 | 2009-09-03 | Disney Enterprises, Inc. | Non-linear depth rendering of stereoscopic animated images |
US20180165830A1 (en) * | 2016-12-14 | 2018-06-14 | Thomson Licensing | Method and device for determining points of interest in an immersive content |
US10551993B1 (en) * | 2016-05-15 | 2020-02-04 | Google Llc | Virtual reality content development environment |
US20200371673A1 (en) * | 2019-05-22 | 2020-11-26 | Microsoft Technology Licensing, Llc | Adaptive interaction models based on eye gaze gestures |
US11206364B1 (en) * | 2020-12-08 | 2021-12-21 | Microsoft Technology Licensing, Llc | System configuration for peripheral vision with reduced size, weight, and cost |
Also Published As
Publication number | Publication date |
---|---|
EP4327552A1 (en) | 2024-02-28 |
JP2024518243A (en) | 2024-05-01 |
CN117321987A (en) | 2023-12-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9842433B2 (en) | Method, apparatus, and smart wearable device for fusing augmented reality and virtual reality | |
US11257233B2 (en) | Volumetric depth video recording and playback | |
US20200288113A1 (en) | System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view | |
US9137524B2 (en) | System and method for generating 3-D plenoptic video images | |
CN105103034B (en) | Display | |
US11218690B2 (en) | Apparatus and method for generating an image | |
US20050264858A1 (en) | Multi-plane horizontal perspective display | |
US20060114251A1 (en) | Methods for simulating movement of a computer user through a remote environment | |
AU2006217569A1 (en) | Automatic scene modeling for the 3D camera and 3D video | |
CN113891060B (en) | Free viewpoint video reconstruction method, play processing method, device and storage medium | |
WO2012166593A2 (en) | System and method for creating a navigable, panoramic three-dimensional virtual reality environment having ultra-wide field of view | |
US11218681B2 (en) | Apparatus and method for generating an image | |
CN110291577A (en) | The part of pairing for the experience of improved augmented reality and global user interface | |
JP2019512177A (en) | Device and related method | |
CN111602391B (en) | Method and apparatus for customizing a synthetic reality experience from a physical environment | |
EP4327552A1 (en) | Immersive viewing experience | |
US11589033B1 (en) | Immersive viewing experience | |
US11366319B1 (en) | Immersive viewing experience | |
DeHart | Directing audience attention: cinematic composition in 360 natural history films | |
Zhang et al. | Walk Through a Virtual Museum with Binocular Stereo Effect and Spherical Panorama Views Based on Image Rendering Carried by Tracked Robot | |
WO2019043288A1 (en) | A method, device and a system for enhanced field of view |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22792523 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202317057477 Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023558524 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280030471.X Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022792523 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022792523 Country of ref document: EP Effective date: 20231122 |