US20190279015A1 - Method and electronic device for enhancing efficiency of searching for regions of interest in a virtual environment - Google Patents
- Publication number: US20190279015A1
- Authority: US (United States)
- Prior art keywords: interest, region, scene, center, PIP
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06K9/3233
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06T11/60—Editing figures and text; Combining figures or text
- G06T3/60—Rotation of whole images or parts thereof
- G06T7/13—Edge detection
- G06T7/50—Depth or shape recovery
- G06T7/70—Determining position or orientation of objects or cameras
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
- H04N5/45—Picture in picture, e.g. displaying simultaneously another television channel in a region of the screen
Definitions
- the present disclosure relates to technologies for enhancing efficiency of searching for regions of interest in a virtual environment, and, more particularly, to a method and an electronic device that present a region of interest of an invisible scene as a picture-in-picture (PIP) in a visible scene so as to enhance efficiency of searching for the region of interest in a virtual environment.
- 360-degree panoramic videos are becoming increasingly popular. Particularly, they can provide a user with immersive visual experiences when being applied in the fields of mobile and virtual reality.
- 360-degree panoramic videos can present full scene content of virtual reality.
- the user can only see portions of the contents of virtual reality.
- a user can operate an electronic device to search for objects of interest or regions of interest in virtual reality. For example, the user can make a gesture on a touch screen to operate an electronic device, or move his head to make a tilting or turning operation when he wears a head-mounted display device.
- Such a method is convenient for the user if regions of interest are in the visual field of the user. But if the regions of interest are not in the visual field of the user, the user cannot know in advance how much time he will spend searching for them. The reason is that the user does not know in which direction or how far away the regions of interest are.
- the present disclosure provides a method and an electronic device for enhancing efficiency of searching for a region of interest in virtual reality (a virtual environment).
- the region of interest is presented as a picture-in-picture (PIP) in a visible scene so as to save time for the user to search for the region of interest.
- the present disclosure also provides a method for enhancing efficiency of searching for a region of interest in a virtual environment, wherein the virtual environment is comprised of a visible scene and an invisible scene.
- the method comprises the steps of: (1) locating a position of the region of interest in the invisible scene and a position of a center of the visible scene; and (2) displaying the region of interest as a PIP of the visible scene at an intersection of a line connecting the position of the region of interest and the position of the center of the visible scene and a boundary of the visible scene.
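The two steps above can be sketched as follows (an illustrative Python sketch; the rectangular boundary, coordinate convention and function name are assumptions, not part of the disclosure):

```python
import math

def pip_position(center, roi, half_w, half_h):
    """Place a PIP where the ray from the visible-scene center toward
    the region of interest crosses the rectangular scene boundary.
    All values are in the same 2D coordinate units; half_w and half_h
    are the half-width and half-height of the visible scene."""
    dx, dy = roi[0] - center[0], roi[1] - center[1]
    if dx == 0 and dy == 0:
        return center  # ROI coincides with the center; no ray to intersect
    # Scale the direction vector so it just reaches the nearer edge.
    tx = half_w / abs(dx) if dx else math.inf
    ty = half_h / abs(dy) if dy else math.inf
    t = min(tx, ty)
    return (center[0] + t * dx, center[1] + t * dy)
```

For a circular boundary, the per-axis scaling would be replaced by a single division by the distance to the region of interest.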
- the position of the region of interest is a center-of-mass coordinate of the region of interest.
- the method further comprises projecting the visible scene and the PIP on a display device through a projecting means.
- the projecting means is selected from the group consisting of equirectangular projection, cube mapping and equi-angular cubemapping.
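A minimal sketch of the equirectangular case, assuming the common convention that longitude spans -180 to 180 degrees left-to-right and latitude spans 90 to -90 degrees top-to-bottom (the function name is illustrative):

```python
def equirectangular(lon, lat, width, height):
    """Map longitude/latitude (degrees) to pixel coordinates in an
    equirectangular panorama of the given width and height."""
    x = (lon + 180.0) / 360.0 * width
    y = (90.0 - lat) / 180.0 * height
    return x, y
```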
- the method further comprises rotating the PIP around a center thereof to generate a rotated picture such that the rotated picture has an outer edge facing the region of interest and an inner edge facing the center of the visible scene.
- the method further comprises fixing the outer edge of the rotated picture and tilting the rotated picture by a tilting angle θ along a Z-axis in a three-dimensional space to generate a simulation picture.
- the tilting angle is varied based on the distance between the region of interest and the center of the visible scene.
- a maximum angle of the tilting angle is 105 degrees and a minimum angle of the tilting angle is 0 degrees.
- the present disclosure provides another method for enhancing efficiency of searching for a region of interest in a virtual environment, wherein the virtual environment is comprised of a visible scene and an invisible scene.
- the method comprises: (1) locating a position of the region of interest in the invisible scene and a position of a center of the visible scene; and (2) displaying a guiding symbol at an intersection of a line connecting the position of the region of interest and the position of the center of the visible scene and a boundary of the visible scene, wherein the guiding symbol points to the region of interest.
- the guiding symbol is an arrow.
- the distance of the region of interest from the center of the visible scene is indicated by a length, a width, an area or a shape of the arrow.
- the longer the arrow is, the greater the distance between the region of interest and the center of the visible scene is; the shorter the arrow is, the smaller that distance is.
- the present disclosure provides a further method for enhancing efficiency of searching for regions of interest in a virtual environment, wherein the virtual environment is comprised of a visible scene and an invisible scene.
- the method comprises the steps of: (1) calculating a first distance between a first region of interest in the invisible scene and a center of the visible scene and calculating a second distance between a second region of interest in the invisible scene and the center of the visible scene, wherein the first region of interest and the second region of interest correspond to a first PIP and a second PIP in the visible scene, respectively, and the second PIP partially overlaps with the first PIP, and wherein the first distance is greater than the second distance; and (2) based on the first distance and the second distance, using linear interpolation to find a position between the center of the visible scene and the first PIP for displaying the second PIP.
- the present disclosure further provides an electronic device for enhancing efficiency of searching for a region of interest in a virtual environment, which comprises: a storage unit for storing computer readable program codes; and a processor executing the computer readable program codes to implement a method for enhancing efficiency of searching for a region of interest in a virtual environment.
- FIG. 1 is a schematic diagram showing presentation of a region of interest as a picture-in-picture (PIP) in a visible scene on a mobile device for enhancing efficiency of searching for the region of interest according to the present disclosure
- FIG. 2 is a schematic diagram showing presentation of a region of interest of an invisible scene as a PIP in a visible scene according to the present disclosure
- FIG. 3 is a schematic flow diagram showing a method for presenting a region of interest as a PIP in a visible scene according to an embodiment of the present disclosure
- FIG. 4 is a schematic flow diagram showing a method for simulation processing of a PIP in a visible scene according to an embodiment of the present disclosure
- FIGS. 5 a to 5 d are schematic diagrams showing variation of a visible scene and a PIP when the PIP is simulation processed according to an embodiment of the present disclosure
- FIG. 6 is a schematic diagram showing application of a guiding symbol in a visible scene for enhancing efficiency of searching for a region of interest according to an embodiment of the present disclosure
- FIG. 7 is a schematic flow diagram showing a method for applying a guiding symbol in a visible scene according to an embodiment of the present disclosure
- FIGS. 8 a and 8 b are schematic diagrams showing separation of partially overlapped PIPs in a visible scene for enhancing efficiency of searching for regions of interest according to an embodiment of the present disclosure
- FIG. 9 is a schematic flow diagram showing a method for separating partially overlapped PIPs according to an embodiment of the present disclosure.
- FIG. 10 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
- Computer readable instructions refer to routines, application programs, application modules, program modules, programs, components, data structures, algorithms and so on.
- Computer readable instructions can be implemented on various system configurations, including a single-processor or multiprocessor system, a minicomputer, a mainframe computer, a personal computer, a palmtop computing device, a programmable consumer electronic device based on a microprocessor, or a combination thereof.
- the logical operations described herein are implemented as a sequence of acts implemented by a computer or program modules running on a computing system and/or as interconnected machine logic circuits or circuit modules within the computing system.
- the implementation is a matter of choice dependent on the performance and other requirements of the computing system. Therefore, the logical operations described herein are referred to variously as states, operations, structural devices, acts or modules. Such operations, structural devices, acts and modules can be implemented in software, firmware, special purpose digital logic or any combination thereof.
- FIG. 1 is a schematic diagram showing presentation of a region of interest as a picture-in-picture (PIP) in a visible scene on a mobile device for enhancing efficiency of searching for the region of interest according to the present disclosure.
- the present disclosure can present the region 140 of interest of the invisible scene 130 as a picture-in-picture (PIP) 150 in the visible scene 120 .
- the user can view the PIP 150 in the visible scene 120 to know what region 140 of interest exists in the invisible scene 130 and in which direction the region 140 of interest is located relative to the visible scene 120 , thereby saving time for the user to search for the region 140 of interest in the invisible scene 130 .
- FIG. 2 is a schematic diagram showing presentation of a region of interest of an invisible scene as a PIP in a visible scene according to the present disclosure.
- the visible range of the visible scene 200 is a region from a center 210 to an edge 250 of the visible scene 200 , and a region 220 of interest is located in the invisible scene 280 .
- the region 220 of interest of the invisible scene 280 is displayed at a position along a line 240 connecting the position of the region 220 of interest and the position of the center 210 .
- when seeing the PIP at a position 230 , the user will know that the region 220 of interest is located along the line 240 and hence can search for the region 220 of interest along the line 240 .
- the region 220 of interest of the invisible scene 280 is displayed on the edge 250 of the visible scene 200 .
- the regions of interest are displayed as corresponding PIPs on the edge 250 of the visible scene 200 . Since these PIPs are not presented in the main view region of the visible scene 200 (for example, the region near the center 210 ), the visual experiences of the user will not be adversely affected.
- the presentation of a PIP on the edge 250 of the visible scene 200 includes, but is not limited to, the presentation of the entire PIP inside the visible scene 200 (that is, the entire PIP is presented inside the edge 250 ) or the presentation of a portion of the PIP inside the visible scene 200 (that is, a portion of the PIP falls outside the edge 250 and is not presented inside the visible scene 200 ).
- FIG. 3 is a schematic flow diagram showing a method for presenting a region of interest as a PIP in a visible scene according to an embodiment of the present disclosure.
- the region 220 of interest in the invisible scene 280 is located to determine its position in the virtual environment 290 .
- a center-of-mass coordinate of the region 220 of interest (for example, its longitude and latitude in the virtual environment 290 ) is used to represent the position of the region 220 of interest in the virtual environment 290 .
- the region 220 of interest may be manually chosen and marked and its position in the virtual environment 290 is manually marked and recorded so as to locate the position of the region 220 of interest.
- recognition techniques using feature engineering or artificial intelligence such as deep learning may be used to find what regions 220 of interest exist in the invisible scene 280 and locate their positions.
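The center-of-mass position used to locate a region of interest can be sketched as a simple mean of the region's coordinates (illustrative; assumes the region does not wrap across the ±180-degree longitude seam):

```python
def roi_center_of_mass(points):
    """Center-of-mass (mean longitude/latitude) of the pixels or
    vertices that make up a region of interest, given as a list of
    (lon, lat) pairs in degrees."""
    n = len(points)
    lon = sum(p[0] for p in points) / n
    lat = sum(p[1] for p in points) / n
    return lon, lat
```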
- in step S 320 , the position of the center 210 of the visible scene 200 in the virtual environment 290 is located.
- the region 220 of interest is displayed as a PIP of the visible scene 200 at an intersection (i.e., position 230 ) of the line connecting the position of the region 220 of interest and the position of the center 210 of the visible scene 200 and the edge 250 of the visible scene 200 .
- the visible scene 200 and the PIP may be projected on a display of an electronic device through a projecting means.
- the electronic device may be, but is not limited to, a desktop computer, a notebook computer, a smart phone and a wearable device.
- the visible scene 200 and the PIP may be projected on the display through equirectangular projection, cube mapping, equi-angular cubemapping and so on.
- FIG. 4 is a schematic flow diagram showing a method for simulation processing of a PIP in a visible scene according to an embodiment of the present disclosure.
- in step S 410 , the PIP at the display position 230 is rotated around its center to generate a rotated picture.
- an outer edge of the rotated picture faces the region 220 of interest in the invisible scene 280 and an inner edge of the rotated picture faces the center 210 of the visible scene 200 .
- the direction and angle of the rotation may be changed according to the orientation of the region 220 of interest relative to the center 210 of the visible scene 200 .
- the PIP may be rotated clockwise (or counterclockwise) around its center to generate the rotated picture.
- the PIP may be rotated counterclockwise (or clockwise) around its center to generate the rotated picture.
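One way to sketch the rotation step, assuming the PIP's outer edge initially faces along the positive x-axis, is to rotate the PIP by the angle of the line from the scene center to the region of interest (an illustrative sketch; the initial orientation is an assumption):

```python
import math

def pip_rotation_deg(center, roi):
    """Angle (degrees, measured from the positive x-axis) by which to
    rotate a PIP around its own center so that its outer edge faces
    the region of interest and its inner edge faces the scene center."""
    return math.degrees(math.atan2(roi[1] - center[1],
                                   roi[0] - center[0]))
```

A negative result corresponds to a clockwise rotation and a positive result to a counterclockwise rotation.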
- in step S 420 , the outer edge of the rotated picture is fixed and then the rotated picture is tilted by a tilting angle θ along a Z-axis in the three-dimensional space to generate a simulation picture.
- the simulation picture presents more 3D spatial information.
- the user when experiencing virtual reality, the user has a more realistic spatial experience and captures visual information of the region of interest more easily.
- the tilting angle is varied based on the distance between the region 220 of interest and the center 210 of the visible scene 200 .
- when viewing a plurality of simulation pictures corresponding to different regions 220 of interest, the user can determine the distances between the regions 220 of interest and the center 210 of the visible scene 200 according to the tilting angles.
- a maximum angle of the tilting angle may be 105 degrees (maxTilt) and a minimum angle of the tilting angle may be 0 degrees (minTilt). Since the tilting angle is limited to the range of minTilt to maxTilt, the present disclosure avoids too large a tilting angle that could adversely affect the user's visual experience.
- the tilting angle θ can be calculated according to the following equation:
- θ = minTilt + (maxTilt − minTilt) × dist/dist_max, wherein
- dist_max represents the distance between the farthest region of interest of the invisible scene 280 and the center 210 of the visible scene 200
- dist represents the distance between the current region of interest and the center 210 of the visible scene 200 .
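Assuming the tilting angle grows linearly with the distance and is clamped to the stated bounds (the disclosure only states that the angle varies with distance between minTilt and maxTilt; the linear form here is an assumption), the calculation can be sketched as:

```python
def tilt_angle(dist, dist_max, min_tilt=0.0, max_tilt=105.0):
    """Tilting angle (degrees) for a simulation picture: interpolated
    linearly with the region of interest's distance from the scene
    center and clamped to [min_tilt, max_tilt]."""
    if dist_max <= 0:
        return min_tilt
    theta = min_tilt + (max_tilt - min_tilt) * (dist / dist_max)
    return max(min_tilt, min(max_tilt, theta))
```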
- FIGS. 5 a to 5 d are schematic diagrams showing variation of a visible scene and a PIP when the PIP is simulation processed according to an embodiment of the present disclosure.
- the PIP 235 in the visible scene 200 is located at the intersection of the edge 250 of the visible scene 200 and the line 240 and corresponds to a region of interest (not shown) that is located at a position in the outwardly extending direction of the line 240 .
- the PIP 235 in the visible scene 200 is processed at step S 410 to generate a rotated picture 236 .
- the outer edge 2361 of the rotated picture 236 faces the region of interest outside the visible scene 200 while the inner edge 2362 of the rotated picture 236 faces the center 210 of the visible scene 200 .
- the rotated picture 236 is processed at step S 420 to generate a simulation picture 237 .
- θ represents the tilting angle along a Z-axis in the three-dimensional space.
- the viewing angles of the visible scene 200 and the simulation picture 237 are changed by θ degrees, as shown in FIG. 5 c.
- the visible scene 200 and the simulation picture 237 of FIG. 5 c are viewed in the original viewing angle.
- FIG. 6 is a schematic diagram showing application of a guiding symbol in a visible scene for enhancing efficiency of searching for a region of interest according to an embodiment of the present disclosure.
- a guiding symbol 201 is applied to guide the user toward the region 220 of interest so as to save time.
- the number of guiding symbols 201 may vary according to the number of regions 220 of interest. For example, if three regions 220 of interest exist in the invisible scene 280 , three guiding symbols 201 are displayed in the visible scene 200 . If one region 220 of interest is added to (or deleted from) the invisible scene 280 , the number of the guiding symbols 201 becomes 4 (or 2).
- FIG. 7 is a schematic flow diagram showing a method for applying a guiding symbol in a visible scene according to an embodiment of the present disclosure.
- steps S 710 and S 720 are substantially the same as steps S 310 and S 320 of FIG. 3 , respectively, but step S 730 is different from S 330 .
- a guiding symbol 201 is displayed at an intersection (i.e., position 230 ) of the line connecting the position of the region 220 of interest and the position of the center 210 of the visible scene 200 and the edge 250 of the visible scene 200 .
- the guiding symbol points to the region 220 of interest.
- an arrow is used as the guiding symbol 201 pointing to the region 220 of interest. Further, the length, width, area and/or shape of the arrow may be used to indicate the distance of the region 220 of interest from the center 210 of the visible scene 200 .
- a long arrow indicates a long distance between the region 220 of interest and the center 210 of the visible scene 200 and a short arrow indicates a short distance between the region of interest 220 and the center 210 of the visible scene 200 .
- a wide arrow indicates a long distance between the region 220 of interest and the center 210 of the visible scene 200 and a narrow arrow indicates a short distance between the region of interest 220 and the center 210 of the visible scene 200 .
- an arrow with a large area indicates a long distance between the region 220 of interest and the center 210 of the visible scene 200 and an arrow with a small area indicates a short distance between the region of interest 220 and the center 210 of the visible scene 200 .
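The length mapping can be sketched as a linear scaling (the pixel bounds and the linear form here are illustrative assumptions, not part of the disclosure):

```python
def arrow_length(dist, dist_max, min_len=20.0, max_len=80.0):
    """Arrow length (pixels) encoding how far a region of interest is
    from the scene center: a longer arrow means a farther region.
    The [min_len, max_len] pixel range is illustrative."""
    if dist_max <= 0:
        return min_len
    frac = max(0.0, min(1.0, dist / dist_max))
    return min_len + (max_len - min_len) * frac
```

The same mapping could drive the width or area of the arrow instead of its length.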
- the guiding symbol is not limited to the arrow.
- a graphic symbol or a character symbol may be used.
- FIGS. 8 a and 8 b are schematic diagrams showing separation of partially overlapped PIPs in a visible scene for enhancing efficiency of searching for regions of interest according to an embodiment of the present disclosure.
- a star PIP corresponding to the star 810 and presented at a location 815 and a triangle PIP corresponding to the triangle 820 and presented at a location 825 are partially overlapped.
- as the number of regions of interest increases, the number of PIPs in the visible scene increases accordingly and the PIPs may overlap. As such, it becomes difficult for the user to capture visual information of the regions of interest, such as their orientations, from the overlapped PIPs.
- linear interpolation is used to find a position 826 (as shown in FIG. 8 b ) for presenting the triangle PIP, thereby overcoming the overlapping problem and enhancing the efficiency of searching for the regions of interest.
- FIG. 9 is a schematic flow diagram showing a method for separating partially overlapped PIPs according to an embodiment of the present disclosure.
- a first distance between a far region of interest such as the star 810 in the invisible scene 280 and the center 210 of the visible scene 200 is calculated.
- a second distance between a near region of interest such as the triangle 820 in the invisible scene 280 and the center 210 of the visible scene 200 is calculated.
- the star PIP presented at the position 815 partially overlaps with the triangle PIP presented at the position 825 .
- in step S 930 , based on the first distance and the second distance, linear interpolation is used to find a position 826 between the center 210 of the visible scene 200 and the position 815 for displaying the triangle PIP corresponding to the triangle 820 of the invisible scene 280 .
- linear interpolation is based on the following equation:
- pip_near = pos_center + (pip_far − pos_center) × dist_near/dist_far, wherein
- pip_near represents the position for presenting the PIP corresponding to the region of interest near the center 210
- pip_far represents the position for presenting the PIP corresponding to the region of interest far from the center 210
- pos_center represents the center 210 of the visible scene 200
- dist_near represents the distance of the near region of interest from the center 210
- dist_far represents the distance of the far region of interest from the center 210 .
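The interpolation above can be sketched directly from the equation, with 2D positions as (x, y) tuples (the function name is illustrative):

```python
def pip_near_position(pos_center, pip_far, dist_near, dist_far):
    """Displaced position for the nearer of two overlapping PIPs,
    found by linear interpolation between the scene center and the
    farther PIP, in proportion to the two regions' distances."""
    ratio = dist_near / dist_far
    return (pos_center[0] + (pip_far[0] - pos_center[0]) * ratio,
            pos_center[1] + (pip_far[1] - pos_center[1]) * ratio)
```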
- an area for presenting PIPs may be defined in the visible scene 200 so as to prevent the PIPs from occupying the main view region and adversely affecting the visual experience of the user.
- a maximum distance between a display position and the center 210 is set to be MaxShowDist (for example, 35%, 40%, 45% of the length (width) of the visible scene 200 ) and a minimum distance between a display position and the center 210 is set to be MinShowDist (for example, 5%, 10%, 15% of the length (width) of the visible scene 200 ).
- the distance of a region of interest A (not shown) of the invisible scene 280 from the center 210 is dist_max and the distance of the display position pip_max of the corresponding PIP from the center 210 is MaxShowDist.
- the distance of a region of interest B (not shown) of the invisible scene 280 from the center 210 is dist_min and the distance of the display position pip_min of the corresponding PIP from the center 210 is MinShowDist.
- for a region of interest C (not shown) of the invisible scene 280 at a distance dist_candidate from the center 210 , the display position pip_candidate of the corresponding PIP can be calculated through the following equation:
- pip_candidate = MinShowDist + (MaxShowDist − MinShowDist) × (dist_candidate − dist_min)/(dist_max − dist_min)
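Under the boundary conditions above (a region at dist_min maps to MinShowDist and a region at dist_max maps to MaxShowDist), a linear mapping for an arbitrary region can be sketched as follows (the 10%/40% defaults are illustrative assumptions):

```python
def pip_display_dist(dist, dist_min, dist_max,
                     min_show=0.10, max_show=0.40):
    """Distance of a PIP's display position from the scene center,
    expressed as a fraction of the visible scene's width. Regions at
    dist_min map to min_show and regions at dist_max map to max_show;
    values outside the range are clamped."""
    if dist_max == dist_min:
        return min_show
    frac = (dist - dist_min) / (dist_max - dist_min)
    frac = max(0.0, min(1.0, frac))
    return min_show + (max_show - min_show) * frac
```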
- FIG. 10 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
- the electronic device 1000 includes a storage unit 1100 , a sensing unit 1200 , a computing unit 1300 and a display unit 1400 .
- the electronic device 1000 includes, but is not limited to, a desktop computer, a notebook computer, a smart phone or a wearable electronic device.
- the storage unit 1100 may be a computer readable memory device.
- the computer readable storage medium can be implemented via one or more of a volatile computer memory, a non-volatile memory, a hard drive, a flash drive, a floppy disk, a compact disk, or a comparable medium.
- the storage unit 1100 may store computer executable instructions for performing the above-described methods, and data related to the virtual environment 290 such as picture information and position information of the visible scene 200 , the invisible scene 280 and the regions 220 of interest.
- the sensing unit 1200 detects physical information of the electronic device, including but not limited to the orientation of the electronic device (for example, the electronic device facing east, west, south or north) and the state of the electronic device relative to the horizontal plane (for example, the electronic device having an angle of elevation or depression relative to the horizontal plane).
- the sensing unit 1200 can be implemented as an accelerometer, a gyroscope or another device capable of detecting the physical information of the electronic device.
- the computing unit 1300 may be implemented in the form of a computer processor, including but not limited to a single-core processor, a dual microprocessor, a multi-core processor, AMD's Athlon, Duron and Opteron processors, and Intel's Celeron, Core 2 Duo, Core 2 Quad, Core i3, Core i5 and Core i7 processors.
- the computing unit 1300 controls operation of related hardware and software and executes the computer executable instructions of the storage unit 1100 to implement the above-described methods.
- the display unit 1400 may be implemented as various physical and/or virtual display devices, including but not limited to a computer screen, a laptop computer screen, a mobile device screen, a PDA screen, a tablet screen, and a display screen of a wearable electronic device.
- the display unit 1400 may display data related to searching for regions of interest in a virtual environment. For example, the display unit 1400 may display a visible scene and PIPs.
Description
- 360-degree panoramic videos can present the full scene content of virtual reality. However, limited by the user's visual field and the size of a display of an electronic device, such as an electronic device with a touch screen or a head-mounted display device, the user can only see portions of the contents of virtual reality.
- Although such a viewing mode brings relatively natural visual experiences to general users who browse without any subject limitation, it is not convenient for users who want to search for objects of interest such as extraterrestrials, balloons, cats, dogs, airplanes and idol singers, or regions of interest in virtual reality.
- Conventionally, a user can operate an electronic device to search for objects of interest or regions of interest in virtual reality. For example, the user can make a gesture on a touch screen to operate an electronic device, or move his head to make a tilting or turning operation when he wears a head-mounted display device.
- Such a method is convenient for the user if the regions of interest are within the user's visual field. But if they are not, the user cannot know in advance how much time he will spend searching for them, because he does not know in which direction or how far away the regions of interest are.
- As such, in a 360-degree panoramic video, searching for regions of interest that are not in the user's visual field will adversely affect the user's visual experience in virtual reality and may even cause the user to miss important events, such as the appearance of other, more interesting virtual objects.
- Therefore, overcoming the above-described drawbacks has become a critical issue.
- In view of the above-described drawbacks, the present disclosure provides a method and an electronic device for enhancing efficiency of searching for a region of interest in virtual reality (a virtual environment). In an embodiment, the region of interest is presented as a picture-in-picture (PIP) in a visible scene so as to save time for the user to search for the region of interest.
- The present disclosure also provides a method for enhancing efficiency of searching for a region of interest in a virtual environment, wherein the virtual environment is comprised of a visible scene and an invisible scene. The method comprises the steps of: (1) locating a position of the region of interest in the invisible scene and a position of a center of the visible scene; and (2) displaying the region of interest as a PIP of the visible scene at an intersection of a line connecting the position of the region of interest and the position of the center of the visible scene and a boundary of the visible scene.
- In an embodiment, the position of the region of interest is a center-of-mass coordinate of the region of interest.
- In an embodiment, the method further comprises projecting the visible scene and the PIP on a display device through a projecting means.
- In an embodiment, the projecting means is selected from the group consisting of equirectangular projection, cube mapping and equi-angular cubemapping.
- In an embodiment, the method further comprises rotating the PIP around a center thereof to generate a rotated picture such that the rotated picture has an outer edge facing the region of interest and an inner edge facing the center of the visible scene.
- In an embodiment, the method further comprises fixing the outer edge of the rotated picture and tilting the rotated picture by a tilting angle Φ along a Z-axis in a three-dimensional space to generate a simulation picture.
- In an embodiment, the tilting angle is varied based on the distance between the region of interest and the center of the visible scene.
- In an embodiment, a maximum angle of the tilting angle is 105 degrees and a minimum angle of the tilting angle is 0 degrees.
- The present disclosure provides another method for enhancing efficiency of searching for a region of interest in a virtual environment, wherein the virtual environment is comprised of a visible scene and an invisible scene. The method comprises: (1) locating a position of the region of interest in the invisible scene and a position of a center of the visible scene; and (2) displaying a guiding symbol at an intersection of a line connecting the position of the region of interest and the position of the center of the visible scene and a boundary of the visible scene, wherein the guiding symbol points to the region of interest.
- In an embodiment, the guiding symbol is an arrow.
- In an embodiment, the distance of the region of interest from the center of the visible scene is indicated by a length, a width, an area or a shape of the arrow.
- In an embodiment, the longer the arrow is, the greater the distance between the region of interest and the center of the visible scene is, and the shorter the arrow is, the less the distance between the region of interest and the center of the visible scene is.
- The present disclosure provides a further method for enhancing efficiency of searching for regions of interest in a virtual environment, wherein the virtual environment is comprised of a visible scene and an invisible scene. The method comprises the steps of: (1) calculating a first distance between a first region of interest in the invisible scene and a center of the visible scene and calculating a second distance between a second region of interest in the invisible scene and the center of the visible scene, wherein the first region of interest and the second region of interest correspond to a first PIP and a second PIP in the visible scene, respectively, and the second PIP partially overlaps with the first PIP, and wherein the first distance is greater than the second distance; and (2) based on the first distance and the second distance, using linear interpolation to find a position between the center of the visible scene and the first PIP for displaying the second PIP.
- In an embodiment, the linear interpolation of step (2) is based on an equation pip_near=pos_center+(pip_far−pos_center)×dist_near/dist_far, wherein pip_near represents a position for displaying the second PIP, pip_far represents a position of the first PIP, pos_center represents the center of the visible scene, dist_near represents the second distance, and dist_far represents the first distance.
- The present disclosure further provides an electronic device for enhancing efficiency of searching for a region of interest in a virtual environment, which comprises: a storage unit for storing computer readable program codes; and a processor executing the computer readable program codes to implement a method for enhancing efficiency of searching for a region of interest in a virtual environment.
- It should be noted that the subject matter described above can be implemented as a computer-controlled device, a computer program, a computer system, or an artifact such as a computer readable storage medium.
- The above and other features and advantages of the present disclosure will become apparent from the following detailed description taken in conjunction with the accompanying drawings. The description is illustrative and not intended to limit the present disclosure.
- FIG. 1 is a schematic diagram showing presentation of a region of interest as a picture-in-picture (PIP) in a visible scene on a mobile device for enhancing efficiency of searching for the region of interest according to the present disclosure;
- FIG. 2 is a schematic diagram showing presentation of a region of interest of an invisible scene as a PIP in a visible scene according to the present disclosure;
- FIG. 3 is a schematic flow diagram showing a method for presenting a region of interest as a PIP in a visible scene according to an embodiment of the present disclosure;
- FIG. 4 is a schematic flow diagram showing a method for simulation processing of a PIP in a visible scene according to an embodiment of the present disclosure;
- FIGS. 5a to 5d are schematic diagrams showing variation of a visible scene and a PIP when the PIP is simulation processed according to an embodiment of the present disclosure;
- FIG. 6 is a schematic diagram showing application of a guiding symbol in a visible scene for enhancing efficiency of searching for a region of interest according to an embodiment of the present disclosure;
- FIG. 7 is a schematic flow diagram showing a method for applying a guiding symbol in a visible scene according to an embodiment of the present disclosure;
- FIGS. 8a and 8b are schematic diagrams showing separation of partially overlapped PIPs in a visible scene for enhancing efficiency of searching for regions of interest according to an embodiment of the present disclosure;
- FIG. 9 is a schematic flow diagram showing a method for separating partially overlapped PIPs according to an embodiment of the present disclosure; and
- FIG. 10 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
- The following illustrative embodiments are provided to illustrate the present disclosure; these and other advantages and effects will become apparent to those skilled in the art after reading this specification.
- It should be noted that the drawings are not intended to limit the present disclosure. Various modifications and variations can be made without departing from the spirit of the present disclosure.
- Further, some or all operations and/or equivalent operations of the disclosed methods or processes can be performed by executing computer readable instructions on a computer storage medium. Computer readable instructions refer to routines, application programs, application modules, program modules, programs, components, data structures, algorithms and so on. Computer readable instructions can be implemented on various system configurations, including a single-processor or multiprocessor system, a minicomputer, a mainframe computer, a personal computer, a palmtop computing device, a programmable consumer electronic device based on a microprocessor, or a combination thereof.
- Therefore, it should be understood that the logical operations described herein are implemented as a sequence of acts implemented by a computer or program modules running on a computing system and/or as interconnected machine logic circuits or circuit modules within the computing system. The implementation is a matter of choice dependent on the performance and other requirements of the computing system. Therefore, the logical operations described herein are referred to variously as states, operations, structural devices, acts or modules. Such operations, structural devices, acts and modules can be implemented in software, firmware, special purpose digital logic or any combination thereof.
- The present disclosure provides a method for enhancing efficiency of searching for regions of interest in a virtual environment.
FIG. 1 is a schematic diagram showing presentation of a region of interest as a picture-in-picture (PIP) in a visible scene on a mobile device for enhancing efficiency of searching for the region of interest according to the present disclosure.
- As shown in FIG. 1, when the user uses a mobile device 100 to experience a virtual environment 110 consisting of a visible scene 120 and an invisible scene 130, and a region 140 of interest such as an extraterrestrial, a balloon, a cat, a dog, an airplane or an idol singer is in the invisible scene 130 of the virtual environment 110, the present disclosure can present the region 140 of interest of the invisible scene 130 as a picture-in-picture (PIP) 150 in the visible scene 120.
- The user can view the PIP 150 in the visible scene 120 to know what region 140 of interest exists in the invisible scene 130 and in which direction the region 140 of interest is located relative to the visible scene 120, thereby saving time for the user to search for the region 140 of interest in the invisible scene 130.
- FIG. 2 is a schematic diagram showing presentation of a region of interest of an invisible scene as a PIP in a visible scene according to the present disclosure.
- As shown in FIG. 2, in a virtual environment 290 consisting of a visible scene 200 and an invisible scene 280, the visible range of the visible scene 200 is a region from a center 210 to an edge 250 of the visible scene 200, and a region 220 of interest is located in the invisible scene 280.
- In an embodiment, when introduced into the visible scene 200 and displayed as a PIP, the region 220 of interest of the invisible scene 280 is displayed at a position along a line 240 connecting the position of the region 220 of interest and the position of the center 210.
- As such, when seeing the PIP at a position 230, the user will know that the region 220 of interest is located along the line 240 and hence can search for the region 220 of interest along the line 240.
- In an embodiment, when introduced into the visible scene 200 and displayed as a PIP, the region 220 of interest of the invisible scene 280 is displayed on the edge 250 of the visible scene 200.
- As such, when the user wants to search the invisible scene 280 for a plurality of regions of interest, the regions of interest are displayed as corresponding PIPs on the edge 250 of the visible scene 200. Since these PIPs are not presented in the main view region of the visible scene 200 (for example, the region near the center 210), the visual experience of the user will not be adversely affected.
- It should be understood that the presentation of a PIP on the edge 250 of the visible scene 200 includes, but is not limited to, the presentation of the entire PIP inside the visible scene 200 (that is, the entire PIP is presented inside the edge 250) or the presentation of a portion of the PIP inside the visible scene 200 (that is, a portion of the PIP falls outside the edge 250 and is not presented inside the visible scene 200).
FIG. 3 is a schematic flow diagram showing a method for presenting a region of interest as a PIP in a visible scene according to an embodiment of the present disclosure.
- First, at step S310, the region 220 of interest in the invisible scene 280 is located to determine its position in the virtual environment 290. In an embodiment, a center-of-mass coordinate of the region 220 of interest (for example, its longitude and latitude in the virtual environment 290) is used to represent the position of the region 220 of interest in the virtual environment 290.
- In an embodiment, the region 220 of interest may be manually chosen, and its position in the virtual environment 290 is manually marked and recorded so as to locate the position of the region 220 of interest.
- Alternatively, recognition techniques using feature engineering or artificial intelligence such as deep learning may be used to find what regions 220 of interest exist in the invisible scene 280 and locate their positions.
- Thereafter, at step S320, the position of the center 210 of the visible scene 200 in the virtual environment 290 is located.
- Then, at step S330, the region 220 of interest is displayed as a PIP of the visible scene 200 at an intersection (i.e., position 230) of the line connecting the position of the region 220 of interest and the position of the center 210 of the visible scene 200 and the edge 250 of the visible scene 200.
- In an embodiment, the visible scene 200 and the PIP may be projected on a display of an electronic device through a projecting means. The electronic device may be, but is not limited to, a desktop computer, a notebook computer, a smart phone or a wearable device.
- Further, the visible scene 200 and the PIP may be projected on the display through equirectangular projection, cube mapping, equi-angular cubemapping and so on.
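As an illustration of step S330, the intersection point can be computed in a 2-D screen plane. The sketch below is an assumption-laden rendering (a rectangular visible scene of half-width `half_w` and half-height `half_h` centered at `center`; all names are hypothetical), not the patent's own implementation:

```python
def pip_position(center, roi, half_w, half_h):
    """Intersection of the ray from the scene center toward the region of
    interest with the rectangular boundary of the visible scene."""
    dx, dy = roi[0] - center[0], roi[1] - center[1]
    if dx == 0 and dy == 0:
        return center  # degenerate case: ROI projects onto the center itself
    # Scale factor that stretches (dx, dy) until it first touches an edge.
    sx = half_w / abs(dx) if dx else float("inf")
    sy = half_h / abs(dy) if dy else float("inf")
    s = min(sx, sy)
    return (center[0] + dx * s, center[1] + dy * s)
```

For a scene centered at the origin with half-extents (4, 3), a region of interest at (10, 0) yields a PIP anchored at (4.0, 0.0) on the right edge, on the line connecting the center and the region of interest.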
FIG. 4 is a schematic flow diagram showing a method for simulation processing of a PIP in a visible scene according to an embodiment of the present disclosure.
- First, at step S410, the PIP at the display position 230 is rotated around its center to generate a rotated picture. In an embodiment, an outer edge of the rotated picture faces the region 220 of interest in the invisible scene 280 and an inner edge of the rotated picture faces the center 210 of the visible scene 200.
- In an embodiment, the direction and angle of the rotation may be changed according to the orientation of the region 220 of interest relative to the center 210 of the visible scene 200.
- For example, if the region 220 of interest is located to the east, southeast or northeast of the center 210 of the visible scene 200, the PIP may be rotated clockwise (or counterclockwise) around its center to generate the rotated picture. Conversely, if the region 220 of interest is located to the west, southwest or northwest of the center 210 of the visible scene 200, the PIP may be rotated counterclockwise (or clockwise) around its center to generate the rotated picture.
- Thereafter, at step S420, the outer edge of the rotated picture is fixed and then the rotated picture is tilted by a tilting angle Φ along a Z-axis in the three-dimensional space to generate a simulation picture.
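One way to realize the rotation of step S410 is to orient the PIP along the line connecting the scene center and the region of interest. The sketch below assumes flat 2-D coordinates and a PIP whose top edge is the outer edge; these conventions are illustrative assumptions, not details taken from the disclosure:

```python
import math

def rotation_angle_deg(center, roi):
    """Rotation that makes the PIP's outer (top) edge face the region of
    interest and its inner (bottom) edge face the scene center."""
    dx, dy = roi[0] - center[0], roi[1] - center[1]
    bearing = math.degrees(math.atan2(dy, dx))  # direction of the ROI
    return bearing - 90.0  # the top edge starts at +90 deg, so subtract it
```

Under these assumptions, a region of interest directly above the center needs no rotation, while one due east gives −90 degrees, i.e. a clockwise rotation, consistent with the east-to-clockwise convention in the example above.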
- Compared with a PIP that is not processed, the simulation picture presents more 3D spatial information. As such, when experiencing virtual reality, the user has a more realistic spatial experience and captures visual information of the region of interest more easily.
- In an embodiment, the tilting angle is varied based on the distance between the region 220 of interest and the center 210 of the visible scene 200.
- For example, the longer the distance between the region 220 of interest and the center 210 of the visible scene 200 is, the greater (or the smaller) the tilting angle becomes.
- As such, when viewing a plurality of simulation pictures corresponding to different regions 220 of interest, the user can determine the distances between the regions 220 of interest and the center 210 of the visible scene 200 according to the tilting angles.
- In an embodiment, a maximum angle of the tilting angle may be 105 degrees (maxTilt) and a minimum angle of the tilting angle may be 0 degrees (minTilt). Since the tilting angle is limited to the range of minTilt to maxTilt, the present disclosure avoids too large a tilting angle that could adversely affect the user's visual experience.
- In an embodiment, the tilting angle Φ can be calculated according to the following equation:
-
Φ = maxTilt + (0 − maxTilt) × (dist_max − dist)/dist_max,
- wherein dist_max represents the distance between the farthest region of interest of the invisible scene 280 and the center 210 of the visible scene 200, and dist represents the distance between the current region of interest and the center 210 of the visible scene 200.
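A direct transcription of this equation (variable names follow the text above; the 105-degree maxTilt is the bound stated in the embodiment):

```python
MAX_TILT = 105.0  # degrees; the embodiment's stated maximum tilting angle

def tilting_angle(dist, dist_max):
    """Tilt grows from 0 (ROI at the center) to MAX_TILT (farthest ROI)."""
    return MAX_TILT + (0.0 - MAX_TILT) * (dist_max - dist) / dist_max
```

The farthest region of interest (dist equal to dist_max) is tilted by the full 105 degrees, a region at the center is not tilted at all, and intermediate distances scale linearly between the two.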
FIGS. 5a to 5d are schematic diagrams showing variation of a visible scene and a PIP when the PIP is simulation processed according to an embodiment of the present disclosure.
- Referring to FIG. 5a, the PIP 235 in the visible scene 200 is located at the intersection of the edge 250 of the visible scene 200 and the line 240 and corresponds to a region of interest (not shown) that is located at a position in the outwardly extending direction of the line 240.
- Referring to FIG. 5b, the PIP 235 in the visible scene 200 is processed at step S410 to generate a rotated picture 236. The outer edge 2361 of the rotated picture 236 faces the region of interest outside the visible scene 200 while the inner edge 2362 of the rotated picture 236 faces the center 210 of the visible scene 200.
- Referring to FIG. 5c, the rotated picture 236 is processed at step S420 to generate a simulation picture 237. Φ represents the tilting angle along a Z-axis in the three-dimensional space. To facilitate viewing of the spatial effect of the simulation picture 237 generated by tilting the rotated picture 236 along a Z-axis in the three-dimensional space, the viewing angle of the visible scene 200 and the simulation picture 237 are changed by Φ degrees, as shown in FIG. 5c.
- Referring to FIG. 5d, the visible scene 200 and the simulation picture 237 of FIG. 5c are viewed in the original viewing angle.
FIG. 6 is a schematic diagram showing application of a guiding symbol in a visible scene for enhancing efficiency of searching for a region of interest according to an embodiment of the present disclosure.
- Referring to FIG. 6, if the region 220 of interest is located outside the visible scene 200 (i.e., in the invisible scene 280), a guiding symbol 201 is applied to guide the user toward the region 220 of interest so as to save time.
- The number of guiding symbols 201 may be varied according to the number of regions 220 of interest. For example, if three regions 220 of interest exist in the invisible scene 280, three guiding symbols 201 are displayed in the visible scene 200. If one region 220 of interest is added to (or deleted from) the invisible scene 280, the number of guiding symbols 201 becomes 4 (or 2).
FIG. 7 is a schematic flow diagram showing a method for applying a guiding symbol in a visible scene according to an embodiment of the present disclosure. Referring to FIG. 7, steps S710 and S720 are substantially the same as steps S310 and S320 of FIG. 3, respectively, but step S730 is different from S330.
- At step S730, a guiding symbol 201 is displayed at an intersection (i.e., position 230) of the line connecting the position of the region 220 of interest and the position of the center 210 of the visible scene 200 and the edge 250 of the visible scene 200. The guiding symbol points to the region 220 of interest.
- In an embodiment, an arrow is used as the guiding symbol 201 pointing to the region 220 of interest. Further, the length, width, area and/or shape of the arrow may be used to indicate the distance of the region 220 of interest from the center 210 of the visible scene 200.
- For example, a long arrow indicates a long distance between the region 220 of interest and the center 210 of the visible scene 200, and a short arrow indicates a short distance. In another embodiment, a wide arrow indicates a long distance and a narrow arrow indicates a short distance. In a further embodiment, an arrow with a large area indicates a long distance and an arrow with a small area indicates a short distance.
-
FIGS. 8a and 8b are schematic diagrams showing separation of partially overlapped PIPs in a visible scene for enhancing efficiency of searching for regions of interest according to an embodiment of the present disclosure.
- As shown in FIG. 8a, if the regions of interest are a star 810 and a triangle 820, a star PIP corresponding to the star 810 and presented at a location 815 and a triangle PIP corresponding to the triangle 820 and presented at a location 825 are partially overlapped.
- As the number of the regions of interest increases, the number of the PIPs in the visible scene increases accordingly and the PIPs may be overlapped. As such, it becomes difficult for the user to capture visual information of the regions of interest, such as their orientations, from the overlapped PIPs.
- According to the present disclosure, linear interpolation is used to find a position 826 (as shown in FIG. 8b) for presenting the triangle PIP, thereby overcoming the overlapping problem and enhancing the efficiency of searching for the regions of interest.
FIG. 9 is a schematic flow diagram showing a method for separating partially overlapped PIPs according to an embodiment of the present disclosure.
- At step S910, a first distance between a far region of interest such as the star 810 in the invisible scene 280 and the center 210 of the visible scene 200 is calculated.
- At step S920, a second distance between a near region of interest such as the triangle 820 in the invisible scene 280 and the center 210 of the visible scene 200 is calculated.
- It should be noted that although the first distance is greater than the second distance, the star PIP presented at the position 815 partially overlaps with the triangle PIP presented at the position 825.
- At step S930, based on the first distance and the second distance, linear interpolation is used to find a position 826 between the center 210 of the visible scene 200 and the position 815 for displaying the triangle PIP corresponding to the triangle 820 of the invisible scene 280.
- For example, the linear interpolation is based on the following equation:
-
pip_near = pos_center + (pip_far − pos_center) × dist_near/dist_far,
- wherein pip_near represents the position for presenting the PIP corresponding to the region of interest near the center 210, pip_far represents the position for presenting the PIP corresponding to the region of interest far from the center 210, pos_center represents the center 210 of the visible scene 200, dist_near represents the distance of the near region of interest from the center 210, and dist_far represents the distance of the far region of interest from the center 210.
- It should be noted that the above equation is used only when the far region of interest and the near region of interest are located in the same direction.
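Read as 2-D coordinates, the interpolation places the near PIP on the segment from the scene center to the far PIP, at the fraction dist_near/dist_far of the way out. A minimal sketch (tuple coordinates are an assumed representation):

```python
def separate_pips(pos_center, pip_far, dist_near, dist_far):
    """pip_near = pos_center + (pip_far - pos_center) * dist_near / dist_far."""
    t = dist_near / dist_far  # < 1, since the near ROI is closer than the far one
    return tuple(c + (f - c) * t for c, f in zip(pos_center, pip_far))
```

If the far PIP sits at (10, 0) with dist_far = 10 and the near region has dist_near = 5, the near PIP is displayed halfway along the segment, at (5.0, 0.0), so the two PIPs no longer overlap.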
- In an embodiment, an area for presenting PIPs may be defined in the visible scene 200 so as to prevent the PIPs from occupying the main view region and adversely affecting the visual experience of the user.
- Also, it is ensured that the PIPs are completely presented inside the edge 250 and hence the user captures complete visual information of the regions of interest.
- In an embodiment, a maximum distance between a display position and the center 210 is set to be MaxShowDist (for example, 35%, 40% or 45% of the length (width) of the visible scene 200) and a minimum distance between a display position and the center 210 is set to be MinShowDist (for example, 5%, 10% or 15% of the length (width) of the visible scene 200).
- In an embodiment, the distance of a region of interest A (not shown) of the invisible scene 280 from the center 210 is dist_max and the distance of the display position pip_max of the corresponding PIP from the center 210 is MaxShowDist. Further, the distance of a region of interest B (not shown) of the invisible scene 280 from the center 210 is dist_min and the distance of the display position pip_min of the corresponding PIP from the center 210 is MinShowDist.
- For a region of interest C (not shown) in the same direction as the region A of interest and the region B of interest, the display position of the corresponding PIP thereof can be calculated through the following equation:
-
pip_candidate=pip_min+(MaxShowDist−MinShowDist)×dist_max/dist_candidate -
FIG. 10 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
- As shown in FIG. 10, the electronic device 1000 includes a storage unit 1100, a sensing unit 1200, a computing unit 1300 and a display unit 1400. The electronic device 1000 includes, but is not limited to, a desktop computer, a notebook computer, a smart phone or a wearable electronic device.
- The storage unit 1100 may be a computer readable memory device. The computer readable storage medium can be implemented via one or more of a volatile computer memory, a non-volatile memory, a hard drive, a flash drive, a floppy disk or a compact disk, and a comparable medium. The storage unit 1100 may store computer executable instructions for performing the above-described methods, and data related to the virtual environment 290 such as picture information and position information of the visible scene 200, the invisible scene 280 and the regions 220 of interest.
- The sensing unit 1200 detects physical information of the electronic device, including but not limited to the orientation of the electronic device (for example, the electronic device facing east, west, south or north) and the state of the electronic device relative to the horizontal plane (for example, the electronic device having an angle of elevation or depression relative to the horizontal plane). In some embodiments, the sensing unit 1200 can be implemented as an accelerometer, a gyroscope and other devices capable of detecting the physical information of the electronic device.
- The computing unit 1300 may be implemented in the form of a computer processor, including but not limited to a single core processor, a dual microprocessor, a multi-core processor, AMD's Athlon, Duron and Opteron processors, and Intel Celeron, Core 2 Duo, Core 2 Quad, Core i3, Core i5 and Core i7 processors. The computing unit 1300 controls operation of related hardware and software and executes the computer executable instructions of the storage unit 1100 to implement the above-described methods.
- The display unit 1400 may be implemented as various physical and/or virtual display devices, including but not limited to a computer screen, a laptop computer screen, a mobile device screen, a PDA screen, a tablet screen, and a display screen of a wearable electronic device. The display unit 1400 may display data related to searching for regions of interest in a virtual environment. For example, the display unit 1400 may display a visible scene and PIPs.
- The above descriptions of the detailed embodiments are provided only to illustrate preferred implementations according to the present disclosure and are not intended to limit the scope of the present disclosure. All modifications and variations completed by those with ordinary skill in the art should fall within the scope of the present disclosure as defined by the appended claims.
Claims (15)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW107107458A TWI707306B (en) | 2018-03-06 | 2018-03-06 | Method and device for enhancing the efficiency of searching regions of interest in a virtual environment |
TW107107458 | 2018-03-06 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190279015A1 true US20190279015A1 (en) | 2019-09-12 |
US10990843B2 US10990843B2 (en) | 2021-04-27 |
Family
ID=67844552
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/012,582 Active 2039-05-28 US10990843B2 (en) | 2018-03-06 | 2018-06-19 | Method and electronic device for enhancing efficiency of searching for regions of interest in a virtual environment |
Country Status (2)
Country | Link |
---|---|
US (1) | US10990843B2 (en) |
TW (1) | TWI707306B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11197066B2 (en) * | 2018-06-01 | 2021-12-07 | At&T Intellectual Property I, L.P. | Navigation for 360-degree video streaming |
US11218758B2 (en) | 2018-05-17 | 2022-01-04 | At&T Intellectual Property I, L.P. | Directing user focus in 360 video consumption |
US11651546B2 (en) | 2018-05-22 | 2023-05-16 | At&T Intellectual Property I, L.P. | System for active-focus prediction in 360 video |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102009037835B4 (en) * | 2009-08-18 | 2012-12-06 | Metaio Gmbh | Method for displaying virtual information in a real environment |
US10303945B2 (en) * | 2012-12-27 | 2019-05-28 | Panasonic Intellectual Property Corporation Of America | Display method and display apparatus |
KR102119659B1 (en) * | 2013-09-23 | 2020-06-08 | 엘지전자 주식회사 | Display device and control method thereof |
US20170053545A1 (en) * | 2015-08-19 | 2017-02-23 | Htc Corporation | Electronic system, portable display device and guiding device |
US20180288354A1 (en) * | 2017-03-31 | 2018-10-04 | Intel Corporation | Augmented and virtual reality picture-in-picture |
Also Published As
Publication number | Publication date |
---|---|
TW201939438A (en) | 2019-10-01 |
TWI707306B (en) | 2020-10-11 |
US10990843B2 (en) | 2021-04-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11238513B1 (en) | Methods and device for implementing a virtual browsing experience | |
US11941762B2 (en) | System and method for augmented reality scenes | |
JP5693966B2 (en) | Projecting graphic objects on interactive uneven displays | |
CN114080628A (en) | Textured mesh construction | |
US20140002443A1 (en) | Augmented reality interface | |
Jo et al. | Aroundplot: Focus+context interface for off-screen objects in 3D environments |
US10990843B2 (en) | Method and electronic device for enhancing efficiency of searching for regions of interest in a virtual environment | |
JP5592011B2 (en) | Multi-scale 3D orientation | |
US20150049086A1 (en) | 3D Space Content Visualization System | |
US11054963B2 (en) | Method for displaying navigator associated with content and electronic device for implementing the same | |
Andri et al. | Adoption of mobile augmented reality as a campus tour application | |
US8223145B2 (en) | Method and system for 3D object positioning in 3D virtual environments | |
US20110261048A1 (en) | Electronic device and method for displaying three dimensional image | |
US7636089B2 (en) | Photo mantel view and animation | |
WO2018233623A1 (en) | Method and apparatus for displaying image | |
US11893696B2 (en) | Methods, systems, and computer readable media for extended reality user interface | |
US9245366B1 (en) | Label placement for complex geographic polygons | |
US8570329B1 (en) | Subtle camera motions to indicate imagery type in a mapping system | |
US20140002494A1 (en) | Orientation aware application demonstration interface | |
US11532138B2 (en) | Augmented reality (AR) imprinting methods and systems | |
CN108958609A (en) | Generation method, device, storage medium and the terminal device of three-dimensional panorama surface plot | |
Lai et al. | Mobile edutainment with interactive augmented reality using adaptive marker tracking | |
Trapp et al. | Strategies for visualising 3D points-of-interest on mobile devices | |
JP6980802B2 (en) | Methods, equipment and computer programs to provide augmented reality | |
US8797315B1 (en) | Segmented editor for tours of a geographic information system, and applications thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: NATIONAL TAIWAN UNIVERSITY, TAIWAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIN, YUNG-TA;LIAO, YI-CHI;TENG, SHAN-YUAN;AND OTHERS;SIGNING DATES FROM 20180523 TO 20180524;REEL/FRAME:046260/0277
Owner name: MEDIATEK INC., TAIWAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIN, YUNG-TA;LIAO, YI-CHI;TENG, SHAN-YUAN;AND OTHERS;SIGNING DATES FROM 20180523 TO 20180524;REEL/FRAME:046260/0277
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |