WO2014178477A1 - Head mounted display and content providing method using the same - Google Patents

Head mounted display and content providing method using the same

Info

Publication number
WO2014178477A1
Authority
WO
WIPO (PCT)
Prior art keywords
hmd
eye image
image
present
right eye
Prior art date
Application number
PCT/KR2013/004988
Other languages
English (en)
Korean (ko)
Inventor
박수왕
Original Assignee
인텔렉추얼디스커버리 주식회사
(주)소셜네트워크
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 인텔렉추얼디스커버리 주식회사 and (주)소셜네트워크
Publication of WO2014178477A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0132Head-up displays characterised by optical features comprising binocular systems
    • G02B2027/0134Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking

Definitions

  • the present invention relates to a head mounted display (HMD) and a content providing method using the same, and more particularly, to an HMD that provides content by separating a left eye image and a right eye image, and a content providing method using the same.
  • Head mounted display refers to various digital devices that can be worn on the head like glasses to provide multimedia content.
  • various wearable computers have been developed, and the HMD is also widely used.
  • the HMD can be combined with augmented reality technology and N screen technology to provide various conveniences to the user beyond simple display functions.
  • the user wearing the HMD can interact with the HMD in various situations. Accordingly, the HMD needs a simpler user interface to provide a better context-aware service to the user.
  • An object of the present invention is to provide a better context-aware service to a user wearing an HMD.
  • Another object of the present invention is to display content provided through the HMD so that only the user wearing the HMD can recognize it.
  • Another object of the present invention is to provide realistic video content to the user wearing the HMD.
  • According to an embodiment of the present invention, a content providing method using a head mounted display (HMD) comprises: obtaining information of a target object; acquiring an image of an external area using a camera unit; scanning the acquired image for the target object; displaying augmented reality information of the target object when the target object is detected in the acquired image; and generating scanning completion information of the external area.
  • A content providing method using an HMD according to another embodiment of the present invention comprises: obtaining a source image; separating the source image into a left eye image and a right eye image, wherein the left eye image and the right eye image each constitute at least a part of the source image; and displaying the separated left eye image on a first display unit of the HMD and the separated right eye image on a second display unit of the HMD.
  • A video content providing method using an HMD according to still another embodiment of the present invention comprises: obtaining video content; acquiring orientation information of the HMD using a sensor unit; displaying the video content on a display unit based on the obtained orientation information; obtaining relative distance information between at least one virtual object included in the video content and the HMD; and providing feedback to the HMD when the virtual object collides with the HMD.
  • the user may be provided with a context awareness service including augmented reality information through the HMD.
  • the user may receive, through the HMD, the detection result of the target object that he or she wants to find as augmented reality, and thus may conveniently find the target object.
  • the scanning completion information of the corresponding area is provided so that the user does not need to repeatedly search for the same area.
  • In addition, an image in which the separated left eye image and right eye image overlap each other can be provided to the user wearing the HMD.
  • At the same time, the content can be provided in a form that a user who is not wearing the HMD cannot readily recognize.
  • the HMD adjusts the display orientation of the video content based on the orientation information, thereby providing the user with realistic video content, as if the user were viewing a real object.
  • the HMD can check whether the HMD or the real object collides with the virtual object of the video content and provide feedback correspondingly. Therefore, the HMD of the present invention can provide more realistic multimedia content to the user.
  • FIG. 1 is a block diagram illustrating an HMD according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram illustrating an HMD according to an embodiment of the present invention.
  • FIG. 3 is a schematic view showing an HMD according to another embodiment of the present invention.
  • FIG. 4 is a flowchart illustrating a content providing method according to an embodiment of the present invention.
  • FIG. 5 is a flowchart illustrating a content providing method according to another embodiment of the present invention.
  • FIGS. 6 to 9 are diagrams illustrating a specific embodiment of finding a target object using an HMD according to an embodiment of the present invention.
  • FIG. 10 is a flowchart illustrating a content providing method according to an embodiment of the present invention.
  • FIG. 11 is a flowchart illustrating a content providing method according to another embodiment of the present invention.
  • FIGS. 12 to 16 illustrate a specific embodiment in which the HMD separates and displays a left eye image and a right eye image according to an embodiment of the present invention.
  • FIG. 17 illustrates an embodiment in which the HMD of the present invention acquires a source image.
  • FIG. 18 is a flowchart illustrating a method of providing video content according to an embodiment of the present invention.
  • FIGS. 19 to 22 illustrate specific methods of providing video content by the HMD according to an embodiment of the present invention.
  • FIG. 1 is a block diagram illustrating an HMD 100 according to an embodiment of the present invention.
  • the HMD 100 of the present invention may include a processor 110, a display unit 120, a camera unit 130, a communication unit 140, a sensor unit 150, and a storage unit 160.
  • the display unit 120 outputs an image on the display screen.
  • the display unit 120 outputs content executed in the processor 110 or outputs an image based on a control command of the processor 110.
  • the display unit 120 may display an image based on a control command of the external digital device 200 connected to the HMD 100.
  • the display unit 120 may display content being executed by the external digital device 200 connected to the HMD 100.
  • the HMD 100 may receive data from the external digital device 200 through the communication unit 140 and output an image based on the received data.
  • the camera unit 130 senses an image and transmits the sensed image to the processor.
  • the camera unit 130 may include a front camera unit (not shown) and an eye tracking camera unit (not shown).
  • the front camera unit may detect an image of the front direction of the HMD and provide it to the processor.
  • the eye tracking camera unit may detect the pupil position of the user wearing the HMD 100 to track the eyes of the user.
  • the camera unit 130 includes various types of imaging devices such as a depth camera module and an infrared camera module, as well as a general charge coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) camera module.
  • the communication unit 140 may communicate with the external digital device 200 or the server using various protocols to transmit / receive data.
  • the communication unit 140 may access a server or a cloud through a network, and transmit / receive digital data, for example, content.
  • the HMD 100 may connect to the external digital device 200 using the communication unit 140.
  • the HMD 100 may receive display output information of content being executed by the connected external digital device 200 in real time, and output an image to the display unit 120 using the received information.
  • the sensor unit 150 may transmit a user input or an environment recognized by the HMD 100 to the processor 110 using at least one sensor mounted on the HMD 100.
  • the sensor unit 150 may include a plurality of sensing means.
  • the plurality of sensing means may include sensing means such as a gravity sensor, a geomagnetic sensor, a motion sensor, a gyro sensor, an acceleration sensor, an infrared sensor, an inclination sensor, an illuminance sensor, a proximity sensor, an altitude sensor, an olfactory sensor, a temperature sensor, a depth sensor, a pressure sensor, a bending sensor, an audio sensor, a video sensor, a microphone array, a global positioning system (GPS) sensor, and a touch sensor.
  • the sensor unit 150 collectively refers to the various sensing means described above, and may sense various inputs of the user and the environment of the user, and may transmit a sensing result to allow the processor 110 to perform an operation accordingly.
  • the above-described sensors may be included in the HMD 100 as a separate element or integrated into at least one or more elements.
  • the sensor unit detects orientation information of the HMD 100 by using at least one of the various sensing means listed above, and transmits the detected orientation information to the processor 110.
  • the orientation information includes angle information of the HMD 100 with respect to at least one axis among the x, y, and z axes.
  • the HMD 100 may include a detecting module for detecting a real object located in front of the HMD 100.
  • the detecting module detects a real object located in front of the HMD 100 and transmits position information of the corresponding real object to the processor 110.
  • the detecting module may include a camera unit 130.
  • the camera unit 130 may include a depth camera.
  • the HMD 100 may acquire an image through the camera unit 130 and perform a process such as image processing on the acquired image to detect a real object located in front of the HMD 100.
  • the HMD 100 may detect the relative distance between the HMD 100 and the real object.
  • the detecting module may include a distance sensor.
  • the HMD 100 may detect a real object in front of the HMD 100 using a distance sensor.
  • the distance sensor may be of an infrared type, an ultrasonic type, or any of various other types capable of sensing a distance.
  • the HMD 100 may detect the relative distance between the HMD 100 and the real object using the distance sensor.
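  • To make the role of the detecting module concrete, the following Python sketch models it as a component that reports the relative distance and position of a real object in front of the HMD, whether the measurement comes from a depth camera or from a distance sensor. This is an illustrative sketch only: the class names, the detect()/read_distance() interfaces and the fallback order are assumptions, not part of the disclosure.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RealObjectDetection:
        """Result the detecting module reports to the processor 110."""
        distance_m: float                    # relative distance between the HMD and the real object
        position: tuple = (0.0, 0.0, 0.0)    # HMD-centric (x, y, z) position of the real object

    class DetectingModule:
        """Hypothetical detecting module combining a depth camera and/or a distance sensor."""
        def __init__(self, depth_camera=None, distance_sensor=None):
            self.depth_camera = depth_camera
            self.distance_sensor = distance_sensor

        def detect_front_object(self) -> Optional[RealObjectDetection]:
            # Prefer the depth camera (image processing on the acquired image) ...
            if self.depth_camera is not None:
                return self.depth_camera.detect()          # assumed interface
            # ... otherwise fall back to a distance sensor (infrared, ultrasonic, ...).
            if self.distance_sensor is not None:
                d = self.distance_sensor.read_distance()   # assumed interface
                return RealObjectDetection(distance_m=d, position=(0.0, 0.0, d))
            return None                                    # nothing detected in front of the HMD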
  • the storage unit 160 may store digital data including various contents such as video, audio, photographs, documents, applications, and the like.
  • the storage unit 160 may include various digital data storage media such as a flash memory, a random access memory (RAM), and a solid state drive (SSD).
  • the storage unit 160 may store content received by the communication unit 140 from the external digital device 200 or the server.
  • the processor 110 of the present invention may execute contents of the HMD 100 itself or contents received through data communication. It can also run various applications and process data inside the device. In addition, the processor 110 may control each unit of the HMD 100 described above, and may control data transmission and reception between the units.
  • FIG. 1 is a block diagram of the HMD 100 according to an embodiment of the present invention, in which the separately shown blocks logically distinguish the elements of the device. Therefore, the elements of the above-described device may be mounted as a single chip or as a plurality of chips according to the design of the device.
  • the HMD 100 includes a display unit 120 and a camera unit 130.
  • the display unit 120 displays an image based on a control command of a processor (not shown) of the HMD 100.
  • the display unit 120 may include a display screen to allow an image to be formed on the display screen.
  • the display unit 120 may project an image at a specific position in the front direction of the HMD 100 without having a separate display screen.
  • the display unit 120 may include a first display unit 120a and a second display unit 120b.
  • the first display unit 120a displays an image for the left eye of the user wearing the HMD 100, and the second display unit 120b displays an image for the right eye of the user.
  • the HMD 100 may separate a source image for displaying on the display unit 120 into a left eye image and a right eye image, respectively. In this case, the separated left eye image and right eye image may be displayed on the first display unit 120a and the second display unit 120b, respectively.
  • the camera unit 130 senses an image and transmits the sensed image to the processor.
  • the camera unit 130 may include at least one of a front camera unit and an eye tracking camera unit.
  • the camera unit 130 may have a field of view area of a preset range. That is, the camera unit 130 acquires an image in the viewing area and transfers the acquired image to the processor.
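  • Since the camera unit 130 has a viewing area of a preset range, only points falling inside that field of view end up in the acquired image. The small test below illustrates such a viewing-area check; the 90-by-60-degree field of view and the axis convention (x right, y up, z forward) are assumptions made for the example.
    import math

    def in_viewing_area(point, h_fov_deg=90.0, v_fov_deg=60.0):
        """Return True if an HMD-centric 3D point lies inside the camera unit's preset field of view."""
        x, y, z = point
        if z <= 0:                                  # behind the HMD
            return False
        h_angle = math.degrees(math.atan2(x, z))    # horizontal angle from the forward axis
        v_angle = math.degrees(math.atan2(y, z))    # vertical angle from the forward axis
        return abs(h_angle) <= h_fov_deg / 2 and abs(v_angle) <= v_fov_deg / 2

    # A point 2 m ahead and slightly to the right is inside a 90-by-60-degree viewing area.
    print(in_viewing_area((0.5, 0.1, 2.0)))   # True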
  • the HMD 100 may be connected to at least one external digital device 200 and operate based on a control command of the connected external digital device 200.
  • a separate pairing or communication connection may be performed to connect the HMD 100 and the external digital device 200, and the pairing or communication connection operation may be triggered by a user input to the HMD 100 or to the external digital device 200.
  • for example, the HMD 100 may provide a separate button or user interface for pairing or communicating with the external digital device 200, and may perform the pairing or communication connection with the external digital device 200 upon receiving a user input on that button or user interface.
  • the external digital device 200 includes various types of digital devices capable of controlling the HMD 100.
  • the external digital device 200 includes a smart phone, a PC, a personal digital assistant (PDA), a notebook, a tablet PC, a media player, and the like, as well as various other types of digital devices capable of controlling the operation of the HMD.
  • the HMD 100 transmits / receives data with the external digital device 200 using various wired / wireless communication means.
  • usable wireless communication means include NFC (Near Field Communication), Zigbee, infrared communication, Bluetooth, and Wi-Fi, but the present invention is not limited thereto.
  • the HMD 100 may be connected to the external digital device 200 to perform communication by using any one of the above-mentioned communication means or a combination thereof.
  • FIGS. 4 and 5 are flowcharts illustrating a content providing method according to an embodiment of the present invention. Each step of FIGS. 4 and 5 described below may be performed by the HMD 100 of the present invention; that is, the processor 110 of the HMD 100 illustrated in FIG. 1 may control each step of FIGS. 4 and 5. Meanwhile, when the HMD 100 is controlled by the external digital device 200 as in the embodiment of FIG. 3, the HMD 100 may perform each step of FIGS. 4 and 5 based on a control command of the corresponding external digital device 200.
  • the HMD of the present invention acquires information of a target object (S410).
  • the target object is an object that the user of the HMD wants to find, and includes various types of visible objects such as physical objects, text, figures, trademarks, and characters.
  • Information of the target object may be obtained based on a user input for setting the target object.
  • the user may set the target object using various types of user inputs such as a touch input, a voice input, or a gaze input to the HMD.
  • the user input for setting the target object may be received through an external digital device connected to the HMD.
  • the HMD acquires information of the target object, for example, image information of the target object.
  • when the external digital device connected to the HMD receives a user input for setting the target object, the corresponding external digital device may transmit information of the selected target object to the HMD.
  • the HMD of the present invention acquires an image of an external area by using a camera unit (S420).
  • the external area is an area in the front direction of the HMD, and an image of the external area may be obtained by the front camera unit provided in the HMD.
  • the HMD of the present invention determines whether a target object is detected in the acquired image (S430). That is, the HMD may scan whether at least a part of the image of the target object exists in the acquired image based on the information of the target object acquired in operation S410.
  • the processor of the HMD can perform scanning by various means such as image processing.
  • the HMD of the present invention displays augmented reality information of the corresponding target object (S440).
  • the HMD may display the augmented reality information in an area corresponding to the location of the target object.
  • the augmented reality information may be information previously stored in a storage unit of the HMD or information received in real time through a communication unit.
  • when the scanning of the image of the external area acquired by the camera unit is completed as described above, the HMD of the present invention generates scanning completion information of the external area (S450).
  • the scanning completion information is information indicating that the detection of the target object for the corresponding external area is completed and may be provided in various embodiments.
  • the HMD may provide scanning completion information of the external area in the form of painting information overlaid on the external area.
  • the painting information may include at least one of a preset color, shade, and pattern.
  • the HMD overlays and displays the generated painting information on the corresponding external area, and may indicate that the external area is a pre-scanned area.
  • the HMD can use various sensing information to map the scanning completion information to the corresponding external region.
  • the HMD may identify an external area corresponding to the scanning completion information by using acquired image information, a sensing value of a gyro sensor, or a combination thereof.
  • the HMD may identify the external area corresponding to the scanning completion information based on the at least one detected marker.
  • the marker is a predetermined identifier for distinguishing an external area, and may include a barcode, a QR code, an RFID, a color code, an image code, and the like.
  • the HMD of the present invention can provide the corresponding scanning completion information to the user.
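  • The flow of FIG. 4 (S410 to S450) can be paraphrased as a simple scan loop: capture a frame of the external area, look for the target object in it, show augmented reality information if the object is found, and record that the area has been scanned. The sketch below only illustrates that control flow; the camera, detector, display and area-identification calls are hypothetical placeholders, not an API disclosed in this document.
    def provide_content(hmd, target_info):
        """Content providing flow of FIG. 4 (S410-S450), sketched with placeholder calls."""
        # S420: acquire an image of the external area with the front camera unit.
        frame = hmd.camera.capture_front_image()

        # S430: scan the acquired image for the target object (e.g. template or feature matching).
        match = hmd.scan_for_target(frame, target_info)

        # S440: if detected, overlay augmented reality information at the object's location.
        if match is not None:
            ar_info = hmd.lookup_ar_info(target_info)      # from the storage or communication unit
            hmd.display.show_overlay(ar_info, at=match.location)

        # S450: generate scanning completion information so this area is not searched again.
        area_id = hmd.identify_external_area(frame)        # image features, gyro value, or markers
        hmd.scanned_areas.add(area_id)
        return match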
  • Referring to FIG. 5, the HMD first determines whether an external area in the front direction of the HMD is a pre-scanned area (S510). Whether the external area is a pre-scanned area may be determined based on whether scanning completion information of the corresponding external area was previously generated. In this case, the HMD may identify the corresponding external area by using various sensing information. According to an embodiment of the present invention, the HMD may determine whether the external area is a previously scanned area by using image information acquired through the camera unit, a sensing value of the gyro sensor, or a combination thereof.
  • the HMD may determine whether the external area is a previously scanned area based on at least one detected marker.
  • the marker may include a barcode, a QR code, an RFID, a color code, an image code, and the like as described above, but the present invention is not limited thereto.
  • if the external area is a pre-scanned area, the HMD of the present invention displays the scanning completion information of the corresponding external area (S520).
  • the scanning completion information is information generated in step S450 of FIG. 4 and may include painting information overlaid on a corresponding external area.
  • the painting information may include at least one of a preset color, shade, and pattern.
  • the HMD overlays and displays the generated painting information on the corresponding external area, and may indicate that the external area is a pre-scanned area.
  • meanwhile, steps S510 and S520 of FIG. 5 may be performed before step S420 of FIG. 4, or may be performed after step S420. That is, according to an embodiment of the present invention, regardless of whether step S420 of FIG. 4 has been performed, the HMD may determine whether the external area in the front direction is a previously scanned area by using a sensing value of the gyro sensor. In addition, according to another embodiment of the present invention, the HMD may determine whether the external area is a previously scanned area by using an image acquired through the camera unit.
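  • Steps S510 and S520 then reduce to a lookup: identify the external area currently in front of the HMD (from detected markers, or from image and gyro information) and, if scanning completion information already exists for it, overlay the stored painting information. A hedged sketch with hypothetical helper names:
    def show_prescan_state(hmd, frame):
        """FIG. 5 flow (S510-S520): overlay painting information on already scanned areas."""
        # Identify the external area; markers such as QR codes or color codes may be used.
        markers = hmd.detect_markers(frame)                  # assumed helper
        if markers:
            area_ids = [hmd.area_for_marker(m) for m in markers]
        else:
            area_ids = [hmd.identify_external_area(frame)]   # image + gyro based fallback

        for area_id in area_ids:
            # S510: is this a pre-scanned area, i.e. was completion information generated before?
            painting = hmd.scan_completion_info.get(area_id)
            if painting is not None:
                # S520: overlay the stored color / shade / pattern on that area.
                hmd.display.overlay_painting(painting, area=area_id)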
  • FIGS. 6 to 9 illustrate a specific embodiment of searching for the target object 305 using the HMD 100 according to an embodiment of the present invention.
  • the user 10 wearing the HMD 100 sets a specific book as the target object 305 in order to find it in a library.
  • FIG. 6 illustrates an embodiment in which the HMD 100 of the present invention detects the target object 305 in the external area 300 in the forward direction.
  • the HMD 100 according to an embodiment of the present invention acquires an image of the external area 300 by using a camera unit, and scans the target object 305 from the image.
  • the external area 300 in which the image is obtained is an area located in the forward viewing area 102 of the HMD 100.
  • the HMD may display augmented reality information 320 of the target object 305.
  • the augmented reality information 320 is displayed by the display unit of the HMD 100.
  • the HMD 100 may display the augmented reality information 320 by overlaying it on the target object 305, or may display it in a region corresponding to the position of the target object 305. As such, by displaying the augmented reality information 320 of the target object 305, the HMD 100 may inform the user 10 of the location of the target object 305 that the user wants to find.
  • FIG. 7 illustrates an embodiment in which the HMD 100 displays the scanning completion information 340 of the external areas 300a and 300b.
  • in FIG. 7, the HMD 100 of the present invention faces the second viewing area 102′ after completing scanning for the target object with respect to the first viewing area 102.
  • An outer area 300a is located in the first viewing area 102, and outer areas 300a and 300b are located in the second viewing area 102 ′.
  • when scanning of the external area 300a is completed, the HMD 100 of the present invention generates the scanning completion information 340 of the external area 300a.
  • the scanning completion information 340 is painting information overlaid on the corresponding external area 300a and may include at least one of a preset color, shade, and pattern.
  • next, the HMD 100 determines whether the external areas 300a and 300b located in the second viewing area 102′ are previously scanned areas. Whether the external areas 300a and 300b are previously scanned areas may be determined by whether scanning completion information of the corresponding external area exists.
  • the HMD 100 may identify the external regions 300a and 300b using various sensing information. According to the embodiment of FIG. 7, the HMD 100 may identify the external areas 300a and 300b by using image information obtained through the camera unit, a sensing value of a gyro sensor, or a combination thereof. Since the external area 300a is a scanning-completed area, the HMD 100 overlays and displays the scanning completion information 340 of the external area 300a on the corresponding external area 300a. On the other hand, since the external area 300b is an area in which scanning is not completed, the HMD does not display the scanning completion information corresponding to the external area 300b. The HMD 100 of the present invention may perform scanning of the target object on the external area 300b in which scanning is not completed.
  • FIG. 8 illustrates another embodiment in which the HMD 100 displays the scanning completion information 340 of the external areas 300a and 300b.
  • description of the parts that are the same as or correspond to those of the embodiment of FIG. 7 will be omitted.
  • the HMD can identify the outer region based on the at least one detected marker 342a, 342b.
  • preset markers may be located at predetermined intervals in the outer region.
  • the marker is a predetermined identifier for distinguishing an external area, and may include a barcode, a QR code, an RFID, a color code, an image code, and the like.
  • the HMD can detect markers 342a and 342b in the forward direction.
  • the HMD 100 can identify that the outer regions 300a and 300b corresponding to the markers 342a and 342b are located in the forward direction of the HMD 100, respectively. Since the external area 300a is a scanning-completed area, the HMD 100 overlays and displays the scanning completion information 340 of the external area 300a on the corresponding external area 300a. On the other hand, since the external area 300b is an area in which scanning is not completed, the HMD does not display the scanning completion information corresponding to the external area 300b.
  • in this way, the HMD 100 of the present invention displays the scanning completion information on the areas for which scanning for the target object, that is, the book, has been completed. Accordingly, the user 10 wearing the HMD 100 may recognize the areas where scanning is completed within the current viewing area 102 and look for the target object in the areas where scanning has not yet been completed.
  • FIGS. 10 and 11 are flowcharts illustrating a content providing method according to an embodiment of the present invention.
  • Each step of FIGS. 10 and 11 described below may be performed by the HMD 100 of the present invention. That is, the processor 110 of the HMD 100 illustrated in FIG. 1 may control each step of FIGS. 10 and 11. Meanwhile, when the HMD 100 is controlled by the external digital device 200 as in the embodiment of FIG. 3, the HMD 100 may perform each step of FIGS. 10 and 11 based on a control command of the corresponding external digital device 200.
  • the HMD of the present invention acquires a source image (S1010).
  • the source image is an image to be displayed on the display unit of the HMD, and includes various kinds of images such as display images, icons, text, and symbols of various contents.
  • the source image includes augmented reality information corresponding to a specific object.
  • the HMD of the present invention separates the source image into a left eye image and a right eye image (S1020).
  • the left eye image is an image for the left eye of the user wearing the HMD
  • the right eye image is an image for the right eye of the user.
  • the left eye image and the right eye image overlap each other to form a source image.
  • the HMD can separate a source image into a left eye image and a right eye image by various methods.
  • the HMD can separate a source image into a left eye image and a right eye image by character units.
  • the HMD may separate the source image into a left eye image and a right eye image based on a predetermined pattern. Specific embodiments thereof will be described in detail with reference to FIGS. 12 to 13.
  • the HMD of the present invention displays the separated left eye image and right eye image on the first display unit and the second display unit, respectively (S1030).
  • the first display unit displays an image for the left eye of the user wearing the HMD, that is, the left eye image
  • the second display unit displays an image for the right eye of the user, that is, the right eye image.
  • meanwhile, the HMD may adjust the display intervals of the left eye image and the right eye image so that they alternate with each other. A detailed embodiment thereof will be described with reference to FIG. 14.
  • the HMD of the present invention may detect the gaze of a user wearing the HMD (S1110).
  • the HMD can detect the pupil position of the user wearing the HMD using the provided eye tracking camera unit, and can track the gaze of the user by using the pupil position of the detected user.
  • the HMD of the present invention adjusts the display position of at least one of the left eye image and the right eye image based on the detected gaze of the user (S1120). At this time, the HMD adjusts the display position so that the left eye image and the right eye image overlap exactly and are shown to the user.
  • More specifically, the HMD calculates a display position shift factor of at least one of the left eye image and the right eye image based on the detected gaze of the user, and may display at least one of the left eye image and the right eye image shifted from its basic display position using the calculated shift factor. Accordingly, the HMD can display the separated left eye image and right eye image so that they exactly overlap along the user's line of sight. Specific embodiments thereof will be described in detail with reference to FIGS. 15 and 16.
  • FIGS. 12 to 16 illustrate a specific embodiment in which the HMD 100 separates and displays a left eye image and a right eye image according to an embodiment of the present invention.
  • the HMD 100 may divide and display the source image 400 in units of characters.
  • the HMD 100 generates a left eye image “7 8” and a right eye image “5 1” by separating the source image “7581” by character units.
  • the HMD 100 displays the separated left eye image (“7 8”, 420a) and right eye image (“5 1”, 420b) on the first display unit 120a and the second display unit 120b, respectively.
  • Accordingly, the left eye image (“7 8”, 420a) and the right eye image (“5 1”, 420b) overlap each other, so that an image that is the same as or similar to the source image (“7581”, 400) is shown to the user.
  • the HMD 100 may separate the source image 400 into a left eye image 430a and a right eye image 430b based on a preset pattern.
  • the source image “7581” may be split according to a specific pattern into a left eye image 430a and a right eye image 430b.
  • the HMD 100 displays the separated left eye image 430a and the right eye image 430b on the first display unit 120a and the second display unit 120b, respectively.
  • Accordingly, the left eye image 430a and the right eye image 430b overlap each other, so that an image that is the same as or similar to the source image (“7581”, 400) is shown to the user.
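  • Both separation strategies of FIGS. 12 and 13 amount to masking complementary parts of the source image: interleaving character by character for text such as “7581”, or applying a pixel mask for a preset pattern. The Python sketch below illustrates the idea; the checkerboard mask is just one assumed example of a “preset pattern”.
    import numpy as np

    def split_by_characters(text):
        """Character-unit separation: '7581' -> left '7 8 ', right ' 5 1' (blanks keep spacing)."""
        left  = "".join(c if i % 2 == 0 else " " for i, c in enumerate(text))
        right = "".join(c if i % 2 == 1 else " " for i, c in enumerate(text))
        return left, right

    def split_by_pattern(image, mask=None):
        """Pattern-based separation of a grayscale image (H x W numpy array):
        pixels where the mask is True go to the left eye image, the rest to the right."""
        if mask is None:
            ys, xs = np.indices(image.shape)
            mask = (ys + xs) % 2 == 0         # assumed preset pattern: a checkerboard
        left = np.where(mask, image, 0)
        right = np.where(mask, 0, image)
        return left, right                    # left + right reconstructs the source image

    print(split_by_characters("7581"))        # ('7 8 ', ' 5 1')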
  • as shown in FIG. 14, the HMD 100 may adjust the display intervals of the left eye image 440a displayed on the first display unit 120a and the right eye image 440b displayed on the second display unit 120b so that they alternate with each other.
  • as shown in FIG. 14A, the HMD 100 displays the left eye image 440a on the first display unit 120a for a predetermined time T1, and during this time may not display the right eye image 440b on the second display unit 120b.
  • next, the HMD 100 displays the right eye image 440b on the second display unit 120b for a predetermined time T2, and during this time may not display the left eye image 440a on the first display unit 120a.
  • that is, the HMD 100 according to the embodiment of the present invention may alternately display the left eye image 440a and the right eye image 440b. Thus, even though the left eye image 440a and the right eye image 440b are not displayed at the same time, the afterimage of each image allows the user wearing the HMD 100 to recognize an image that is the same as or similar to the source image 400.
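  • The alternating display of FIG. 14 is essentially time multiplexing of the two display units. The toy loop below illustrates that timing; the T1/T2 values, frame count and display methods are placeholders assumed for the example.
    import itertools, time

    def alternate_display(hmd, left_image, right_image, t1=0.02, t2=0.02, frames=10):
        """Alternately show the left eye image (for T1) and the right eye image (for T2)."""
        for phase in itertools.islice(itertools.cycle(("left", "right")), frames):
            if phase == "left":
                hmd.display_left.show(left_image)     # first display unit 120a
                hmd.display_right.clear()             # right eye image not shown during T1
                time.sleep(t1)
            else:
                hmd.display_right.show(right_image)   # second display unit 120b
                hmd.display_left.clear()              # left eye image not shown during T2
                time.sleep(t2)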
  • as described above, when the HMD 100 separates the source image 400 into the left eye image and the right eye image and displays them, the corresponding source image 400 is visible only to the user wearing the HMD 100. That is, since the left eye image and the right eye image do not appear overlapped to a person who is not wearing the HMD 100, such a person cannot recognize the source image. Therefore, the HMD 100 according to the embodiment of the present invention enables secure display of the source image 400.
  • FIGS. 15 and 16 illustrate a method for displaying a left eye image and a right eye image according to another embodiment of the present invention.
  • the HMD 100 detects the gaze 105 of the user wearing the HMD 100, and may display the left eye image 450a and the right eye image 450b based on the detected gaze 105.
  • the gaze 105 of a user wearing the HMD 100 may differ from user to user, and may change in real time even for the same user. Accordingly, as illustrated in FIG. 15, the overlap image 480 of the left eye image 450a and the right eye image 450b may appear distorted to the user wearing the HMD 100, unlike the source image. In other words, if the left eye image 450a and the right eye image 450b do not exactly overlap along the user's line of sight 105, the user may not correctly recognize the source image.
  • Accordingly, the HMD 100 may change the display positions of the left eye image 450a and the right eye image 450b based on the user's gaze 105. That is, the HMD 100 detects the user's gaze 105 using the eye tracking camera unit, and calculates display position shift factors 25a and 25b so that the left eye image 450a and the right eye image 450b exactly overlap along the detected gaze 105 of the user.
  • the display position shift factors 25a and 25b may be calculated for both the left eye image 450a and the right eye image 450b, or may be calculated for either one.
  • the HMD moves and displays at least one of the left eye image 450a and the right eye image 450b from its basic display position by using the calculated position shift factors 25a and 25b.
  • Accordingly, the left eye image 450a and the right eye image 450b exactly overlap to form an overlap image 480′ that is the same as or similar to the source image.
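  • The correction of FIGS. 15 and 16 can be modeled as computing a per-eye shift factor from the tracked pupil positions and translating each image by that amount before display. The sketch below assumes a simple linear relation between pupil offset and on-screen shift; the reference points and gain are arbitrary placeholders, not disclosed parameters.
    def shift_factors(left_pupil, right_pupil, ref_left, ref_right, gain=1.0):
        """Compute display position shift factors (25a, 25b) from eye-tracking data.
        Each pupil position and reference point is an (x, y) pair in display coordinates."""
        shift_left  = (gain * (ref_left[0]  - left_pupil[0]),  gain * (ref_left[1]  - left_pupil[1]))
        shift_right = (gain * (ref_right[0] - right_pupil[0]), gain * (ref_right[1] - right_pupil[1]))
        return shift_left, shift_right

    def apply_shift(position, shift):
        """Move an image from its basic display position by the computed shift factor."""
        return (position[0] + shift[0], position[1] + shift[1])

    # Example: the left pupil sits 3 px left and 1 px above its reference point, so the
    # left eye image is nudged by (+3, +1) while the right eye image stays put.
    sl, sr = shift_factors((97, 49), (150, 50), (100, 50), (150, 50))
    print(apply_shift((0, 0), sl), apply_shift((0, 0), sr))   # (3.0, 1.0) (0.0, 0.0)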
  • FIG. 17 is a diagram illustrating an embodiment in which the HMD 100 of the present invention acquires a source image.
  • the HMD 100 may acquire a source image in various ways.
  • the HMD 100 may obtain augmented reality information as the source image.
  • the HMD 100 may detect a specific marker 415 and acquire augmented reality information corresponding to the detected marker 415 as a source image.
  • the marker 415 is a predetermined identifier for providing augmented reality information, and includes various kinds of markers such as a barcode, a QR code, an RFID, a color code, and an image code.
  • the user 10 wearing the HMD 100 may receive the augmented reality information “7581” corresponding to the marker 415 detected by the HMD 100 as the overlap image 480.
  • the HMD 100 may display the left eye image and the right eye image of the augmented reality information “7581” based on the gaze 105 of the user 10.
  • meanwhile, the HMD 100 may acquire, as the source image, augmented reality information provided when the HMD 100 is tagged to a preset device.
  • for example, when tagged to a preset device such as a digital door lock or an automated teller machine (ATM), the HMD 100 may acquire augmented reality information such as a password as the source image.
  • the HMD 100 may separate the acquired augmented reality information into a left eye image and a right eye image, and display them on the first display unit and the second display unit, respectively.
  • FIG. 18 is a flowchart illustrating a method of providing video content according to an embodiment of the present invention.
  • Each step of FIG. 18 described below may be performed by the HMD 100 of the present invention. That is, the processor 110 of the HMD 100 shown in FIG. 1 may control each step of FIG. 18. Meanwhile, when the HMD 100 is controlled by the external digital device 200 as in the embodiment of FIG. 3, the HMD 100 may perform each step of FIG. 18 based on a control command of the corresponding external digital device 200.
  • the HMD of the present invention acquires video content (S1910).
  • the HMD can obtain the video content from a connected external digital device.
  • the HMD can obtain the video content from a server via a communication unit.
  • the video content may include data stored in a storage unit of the HMD.
  • the HMD may detect at least one marker and acquire, as the video content, augmented reality information corresponding to the marker.
  • the marker may be a predetermined identifier for providing augmented reality information, and may include a barcode, a QR code, an RFID, a color code, an image code, and the like.
  • the HMD may also acquire, as the video content, augmented reality information provided when the HMD is tagged to a preset device.
  • the HMD of the present invention acquires orientation information of the HMD using the sensor unit (S1920).
  • the orientation information may include angle information of the HMD with respect to at least one axis among x, y and z axes. That is, according to an embodiment of the present invention, the orientation information may include azimuth information of the HMD with respect to at least one of the x-axis, the y-axis, and the z-axis. In addition, according to another embodiment of the present invention, the orientation information may include rotation angle information of the HMD about at least one axis of the x-axis, y-axis and z-axis.
  • the rotation angle information of the HMD about the arbitrary x-axis, y-axis, and z-axis may be defined as roll, pitch, and yaw information, respectively.
  • the present invention is not limited thereto, and each rotation angle information about the x-axis, y-axis, and z-axis may be appropriately set to any one of the roll, pitch, and yaw information.
  • the HMD of the present invention displays the video content on the display unit based on the obtained orientation information (S1930).
  • the HMD can adjust the display direction of the video content based on the obtained orientation information. For example, when the orientation information indicates that the HMD is inclined at an angle of ⁇ to the left in the reference direction, the HMD may display the video content by tilting it to the right at an angle of ⁇ . That is, when displaying the video content, the HMD can compensate for and display the tilted angle of the HMD. Thus, even if the posture of the user wearing the HMD changes, the video content displayed through the HMD can be provided to the user as if displayed at a fixed position.
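  • The compensation described for S1930 is a counter-rotation: if the sensor unit reports that the HMD has rolled by an angle θ, the content is drawn rotated by -θ so that it appears fixed in space. A minimal 2D illustration (roll about the viewing axis only) follows; handling pitch and yaw would apply the same idea with full 3D rotations.
    import math

    def compensate_roll(points, roll_deg):
        """Rotate display-plane points by -roll so the content appears fixed despite HMD roll.
        `roll_deg` is the roll angle reported by the sensor unit (counterclockwise positive)."""
        a = math.radians(-roll_deg)               # counter-rotate by the measured tilt
        cos_a, sin_a = math.cos(a), math.sin(a)
        return [(x * cos_a - y * sin_a, x * sin_a + y * cos_a) for x, y in points]

    # If the HMD tilts 10 degrees counterclockwise, the content is drawn tilted 10 degrees
    # clockwise, so to the user it stays level.
    print(compensate_roll([(1.0, 0.0)], 10.0))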
  • the HMD of the present invention obtains relative distance information between at least one virtual object included in the video content and the HMD (S1940).
  • the HMD can obtain location information of at least one virtual object in the video content being displayed.
  • the location information of the virtual object may be included as additional information in the obtained video content, or may be obtained by the HMD through image processing.
  • for example, the video content may include various simulation games, and various visual objects appearing in such a simulation game may be included as virtual objects.
  • the HMD obtains distance information between the virtual object and the HMD in real time by using the position information of the virtual object and the orientation information of the HMD.
  • the HMD of the present invention determines whether the virtual object and the HMD collide with each other (S1950).
  • the HMD of the present invention determines whether a location collision occurs between the virtual object and the HMD based on the distance information between the virtual object and the HMD obtained in step S1940.
  • when a collision occurs between the virtual object and the HMD, the HMD of the present invention provides feedback (S1960). The HMD may provide the feedback in various forms.
  • the HMD can provide vibration feedback through the HMD or a device connected to the HMD.
  • the HMD can provide various kinds of visual feedback to the display unit.
  • the HMD may include a voice output unit, for example, an earphone, and may provide various types of voice feedback to the voice output unit.
  • the HMD may provide feedback including at least one of the vibration feedback, visual feedback, and voice feedback. Accordingly, the user wearing the HMD can recognize that a collision between the virtual object and the HMD has occurred.
  • the HMD may detect a real object located in front of the HMD.
  • the real object may include various objects such as a part of the user's body (for example, a hand or a foot), a remote controller, and the like.
  • the HMD of the present invention can detect a real object by using a detection module, and obtain position information of the real object.
  • the HMD further obtains relative distance information between the real object and the virtual object, and determines whether the real object and the virtual object collide with each other. If the real object and the virtual object collide with each other, the HMD can provide feedback as described above in operation S1960.
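  • Steps S1940 to S1960 reduce to distance tests between the HMD (or a detected real object) and each virtual object, with feedback triggered when the distance falls below a collision threshold. The following sketch is a plain-Python illustration under those assumptions; the 0.2 m threshold and the feedback hook are placeholders, not values taken from this document.
    import math

    def check_collisions(hmd_pos, virtual_objects, real_objects=(), threshold=0.2):
        """Return (collider, virtual object) pairs whose distance is below the threshold.
        `virtual_objects` and `real_objects` map a name to an (x, y, z) position."""
        real_objects = dict(real_objects)
        events = []
        for v_name, v_pos in virtual_objects.items():
            # S1940/S1950: relative distance between the virtual object and the HMD.
            if math.dist(hmd_pos, v_pos) < threshold:
                events.append(("HMD", v_name))
            # Collision between a detected real object (hand, remote controller, ...) and
            # the virtual object, as in the embodiment of FIG. 22.
            for r_name, r_pos in real_objects.items():
                if math.dist(r_pos, v_pos) < threshold:
                    events.append((r_name, v_name))
        return events

    def provide_feedback(events, feedback=print):
        # S1960: vibration, visual or voice feedback would be dispatched here.
        for collider, v_name in events:
            feedback(f"collision: {collider} <-> {v_name}")

    provide_feedback(check_collisions((0, 0, 0),
                                      {"500c": (0.1, 0.0, 0.1)},
                                      {"hand": (1.0, 1.0, 1.0)}))    # prints: collision: HMD <-> 500c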
  • FIGS. 19 to 22 illustrate a specific method of providing video content by the HMD 100 according to an embodiment of the present invention.
  • FIG. 19 shows how the HMD 100 of the present invention displays video content on a display unit.
  • the HMD 100 may display at least some areas of the video content.
  • hereinafter, this area is referred to as the display area 40.
  • the display area 40 of the video content includes at least one virtual object 500a, 500b, or 500c.
  • Reference numeral 32 designates an orientation of the HMD 100.
  • the HMD 100 obtains the orientation 32 information of the HMD 100 using the sensor unit.
  • the orientation 32 information includes angle information of the HMD 100 with respect to at least one of the x-axis, the y-axis, and the z-axis.
  • the x-axis, the y-axis, and the z-axis shown in FIGS. 19 and 20 show examples of three axes based on arbitrary directions, and the present invention is not limited thereto.
  • the HMD 100 adjusts the display orientation 20 of the video content based on the obtained orientation 32 information.
  • FIG. 20 illustrates a state in which the HMD 100 of FIG. 19 is inclined at an angle ⁇ in a counterclockwise direction with respect to the x-axis.
  • the HMD 100 of the present invention obtains new orientation 32 'information of the HMD 100 through the sensor unit.
  • the HMD 100 changes the display area of video content from 40 to 40 'based on the new orientation 32' information.
  • the display area 40′ is an area in a direction inclined by an angle θ in the counterclockwise direction with respect to the x-axis from the display area 40 of FIG. 19. That is, the HMD 100 may display the new display area 40′ in response to the orientation change of the HMD 100.
  • the display area 40 ' includes virtual objects 500a and 500b of video content. Therefore, the HMD 100 may display the virtual objects 500a and 500b included in the display area 40 '. However, the HMD 100 may not display the virtual object 500c not included in the display area 40 '.
  • the HMD 100 may change the display orientation of the video content from 20 to 20 'based on the acquired new orientation 32' information.
  • in other words, corresponding to the orientation 32′ of the HMD 100 inclined at an angle θ in the counterclockwise direction with respect to the x-axis, the HMD 100 may tilt the display orientation of the video content at an angle θ in the clockwise direction with respect to the x-axis.
  • the virtual objects 500a and 500b included in the video content may also be tilted at an angle ⁇ in the clockwise direction with respect to the x-axis and displayed on the HMD 100.
  • in this way, the HMD 100 of the present invention may compensate for the inclination angle of the HMD 100 when displaying the video content. Therefore, even if the posture of the user wearing the HMD 100 changes, the video content displayed through the HMD 100 and the virtual objects 500a and 500b included therein may be provided to the user as if they were displayed at a fixed position.
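  • FIGS. 19 and 20 can also be read as a visibility test: after the orientation of the HMD changes, the display area is re-derived and only virtual objects that fall inside it (500a and 500b, but not 500c, in the example) are drawn, counter-rotated as described above. A rough one-axis sketch of that selection, with an assumed 60-degree angular extent for the display area:
    def visible_objects(object_angles, hmd_angle_deg, fov_deg=60.0):
        """Pick the virtual objects whose angular position falls inside the current display area.
        `object_angles` maps an object name to its angle in content space; the display area
        is centered on the HMD's current angle and spans `fov_deg`."""
        half = fov_deg / 2.0
        return [name for name, ang in object_angles.items()
                if abs(ang - hmd_angle_deg) <= half]

    # After the HMD turns by 20 degrees, objects at 0 and 30 degrees remain inside the new
    # display area 40', while an object at 80 degrees (like 500c) falls outside it.
    objects = {"500a": 0.0, "500b": 30.0, "500c": 80.0}
    print(visible_objects(objects, hmd_angle_deg=20.0))   # ['500a', '500b']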
  • the HMD 100 of the present invention may acquire location information of the virtual objects 500a, 500b, and 500c in the display area 40.
  • the HMD may acquire distance information between the virtual objects 500a, 500b, and 500c and the HMD in real time using the location information of the virtual objects 500a, 500b and 500c and the orientation 32 information of the HMD.
  • in the embodiment of FIG. 21, a positional collision occurs between the virtual object 500c and the HMD 100.
  • the HMD 100 may provide feedback in response to such a positional collision.
  • for example, the HMD 100 may provide visual feedback 600 as shown in FIG. 21.
  • the HMD 100 may provide vibration feedback, voice feedback, and the like. Accordingly, the user wearing the HMD 100 can recognize that a collision has occurred between the virtual object 500c and the HMD 100.
  • FIG. 22 illustrates a method of providing video content when a virtual object collides with a real object 14 located in front of the HMD 100 according to another embodiment of the present invention.
  • the HMD 100 may detect the real object 14 located in front of the HMD 100 using the detecting module.
  • the real object 14 may include various objects such as a part of the body of the user 10 (for example, a hand or a foot), a remote controller, and other tools.
  • the user 10 wearing the HMD 100 may interact with the virtual objects 500a, 500b, and 500c of the video content using the real object 14.
  • the HMD 100 obtains the position information of the real object 14 using the detecting module, and calculates relative distance information between the real object 14 and the virtual objects 500a, 500b, and 500c.
  • the HMD 100 determines whether the real object 14 and the virtual objects 500a, 500b, and 500c collide with each other. In the embodiment of FIG. 22, a positional conflict occurs between the virtual object 500b and the real object 14. In response to the collision, the HMD 100 may provide visual feedback 600, vibration feedback, voice feedback, and the like, as shown. Accordingly, a user wearing the HMD 100 can recognize that a collision has occurred between the virtual object 500b and the real object 14.
  • meanwhile, according to an embodiment of the present invention, each of the above-described operations may also be performed under the control of the external digital device 200 connected to the HMD 100.
  • the HMD described in the present invention can be changed and replaced with various devices according to the purpose of the present invention.
  • the HMD of the present invention includes various devices capable of providing a display while being worn by a user, such as an eye mounted display (EMD), eyeglasses, an eyepiece, eye wear, and a head worn display (HWD), and the scope of the present invention is not limited by the term used herein.
  • the present invention may be applied, in whole or in part, to various digital devices including HMDs.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention relates to a head mounted display (HMD) for providing content separated into a left eye image and a right eye image, and to a method for providing content using the same. The HMD according to the invention comprises a processor for controlling the operation of the HMD, and a display unit for outputting an image according to a command of the processor, wherein the display unit comprises a first display unit and a second display unit, and the processor separates an obtained source image into a left eye image and a right eye image displayed respectively on the first and the second display unit, the left eye image and the right eye image each constituting at least a part of the source image.
PCT/KR2013/004988 2013-04-30 2013-06-05 Head mounted display and content providing method using the same WO2014178477A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
KR10-2013-0048910 2013-04-30
KR10-2013-0048916 2013-04-30
KR20130048910 2013-04-30
KR20130048913 2013-04-30
KR20130048916 2013-04-30
KR10-2013-0048913 2013-04-30

Publications (1)

Publication Number Publication Date
WO2014178477A1 (fr) 2014-11-06

Family

ID=51843590

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2013/004988 WO2014178477A1 (fr) 2013-04-30 2013-06-05 Head mounted display and content providing method using the same

Country Status (1)

Country Link
WO (1) WO2014178477A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5982343A (en) * 1903-11-29 1999-11-09 Olympus Optical Co., Ltd. Visual display apparatus
JPH11275605A (ja) * 1998-03-19 1999-10-08 Sony Corp ヘッドマウントディスプレイの立体視方法及びヘッドマウントディスプレイ装置
US20060250322A1 (en) * 2005-05-09 2006-11-09 Optics 1, Inc. Dynamic vergence and focus control for head-mounted displays
KR20120129134A (ko) * 2011-05-19 2012-11-28 삼성전자주식회사 헤드 마운트 디스플레이 장치의 이미지 표시 제어 장치 및 방법
US20130002813A1 (en) * 2011-06-29 2013-01-03 Vaught Benjamin I Viewing windows for video streams

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016190489A1 (fr) * 2015-05-27 2016-12-01 엘지전자 주식회사 Visiocasque et procédé de commande correspondant
US10585283B2 (en) 2015-05-27 2020-03-10 Lg Electronics Inc. Head mounted display and control method therefor
CN111556305A (zh) * 2020-05-20 2020-08-18 京东方科技集团股份有限公司 图像处理方法、vr设备、终端、显示系统和计算机可读存储介质
US11838494B2 (en) 2020-05-20 2023-12-05 Beijing Boe Optoelectronics Technology Co., Ltd. Image processing method, VR device, terminal, display system, and non-transitory computer-readable storage medium

Similar Documents

Publication Publication Date Title
US11310483B2 (en) Display apparatus and method for controlling display apparatus
US11087728B1 (en) Computer vision and mapping for audio applications
WO2015122565A1 (fr) Système d'affichage permettant d'afficher une imagé de réalité augmentée et son procédé de commande
WO2015046686A1 (fr) Dispositif d'affichage pouvant être porté et procédé permettant de commander une couche dans celui
WO2015046667A1 (fr) Montre intelligente et procédé de commande associé
WO2020185029A1 (fr) Dispositif électronique et procédé d'affichage des informations de partage sur la base de la réalité augmentée
WO2015163536A1 (fr) Dispositif d'affichage et son procédé de commande
EP2912514A1 (fr) Dispositif d'affichage monté sur tête et procédé de sortie de signal audio l'utilisant
EP3204837A1 (fr) Système de connexion
WO2015046676A1 (fr) Visiocasque et procédé de commande de ce dernier
WO2015122566A1 (fr) Dispositif d'affichage monté sur tête pour afficher un guide de capture d'image en réalité augmentée, et son procédé de commande
WO2020017890A1 (fr) Système et procédé d'association 3d d'objets détectés
WO2020130667A1 (fr) Procédé et dispositif électronique pour commander un dispositif de réalité augmentée
US20220084303A1 (en) Augmented reality eyewear with 3d costumes
WO2016010200A1 (fr) Dispositif d'affichage à porter sur soi et son procédé de commande
WO2015088101A1 (fr) Dispositif d'affichage et son procédé de commande
WO2021230568A1 (fr) Dispositif électronique permettant de fournir un service de réalité augmentée et son procédé de fonctionnement
KR20140129936A (ko) 헤드 마운트 디스플레이 및 이를 이용한 콘텐츠 제공 방법
WO2014178477A1 (fr) Visiocasque et procédé de fourniture de contenus en l'utilisant
US20210200284A1 (en) Image display device, power supply system, and power supply method for image display device
WO2020145653A1 (fr) Dispositif électronique et procédé pour recommander un emplacement de capture d'images
US11900058B2 (en) Ring motion capture and message composition system
WO2021261619A1 (fr) Dispositif électronique de détection d'un plan dans une image et procédé de fonctionnement correspondant
WO2020159115A1 (fr) Dispositif électronique à plusieurs lentilles, et son procédé de commande
WO2014035118A1 (fr) Visiocasque et procédé de commande de dispositif numérique l'utilisant

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13883503

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13883503

Country of ref document: EP

Kind code of ref document: A1