WO2013012146A1 - Content playing method and apparatus - Google Patents

Content playing method and apparatus

Info

Publication number
WO2013012146A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
user
location
virtual viewpoint
space
Application number
PCT/KR2012/000375
Other languages
French (fr)
Inventor
Sang Keun Jung
Hyun Cheol Park
Moon Sik Jeong
Kyung Sun Cho
Original Assignee
Samsung Electronics Co., Ltd.
Application filed by Samsung Electronics Co., Ltd.
Priority to EP12814483.9A (publication EP2735164A4)
Priority to CN201280035942.2A (publication CN103703772A)
Publication of WO2013012146A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N 13/279 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals, the virtual viewpoint locations being selected by the viewers or determined by tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/366 Image reproducers using viewer tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 3/00 Systems employing more than two channels, e.g. quadraphonic

Definitions

  • the present invention relates generally to a content playing method and apparatus, and more particularly, to a method for playing content corresponding to a location of a user and an apparatus thereof.
  • In recent years, the demand for Three-Dimensional (3D) image technology has increased, and with the more common use of digital broadcasting, the use of stereoscopic images in 3D TVs and 3D information terminals is being actively researched.
  • In general, the stereoscopic image implemented through 3D technology is formed by the principle of stereoscopic vision experienced through two eyes. Because the two eyes are spaced apart from each other by approximately 65 mm, binocular parallax acts as a main factor of depth perception.
  • When the left and right eyes view different images, the two images are transferred to the brain through the retinas, and the brain combines them such that the user experiences the depth of the stereoscopic image.
  • However, while a 3D TV is capable of showing a 3D image having a fixed viewpoint regardless of the location of a user, it cannot provide a realistic image in which the user feels present in the displayed scene.
  • the present invention has been made to solve the above mentioned problems occurring in the prior art, and the present invention provides a method for playing realistic content stimulating at least one of the senses of a user, corresponding to a location of the user, and an apparatus thereof.
  • a content playing method including determining a first location of a user; mapping a content space displayed on a display unit to correspond with an actual space in which the user is present based on the first determined location; determining a virtual viewpoint in the content space corresponding to a second location of the user; and playing content corresponding to the determined virtual viewpoint.
  • a content playing apparatus including a content collecting unit for collecting content stimulating senses of a user; a content processor for performing a processing operation such that content input from the content collecting unit is played; a content playing unit for playing content input from the content collecting unit; a sensor for collecting information associated with a location of the user such that the content is played corresponding to the location of the user; and a controller for determining a virtual viewpoint in a virtual content space corresponding to the location of the user based on received information from the sensor and controlling such that content corresponding to the determined virtual viewpoint is played.
  • the present invention provides a method for playing realistic content stimulating at least one of the senses of a user, corresponding to a location of the user, and an apparatus thereof.
  • FIG. 1 is a block diagram illustrating a configuration of a content playing apparatus according to an embodiment of the present invention
  • FIG. 2 is a diagram illustrating a three-dimensional coordinate system according to an embodiment of the present invention.
  • FIGS. 3A, 3B, and 3C are diagrams illustrating a perspective view, a plan view, and a side view, respectively, of an actual space according to an embodiment of the present invention.
  • FIGS. 4A and 4B are diagrams illustrating a perspective view and a plan view, respectively, of a virtual content space according to an embodiment of the present invention
  • FIG. 5 is a block diagram illustrating a realistic type playing unit according to an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating a space mapping method according to an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating a space mapping method according to an embodiment of the present invention.
  • FIG. 8 is a diagram illustrating a virtual viewpoint determining method according to an embodiment of the present invention.
  • FIG. 9 is a diagram illustrating an angle control method of a virtual camera according to an embodiment of the present invention.
  • FIG. 10 is a diagram illustrating a stereoscopic image control method according to an embodiment of the present invention.
  • FIG. 11 is a diagram illustrating a stereoscopic sound control method according to an embodiment of the present invention.
  • FIG. 12 is a block diagram illustrating a network of a home network system to which a content playing apparatus is applied according to an embodiment of the present invention
  • FIG. 13 is a flowchart illustrating a content playing method according to an embodiment of the present invention.
  • FIG. 14 is a flowchart illustrating a content playing method according to an embodiment of the present invention.
  • The term “content” refers to output that stimulates the senses of a user, such as a sense of sight, a sense of hearing, and a sense of touch.
  • For example, the content may be an image, light, a voice, or wind.
  • Further, realistic type playback may refer to playing content corresponding to a location of the user. That is, the user may experience the same content differently at different locations. For example, when the content is a car displayed on a screen, a front surface or a side surface of the car is viewed by the user depending on the user location.
  • The content playing method and apparatus are applicable to an electronic device having a function of playing content stimulating the senses of the user.
  • the content playing method and apparatus are applicable to a notebook computer, a desktop PC, a tablet PC, a smart phone, a High Definition TeleVision (HDTV), a smart TV, a 3-Dimensional (3D) TV, an Internet Protocol Television (IPTV), a stereoscopic sound system, a theater system, a home theater, a home network system and the like.
  • the content playing method and apparatus provide a function for tracking location variation of the user, and a function for realistically playing content corresponding to the tracked location of the user.
  • The content playing method and apparatus may provide a function that receives content, such as images, from a content provider through a Local Area Network (LAN), a wireless LAN, or a 3rd Generation (3G) or 4th Generation (4G) wireless communication network, stores the received images in a database, and plays them in real time.
  • The images may include stereoscopic images.
  • The stereoscopic image may be a 3D movie, a 3D animation, or 3D computer graphics. Further, the stereoscopic image may be multimedia combined with stereoscopic sound.
  • FIG. 1 is a block diagram illustrating a configuration of a content playing apparatus according to an embodiment of the present invention. It is assumed that the content playing apparatus of FIG. 1 is a 3D TV which enables content to appear to exist in a space between a screen and the user.
  • a content playing apparatus 100 according to an embodiment of the present invention includes an input unit 110, a remote controller 120, a remote controller receiver 125, a sensor 130, a content collecting unit 140, a content processor 150, a sound output unit 161, an image display unit 162, a memory 170, an interface unit 180, and a controller 190.
  • the input unit 110 may include a plurality of input keys and function keys for receiving input of numeral or character information, and for setting various functions.
  • The function keys may include arrow keys, side keys, and hot keys set to perform predetermined functions.
  • the input unit 110 creates and transfers a key event associated with user setting and function control of the content playing apparatus 100.
  • the key event may include a power on/off event, a volume control event, a screen on/off event, etc.
  • The controller 190 controls the foregoing elements in response to the key event.
  • the remote controller 120 creates various key events for operating the content playing apparatus 100, and converts the created key event into a wireless signal, and transmits the wireless signal to the remote controller receiver 125.
  • the remote controller 120 of the present invention may create a start event for requesting realistic type playback and a termination event for terminating the realistic type playback.
  • the realistic type playback may be defined to play content corresponding to a location of the user.
  • the remote controller receiver 125 converts a received wireless signal into an original key event, and transfers the original key event to the controller 190.
  • The sensor 130 collects information associated with the location of the user such that the location of the user may be tracked, and transfers the collected information to the controller 190.
  • the sensor 130 may be implemented by an image sensor or an optical sensor for sensing light of a predetermined wavelength such as infrared ray.
  • the sensor 130 converts a sensed physical amount into an electric signal, and an Analog to Digital Converter (ADC) converts the electric signal into data, and transfers the data to the controller 190.
  • the content collecting unit 140 performs a function for collecting content stimulating senses of the user. Specifically, the content collecting unit 140 performs a function for collecting images and sounds from a network or a peripheral device. That is, the content collecting unit 140 may include a broadcasting receiver 141 and an Internet communication unit 142. Specifically, the broadcasting receiver 141 selects one from a plurality of broadcasting channels, and demodulates a broadcasting signal of the selected broadcasting channel to original broadcasting content.
  • the Internet communication unit 142 includes a wired modem or a wireless modem for receiving various information for home shopping, home banking, and on-line gaming, and MP3 use and additional information with respect thereto.
  • the Internet communication unit 142 may include a mobile communication module (e.g., 3G mobile communication module, 3.5G mobile communication module, and 4G mobile communication module) and a near distance communication module (e.g., a Wi-Fi module).
  • The content processor 150 performs a processing function to play content from the content collecting unit 140. Specifically, the content processor 150 classifies input content into a stereoscopic image and a stereoscopic sound.
  • The content processor 150 may include a sound processor for decoding the classified stereoscopic sound and outputting it to the sound output unit 161, and an image processor for decoding the classified stereoscopic image into a left image and a right image and outputting the left image and the right image to the image display unit 162. Further, the content processor 150 may compress and transfer input content to the controller 190 under the control of the controller 190. Accordingly, the controller 190 transfers compressed content to the memory 170.
  • The sound processor may control a direction or a distance of a stereoscopic sound according to the location of the user.
  • For example, the sound processor may change the type of sound output from the sound output unit 161 according to the location of the user, or change the volume according to the type of the sound.
  • Likewise, the image processor may control brightness, stereoscopic sensation, and depth according to the location of the user.
  • the content playing unit 160 performs a function for playing content processed by the content processor 150.
  • the content playing unit 160 may include a sound output unit 161 and an image display unit 162.
  • the sound output unit 161 outputs a decoded stereoscopic sound, and includes a plurality of speakers, for example, 5.1 channel speakers.
  • The image display unit 162 displays a stereoscopic image.
  • Specifically, the image display unit 162 displays a stereoscopic image with depth, as if the stereoscopic image actually exists in a three-dimensional space between the screen and the user, through a display unit for displaying the stereoscopic image and a 3D implementing unit for allowing the user to experience depth with respect to the displayed stereoscopic image.
  • the display unit may be implemented as a Liquid Crystal Display (LCD), Organic Light Emitting Diodes (OLED), or Active Matrix Organic Light Emitting Diodes (AMOLED).
  • The 3D implementing unit is a structural element stacked on the display unit that causes different images to be recognized by the left and right eyes.
  • the 3D implementing scheme is divided into a glass scheme and a glass-free scheme.
  • The glass scheme includes a color filter scheme, a polarization filter scheme, and a shutter glass scheme.
  • The glass-free scheme includes a lenticular lens scheme and a parallax barrier scheme. Because the 3D implementing schemes are known in the art, a detailed description thereof is omitted.
  • the memory 170 stores programs and data necessary for an operation of the content playing apparatus 100.
  • the memory 170 may be configured by a volatile storage medium, a nonvolatile storage medium, or a combination thereof.
  • the volatile storage medium includes a semiconductor memory such as RAM, DRAM, or SRAM.
  • the non-volatile storage medium may include a hard disk.
  • Further, the memory 170 may be divided into a data area and a program area. Specifically, the data area of the memory 170 may store data created by the controller 190 according to use of the content playing apparatus 100. The data area may also store content compressed in a predetermined format provided from the controller 190.
  • The program area of the memory 170 may store an operating system (OS) for booting the content playing apparatus 100 and operating respective elements, and applications for supporting various user functions, for example, a web browser for accessing an Internet server, an MP3 player function for playing sound sources, an image output function for playing photographs, and a moving image playback function.
  • the program area of the present invention may store a realistic type playback program.
  • The realistic type playing program may include a routine for determining an initial location of a user, a routine for mapping an actual space with a content space based on the initial location, a routine for tracking location variation, a routine for determining a virtual viewpoint in the content space corresponding to the location of the user, and a routine for playing content corresponding to the virtual viewpoint in the content space.
  • The initial location is defined as a reference value for mapping the content space to the actual space.
  • the actual space is a 3D space in which the user and the display unit are located.
  • the content space is a virtual space in which content displayed through a display unit exists.
  • The virtual viewpoint is defined as a viewpoint of the user in a content space mapped with an actual space.
  • the interface unit 180 performs a function for connecting the content playing apparatus 100 with a peripheral device in a wired or wireless scheme.
  • the interface unit 180 may include a Zigbee® module, a Wi-Fi module, or a Bluetooth® module.
  • the interface unit 180 may receive and transfer a control signal for realistic type playback from the controller 190 to a peripheral device. That is, the controller 190 may control the peripheral device through the interface unit 180.
  • The peripheral device may be a home network device, a stereoscopic sound device, a lamp, an air conditioner, or a heater.
  • the controller 190 may control the peripheral device to play content for stimulating senses of the user, for example, a touch sense, a sight sense, and a smell sense.
  • The controller 190 may control an overall operation of the content playing apparatus 100 and signal flow between internal structural elements of the content playing apparatus 100. Further, the controller 190 may control power supply from a battery to the internal elements. Moreover, the controller 190 may execute various applications stored in the program area. Specifically, in the present invention, if a start event for realistic type playback is sensed, the controller 190 may execute the foregoing realistic type playback program. That is, if the realistic type playback program is executed, the controller 190 determines an initial location of the user and tracks location variation of the user. Further, the controller 190 maps the content space with the actual space based on the initial location, determines a virtual viewpoint corresponding to the tracked location, and controls the content processor 150 to play content corresponding to the virtual viewpoint. Furthermore, the controller 190 may control a peripheral device through the interface unit 180 to play content corresponding to the virtual viewpoint. The realistic type playback function of the controller 190 will now be described in detail.
  • FIG. 2 is a diagram illustrating a three dimensional coordinate system according to an embodiment of the present invention.
  • A method for expressing a 3D space according to the present invention may use a three-dimensional coordinate system.
  • a solid line expresses a positive value
  • a dotted line expresses a negative value.
  • Coordinates of the user in an actual space at time t are expressed as (x_{u,t}, y_{u,t}, z_{u,t}).
  • Coordinates of a camera in a content space at time t are expressed as (x_{c,t}, y_{c,t}, z_{c,t}).
  • FIGS. 3A, 3B, and 3C are diagrams illustrating a perspective view, a plan view, and a side view, respectively, of an actual space according to an embodiment of the present invention.
  • The central point 302 of a screen 301 of a display unit in the actual space is set to (0,0,0) of a coordinate system.
  • A right direction and a left direction of the central point 302 become a positive direction and a negative direction of an X_u axis, respectively, with reference to a direction in which the user views the screen.
  • An upward direction and a downward direction of the central point 302 become a positive direction and a negative direction of a Y_u axis, respectively.
  • The direction from the screen 301 toward the user 303 becomes a positive direction of a Z_u axis, and the opposite direction becomes a negative direction of the Z_u axis.
  • The location of the user may be expressed as (x_u, y_u, z_u).
  • A horizontal length of the screen 301 may be expressed as the Display Screen Width (DSW), a vertical length of the screen 301 as the Display Screen Height (DSH), and a straight distance between the screen 301 and the user 303 as the Watching Distance (WD).
  • FIGS. 4A and 4B are diagrams illustrating a perspective view and a plan view, respectively, of a virtual content space according to an embodiment of the present invention.
  • The virtual camera described herein is not a real camera, but represents the user in the content space, corresponding to the user in the actual space.
  • A focus 402 on a focus surface 401 in the content space is set to (0,0,0) of a coordinate system.
  • A right direction and a left direction of the focus 402 become a positive direction and a negative direction of an X_c axis, respectively, with reference to a direction in which the virtual camera 403 faces the focus surface 401.
  • An upward direction and a downward direction of the focus 402 become a positive direction and a negative direction of a Y_c axis, respectively.
  • The direction from the focus surface 401 toward the virtual camera 403 becomes a positive direction of a Z_c axis, and the opposite direction becomes a negative direction of the Z_c axis.
  • The location of the virtual camera 403 may be expressed as (x_c, y_c, z_c).
  • A horizontal length of the focus surface 401 may be expressed as the Focal Width (FW), a vertical length of the focus surface 401 as the Focal Height (FH), and a straight distance between the focus surface 401 and the virtual camera 403 as the Focal Length (FL).
  • the FL may be set by the user.
  • The size of the focus surface 401 may be set by adjusting an angle of the camera. That is, because the virtual camera 403 is virtual, the focal length and the angle of the camera may be set as needed.
  • FIG. 5 is a block diagram illustrating a realistic playing unit according to an embodiment of the present invention.
  • The realistic type playing unit 500 may be configured inside the controller 190 or be configured separately. Herein, it is assumed that the realistic type playing unit 500 is configured inside the controller 190.
  • The realistic type playing unit 500 of the present invention (here, the controller 190) may include a tracker 510, an initiator 520, a space mapper 530, a virtual viewpoint determinator 540, and a content processing controller 550.
  • The tracker 510 tracks a location of the user 303. That is, the tracker 510 tracks coordinates (x_u, y_u, z_u) of the user using data received from the sensor 130.
  • Specifically, the tracker 510 detects characteristic information, for example, a face of the user 303, from the received sensing information, and determines a central point of the detected face as the coordinates (x_u, y_u, z_u) of the user 303.
  • In this case, z_u (i.e., WD) may be computed using the size of the detected face.
  • The initiator 520 determines an initial location of the user, which is a reference value for mapping the content space with the actual space. That is, if a start event for realistic type playback is sensed, the initiator 520 determines coordinates input from the tracker 510 as an initial location of the user. Specifically, if the location of the user 303 does not change beyond a preset error after watching the content starts, the initiator 520 may determine the location of the user 303 as the initial location. Further, if a predetermined key value is input from the remote controller 120, the initiator 520 may determine a location of the user at the input time of the predetermined key value as the initial location.
  • Alternatively, if a predetermined gesture of the user is detected, the initiator 520 may determine a location of the user at the detection time of the predetermined gesture as the initial location. To do this, the tracker 510 may detect a predetermined gesture of the user, for example, an action of lowering a hand after lifting it, using a template matching method. If the predetermined gesture is detected, the tracker 510 informs the initiator 520 of the detection.
  • FIG. 6 is a diagram illustrating a space mapping method according to an embodiment of the present invention.
  • The following Equation (1) expresses a relative ratio of the content space coordinate system to the actual space coordinate system in a space mapping method according to an embodiment of the present invention (a sketch of this mapping appears at the end of this section).
  • Here, t_0 represents the time point at which the initial location is determined by the initiator 520.
  • The space mapper 530 maps the content space 603 to the actual space 602 based on the initial location of the user 601. That is, the space mapper 530 determines FW, FH, and FL, and then computes X_Ratio, Y_Ratio, and Z_Ratio (at t_0) as illustrated in Equation (1).
  • FIG. 7 is a diagram illustrating a space mapping method according to an embodiment of the present invention.
  • The following Equation (2) expresses a relative ratio of the content space coordinate system to the actual space coordinate system in a space mapping method according to an embodiment of the present invention.
  • DSW, DSH, and WD are values depending on the size of the display unit and the actual space.
  • The space mapper 530 may add or subtract a predetermined adjustment value to or from the foregoing values to extend or shorten the actual space mapped to the content space.
  • the space mapper 530 may control the size of displayed content using the adjustment.
  • The space mapper 530 may receive the adjustment value from the remote controller 120 through the remote controller receiver 125 at any time, before or during realistic type playback.
  • FIG. 8 is a diagram illustrating a virtual viewpoint determining method according to an embodiment of the present invention.
  • The following Equation (3) is a calculation equation of the virtual viewpoint determining method (a sketch of this computation appears at the end of this section).
  • The virtual viewpoint determinator 540 receives coordinates (x_{u,t+1}, y_{u,t+1}, z_{u,t+1}) of the user 801 from the tracker 510, and receives the coordinate transformation values, namely X_Ratio, Y_Ratio, and Z_Ratio, from the space mapper 530. As illustrated in Equation (3), the virtual viewpoint determinator 540 computes coordinates (x_{c,t+1}, y_{c,t+1}, z_{c,t+1}) of the virtual camera 802 mapped to the coordinates of the user 801 using the received information.
  • a Minimum Transition Threshold (MTT) for moving a location of the camera may be previously set in the present invention.
  • The MTT may be an option item that the user may directly set.
  • When the location variation of the user is greater than the MTT, the virtual viewpoint determinator 540 computes the coordinates of the virtual camera 802.
  • The MTT may be set differently for the X, Y, and Z axes; for example, the MTT for the Z axis may be set to have the greatest value.
  • FIG. 9 is a diagram illustrating an angle control method of a virtual camera according to an embodiment of the present invention.
  • The virtual viewpoint determinator 540 calculates angle variation amounts θ (θ_X, θ_Y, θ_Z) of the user 902 with respect to a central point 901 of a screen.
  • FIG. 10 is a diagram illustrating a stereoscopic image control method according to an embodiment of the present invention.
  • The content processing controller 550 receives a virtual viewpoint, namely coordinates (x_{c,t+1}, y_{c,t+1}, z_{c,t+1}) of a virtual camera, from the virtual viewpoint determinator 540. Further, the content processing controller 550 may receive an angle adjustment of the virtual viewpoint, namely an angle control value θ of the virtual camera, from the virtual viewpoint determinator 540. Moreover, the content processing controller 550 controls the content processor 150 based on the received information to adjust brightness, stereoscopic sensation, and depth of a stereoscopic image. Referring to FIG. 10, if an initial location 1001 of a user is determined, the content processing controller 550 performs a control operation to display an object that the virtual camera faces at its initial location 1002.
  • As the location of the user moves from 1001 to 1003 to 1005, the location of the virtual camera in the content space mapped to the actual space moves from 1002 to 1004 to 1006. Accordingly, a different part of the object is displayed according to the location variation of the virtual camera. If the user moves from 1005 to 1007, the camera moves from 1006 to 1008. When the camera is located at 1008, the object falls outside the angle of view of the virtual camera and is no longer displayed. However, the content processing controller 550 rotates the camera in a direction of the object 1009 by the angle control value θ so that the object 1009 continues to be displayed.
  • FIG. 11 is a diagram illustrating a stereoscopic sound control method according to an embodiment of the present invention.
  • The content processing controller 550 controls the content processor 150 to adjust a direction and a distance of a stereoscopic sound (a sketch of this control appears at the end of this section).
  • the content processing controller 550 may adjust a distance in a manner that a sound of a car is gradually increased such that the user experiences the car approaching.
  • the content processing controller 550 may adjust a distance in a manner that a sound of a car is gradually reduced such that the user experiences the car leaving.
  • The content processing controller 550 controls the direction of the stereoscopic sound such that a sound of the car is output from a front speaker and a center speaker.
  • the content processing controller 550 controls the direction of a stereoscopic sound such that a sound of the car is output from a rear speaker.
  • The foregoing content playing apparatus 100 may further include elements that are not mentioned above, such as a camera, a microphone, and a GPS receiving module. Because the structural elements can be variously changed according to the convergence trend of digital devices, not all such elements can be listed here. However, the content playing apparatus 100 may include structural elements equivalent to the foregoing structural elements. Further, specific elements of the content playing apparatus 100 of the present invention may be omitted from, or substituted by other elements in, the foregoing arrangements according to the provided form. This will be easily understood by those skilled in the art.
  • FIG. 12 is a block diagram illustrating a network of a home network system to which a content playing apparatus is applied according to an embodiment of the present invention.
  • a home network system may include a content playing apparatus 1200, a home network server 1210, and a plurality of home network devices.
  • the content playing apparatus may include the foregoing structural elements.
  • the content playing apparatus 1200 may communicate with a home network server 1210 in a communication scheme such as a ZigBee scheme, a Bluetooth® scheme, or a Wireless LAN scheme. If a start event for realistic type playback is sensed, the content playing apparatus 1200 executes a realistic type playing program.
  • the content playing apparatus 1200 may control the home network device to play the content corresponding to a location of the user.
  • the home network server 1210 controls home network devices.
  • the home network server 1210 may drive home network devices under the control of the content playing apparatus 1200.
  • the home network devices have unique addresses, respectively, and are controlled through the addresses by the home network server 1210.
  • the home network device plays content stimulating senses of the user under the control of the content playing apparatus 1200.
  • the home network devices may include an air conditioner 1220, a humidifier 1230, a heater 1240, and a lamp 1250.
  • The content playing apparatus 1200 may control the air conditioner 1220, the humidifier 1230, the heater 1240, and the lamp 1250 according to location variation of the user to adjust an intensity of wind, peripheral brightness, temperature, and humidity. For example, referring to FIG. 11, when the user is located at 1102, the content playing apparatus 1200 may increase the intensity of wind such that the user experiences the car being near.
  • Accordingly, the user may be stimulated through the sense of touch, and may experience being present in a real content space.
  • FIG. 13 is a flowchart illustrating a content playing method according to an embodiment of the present invention.
  • The controller 190 may initially be in an idle state.
  • The idle state may be defined as a state of displaying an image before realistic type playback. If the user operates the remote controller 120 in the idle state, the controller 190 senses a start event for realistic type playback. As described previously, if a start event is sensed, the initiator 520 determines coordinates input from the tracker 510 as an initial location of a user in step 1301. The tracker 510 may track the location of the user even before the realistic type playback step.
  • A space mapper 530 maps a content space to an actual space based on the determined initial location in step 1302.
  • A virtual viewpoint determinator 540 computes location variation amounts (Δx_u, Δy_u, Δz_u) of the user in step 1303.
  • The virtual viewpoint determinator 540 compares the computed location variation amounts of the user with the MTT in step 1304. As a comparison result of step 1304, when the computed location variation amounts of the user are greater than the MTT, the process proceeds to step 1305.
  • The virtual viewpoint determinator 540 determines the location (x_{c,t+1}, y_{c,t+1}, z_{c,t+1}) of the virtual camera, namely the virtual viewpoint, using Equation (3), and transfers it to the content processing controller 550 in step 1305.
  • the content processing controller 550 controls the content processor 150 based on the received virtual viewpoint in step 1306. That is, the content processing controller 550 controls the content processor 150 to play content corresponding to the received virtual viewpoint. Further, the content processing controller 550 controls the content processor 150 based on the received virtual viewpoint to adjust a direction or a distance of a stereoscopic sound in step 1306.
  • the content processing controller 550 may control a peripheral device, such as home network devices based on the virtual viewpoint to adjust an intensity of wind, temperature, humidity, or brightness.
  • the content processing controller 550 determines whether a termination event of realistic type playback is sensed in step 1307. If the termination event of realistic type playback is sensed at step 1307, a process for realistic playback is terminated. Conversely, if the termination event is not sensed, the process returns to step 1303.
  • FIG. 14 is a flowchart illustrating a content playing method according to another embodiment of the present invention.
  • A content playing method according to another embodiment of the present invention may include steps 1401 to 1407. Because steps 1401, 1402, 1404, 1405, and 1407 correspond to the foregoing steps 1301, 1302, 1304, 1305, and 1307, a description thereof will be omitted.
  • Step 1403 is a step in which the virtual viewpoint determinator 540 calculates the angle variation amount θ of FIG. 9 together with the location variation amounts Δx_u, Δy_u, and Δz_u of the user.
  • The content processing controller 550 plays content corresponding to a received virtual viewpoint, rotates a direction of the virtual viewpoint by the computed angle variation amount θ, and plays content corresponding to the rotated virtual viewpoint.
  • The content playing method according to an embodiment of the present invention as described above may be implemented in the form of program commands executable by various computer means and recorded in a computer readable recording medium.
  • the computer readable recording medium may include a program command, a data file, and a data structure individually or a combination thereof.
  • The program command recorded in the recording medium may be specially designed or configured for the present invention, or may be known to and usable by a person having ordinary skill in the computer software field.
  • The computer readable recording medium includes magnetic media such as a hard disk, a floppy disk, or a magnetic tape, optical media such as a Compact Disc Read Only Memory (CD-ROM) or a Digital Versatile Disc (DVD), magneto-optical media such as a floptical disk, and hardware devices such as a ROM, a Random Access Memory (RAM), and a flash memory that store and execute program commands.
  • The program command includes a machine language code created by a compiler and a high-level language code executable by a computer using an interpreter.
  • The foregoing hardware device may be configured to operate as at least one software module to perform an operation of the present invention, and vice versa.
  • a content playing method and an apparatus thereof according to the present invention have an effect that they may realistically play content stimulating a sense of the user.
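
The text of Equations (1) and (2) is not reproduced above. Reading the surrounding description, the mapping amounts to per-axis ratios between the content space dimensions (FW, FH, FL) and the actual space dimensions (DSW, DSH, WD at time t_0), optionally widened or narrowed by an adjustment value. The sketch below encodes that reading; the function and field names are illustrative assumptions, not the patent's literal equations.

```python
# Sketch of the space mapping implied by Equations (1) and (2): per-axis ratios between
# the content space (FW, FH, FL) and the actual space (DSW, DSH, WD at time t_0).
# The exact equations are not reproduced in the text, so this is an assumed reading.
from dataclasses import dataclass

@dataclass
class MappingRatios:
    x_ratio: float   # X_Ratio, relating FW to DSW (optionally adjusted)
    y_ratio: float   # Y_Ratio, relating FH to DSH (optionally adjusted)
    z_ratio: float   # Z_Ratio, relating FL to WD at t_0 (optionally adjusted)

def map_spaces(fw, fh, fl, dsw, dsh, wd_t0, adjustment=0.0):
    """The adjustment value extends (positive) or shortens (negative) the mapped actual space."""
    return MappingRatios(
        x_ratio=fw / (dsw + adjustment),
        y_ratio=fh / (dsh + adjustment),
        z_ratio=fl / (wd_t0 + adjustment),
    )

# Example: a 1.2 m wide, 0.675 m high screen watched from 3 m, mapped to a 4 x 2.25
# focus surface at FL = 6 content-space units.
print(map_spaces(4.0, 2.25, 6.0, 1.2, 0.675, 3.0))
```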
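
Similarly, Equation (3) is not reproduced above; the description indicates that the user coordinates at time t+1 are converted into virtual camera coordinates using X_Ratio, Y_Ratio, and Z_Ratio, that movements below the per-axis Minimum Transition Threshold (MTT) are ignored, and that FIG. 9 adds angle variation amounts relative to the screen center. The sketch below is one hedged reading of that procedure; the scaled-coordinate form and the arctangent angles are assumptions.

```python
# Sketch of the virtual viewpoint determination of Equation (3), the per-axis MTT check,
# and the angle variation of FIG. 9. The scaled-coordinate reading of Equation (3) and the
# arctangent form of the angles are assumptions, not the patent's literal formulas.
import math

def determine_viewpoint(user_xyz, prev_user_xyz, ratios_xyz, mtt_xyz):
    """Return camera coordinates (x_c, y_c, z_c), or None while motion stays below the MTT."""
    deltas = [abs(a - b) for a, b in zip(user_xyz, prev_user_xyz)]
    if all(d <= m for d, m in zip(deltas, mtt_xyz)):
        return None                                   # small movements do not move the camera
    return tuple(u * r for u, r in zip(user_xyz, ratios_xyz))

def angle_variation(user_xyz):
    """Angles of the user relative to the screen center (0, 0, 0), as suggested by FIG. 9."""
    x_u, y_u, z_u = user_xyz
    theta_y = math.degrees(math.atan2(x_u, z_u))      # left/right rotation about the Y axis
    theta_x = math.degrees(math.atan2(y_u, z_u))      # up/down rotation about the X axis
    return theta_x, theta_y

# Example: the user steps 0.4 m to the right (MTT of 0.1 m for X and Y, 0.3 m for Z).
print(determine_viewpoint((0.4, 0.0, 3.0), (0.0, 0.0, 3.0), (3.33, 3.33, 2.0), (0.1, 0.1, 0.3)))
print(angle_variation((0.4, 0.0, 3.0)))
```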
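
The stereoscopic sound control of FIG. 11 and the home network control of FIG. 12 both react to how close the user, or the mapped virtual camera, is to a content object such as the car. The sketch below illustrates one possible mapping from that distance to speaker gains and device commands; the gain law, thresholds, and device field names are illustrative assumptions rather than the patent's implementation.

```python
# Sketch of driving stereoscopic sound (FIG. 11) and home network devices (FIG. 12) from the
# user's distance to a content object. The inverse-distance gain law, thresholds, and device
# field names are illustrative assumptions.
def speaker_gains(distance_m, object_in_front, min_distance_m=0.5):
    """Louder as the object approaches; route to front/center or rear speakers by direction."""
    gain = min(1.0, min_distance_m / max(distance_m, min_distance_m))
    front = gain if object_in_front else 0.0
    rear = 0.0 if object_in_front else gain
    return {"front_left": front, "front_right": front, "center": front,
            "rear_left": rear, "rear_right": rear}

def peripheral_commands(distance_m, max_range_m=5.0):
    """Map proximity to wind intensity and lamp brightness for home network devices."""
    proximity = max(0.0, 1.0 - distance_m / max_range_m)        # 0 = far, 1 = very close
    return {"air_conditioner_fan_level": round(3 * proximity),  # 0-3 fan steps
            "lamp_brightness_pct": int(100 * proximity)}

# Example: the car object passes 1.2 m in front of the user.
print(speaker_gains(1.2, object_in_front=True))
print(peripheral_commands(1.2))
```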

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method and apparatus for realistically playing content stimulating the sight and hearing senses of a user corresponding to a location of the user, by determining a first location of the user, mapping a content space displayed on a display unit to correspond with an actual space in which the user is positioned based on the determined first location, determining a virtual viewpoint in the content space corresponding to a second location of the user, and playing content corresponding to the determined virtual viewpoint.

Description

CONTENT PLAYING METHOD AND APPARATUS
The present invention relates generally to a content playing method and apparatus, and more particularly, to a method for playing content corresponding to a location of a user and an apparatus thereof.
In recent years, the demand for Three-Dimensional (3D) image technology has increased, and with the more common use of digital broadcasting, the use of stereoscopic images in 3D TVs and 3D information terminals is being actively researched. In general, the stereoscopic image implemented through 3D technology is formed by the principle of stereoscopic vision experienced through two eyes. Because the two eyes are spaced apart from each other by approximately 65 mm, binocular parallax acts as a main factor of depth perception. When the left and right eyes view different images, the two images are transferred to the brain through the retinas, and the brain combines them such that the user experiences the depth of the stereoscopic image. However, while a 3D TV is capable of showing a 3D image having a fixed viewpoint regardless of the location of a user, it cannot provide a realistic image in which the user feels present in the displayed scene.
Accordingly, the present invention has been made to solve the above mentioned problems occurring in the prior art, and the present invention provides a method for playing realistic content stimulating at least one of the senses of a user, corresponding to a location of the user, and an apparatus thereof.
According to an aspect of the present invention, there is provided a content playing method including determining a first location of a user; mapping a content space displayed on a display unit to correspond with an actual space in which the user is present based on the first determined location; determining a virtual viewpoint in the content space corresponding to a second location of the user; and playing content corresponding to the determined virtual viewpoint.
According to another aspect of the present invention, there is provided a content playing apparatus including a content collecting unit for collecting content stimulating senses of a user; a content processor for performing a processing operation such that content input from the content collecting unit is played; a content playing unit for playing content input from the content collecting unit; a sensor for collecting information associated with a location of the user such that the content is played corresponding to the location of the user; and a controller for determining a virtual viewpoint in a virtual content space corresponding to the location of the user based on received information from the sensor and controlling such that content corresponding to the determined virtual viewpoint is played.
The present invention provides a method for playing realistic content stimulating at least one of the senses of a user, corresponding to a location of the user, and an apparatus thereof.
The above and other aspects, features and advantages of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a block diagram illustrating a configuration of a content playing apparatus according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a three-dimensional coordinate system according to an embodiment of the present invention;
FIGS. 3A, 3B, and 3C are diagrams illustrating a perspective view, a plan view, and a side view, respectively, of an actual space according to an embodiment of the present invention;
FIGS. 4A and 4B are diagrams illustrating a perspective view and a plan view, respectively, of a virtual content space according to an embodiment of the present invention;
FIG. 5 is a block diagram illustrating a realistic type playing unit according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a space mapping method according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a space mapping method according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating a virtual viewpoint determining method according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating an angle control method of a virtual camera according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating a stereoscopic image control method according to an embodiment of the present invention;
FIG. 11 is a diagram illustrating a stereoscopic sound control method according to an embodiment of the present invention;
FIG. 12 is a block diagram illustrating a network of a home network system to which a content playing apparatus is applied according to an embodiment of the present invention;
FIG. 13 is a flowchart illustrating a content playing method according to an embodiment of the present invention; and
FIG. 14 is a flowchart illustrating a content playing method according to an embodiment of the present invention.
Hereinafter, various embodiments of the content playing method and apparatus according to the present invention are described in detail with reference to the accompanying drawings. The same reference numbers are used throughout the drawings to refer to the same or similar elements. Detailed description of well-known functions and structures is omitted to avoid obscuring the subject matter of the present invention.
As used herein, the term “content” refers to output that stimulates the senses of a user, such as a sense of sight, a sense of hearing, and a sense of touch. For example, the content may be an image, light, a voice, or wind. Further, realistic type playback may refer to playing content corresponding to a location of the user. That is, the user may experience the same content differently at different locations. For example, when the content is a car displayed on a screen, a front surface or a side surface of the car is viewed by the user depending on the user location. The content playing method and apparatus are applicable to an electronic device having a function of playing content stimulating the senses of the user. Specifically, the content playing method and apparatus are applicable to a notebook computer, a desktop PC, a tablet PC, a smart phone, a High Definition TeleVision (HDTV), a smart TV, a 3-Dimensional (3D) TV, an Internet Protocol Television (IPTV), a stereoscopic sound system, a theater system, a home theater, a home network system and the like.
The content playing method and apparatus provide a function for tracking location variation of the user, and a function for realistically playing content corresponding to the tracked location of the user. The content playing method and apparatus according to an embodiment of the present invention may provide a function that receives content, such as images, from a content provider through a Local Area Network (LAN), a wireless LAN, or a 3rd Generation (3G) or 4th Generation (4G) wireless communication network, stores the received images in a database, and plays them in real time. The images may include stereoscopic images. The stereoscopic image may be a 3D movie, a 3D animation, or 3D computer graphics. Further, the stereoscopic image may be multimedia combined with stereoscopic sound.
FIG. 1 is a block diagram illustrating a configuration of a content playing apparatus according to an embodiment of the present invention. It is assumed that the content playing apparatus of FIG. 1 is a 3D TV which enables content to appear to exist in a space between a screen and the user. Referring to FIG. 1, a content playing apparatus 100 according to an embodiment of the present invention includes an input unit 110, a remote controller 120, a remote controller receiver 125, a sensor 130, a content collecting unit 140, a content processor 150, a sound output unit 161, an image display unit 162, a memory 170, an interface unit 180, and a controller 190.
The input unit 110 may include a plurality of input keys and function keys for receiving input of numeral or character information, and for setting various functions. The function keys may include arrow keys, side keys, and hot keys set to perform predetermined functions. Further, the input unit 110 creates and transfers a key event associated with user setting and function control of the content playing apparatus 100. The key event may include a power on/off event, a volume control event, a screen on/off event, etc. The controller 190 controls the foregoing elements in response to the key event.
The remote controller 120 creates various key events for operating the content playing apparatus 100, and converts the created key event into a wireless signal, and transmits the wireless signal to the remote controller receiver 125. Specifically, the remote controller 120 of the present invention may create a start event for requesting realistic type playback and a termination event for terminating the realistic type playback. As illustrated above, the realistic type playback may be defined to play content corresponding to a location of the user. The remote controller receiver 125 converts a received wireless signal into an original key event, and transfers the original key event to the controller 190.
The sensor 130 collects information associated with the location of the user such that the location of the user may be tracked, and transfers the collected information to the controller 190. Specifically, the sensor 130 may be implemented by an image sensor or an optical sensor for sensing light of a predetermined wavelength, such as infrared rays. The sensor 130 converts a sensed physical quantity into an electric signal, and an Analog to Digital Converter (ADC) converts the electric signal into data and transfers the data to the controller 190.
The content collecting unit 140 performs a function for collecting content stimulating senses of the user. Specifically, the content collecting unit 140 performs a function for collecting images and sounds from a network or a peripheral device. That is, the content collecting unit 140 may include a broadcasting receiver 141 and an Internet communication unit 142. Specifically, the broadcasting receiver 141 selects one from a plurality of broadcasting channels, and demodulates a broadcasting signal of the selected broadcasting channel to original broadcasting content. The Internet communication unit 142 includes a wired modem or a wireless modem for receiving various information for home shopping, home banking, and on-line gaming, and MP3 use and additional information with respect thereto. The Internet communication unit 142 may include a mobile communication module (e.g., 3G mobile communication module, 3.5G mobile communication module, and 4G mobile communication module) and a near distance communication module (e.g., a Wi-Fi module).
The content processor 150 performs a processing function to play content from the content collecting unit 140. Specifically, the content processor 150 classifies input content into a stereoscopic image and a stereoscopic sound. The content processor 150 may include a sound processor for decoding the classified stereoscopic sound and outputting it to the sound output unit 161, and an image processor for decoding the classified stereoscopic image into a left image and a right image and outputting the left image and the right image to the image display unit 162. Further, the content processor 150 may compress and transfer input content to the controller 190 under the control of the controller 190. Accordingly, the controller 190 transfers compressed content to the memory 170. Specifically, the sound processor may control a direction or a distance of a stereoscopic sound according to the location of the user. In other words, the sound processor may change the type of sound output from the sound output unit 161 according to the location of the user, or change the volume according to the type of the sound. The image processor may control brightness, stereoscopic sensation, and depth according to the location of the user.
The content playing unit 160 performs a function for playing content processed by the content processor 150. The content playing unit 160 may include a sound output unit 161 and an image display unit 162. The sound output unit 161 outputs a decoded stereoscopic sound, and includes a plurality of speakers, for example, 5.1 channel speakers. The image display unit 162 displays a stereoscopic image. Specifically, the image display unit 162 displays a stereoscopic image with depth, as if the stereoscopic image actually exists in a three-dimensional space between the screen and the user, through a display unit for displaying the stereoscopic image and a 3D implementing unit for allowing the user to experience depth with respect to the displayed stereoscopic image. The display unit may be implemented as a Liquid Crystal Display (LCD), Organic Light Emitting Diodes (OLED), or Active Matrix Organic Light Emitting Diodes (AMOLED). The 3D implementing unit is a structural element stacked on the display unit that causes different images to be recognized by the left and right eyes. Generally, 3D implementing schemes are divided into a glass scheme and a glass-free scheme. The glass scheme includes a color filter scheme, a polarization filter scheme, and a shutter glass scheme. The glass-free scheme includes a lenticular lens scheme and a parallax barrier scheme. Because the 3D implementing schemes are known in the art, a detailed description thereof is omitted.
The memory 170 stores programs and data necessary for an operation of the content playing apparatus 100. The memory 170 may be configured by a volatile storage medium, a nonvolatile storage medium, or a combination thereof. The volatile storage medium includes a semiconductor memory such as RAM, DRAM, or SRAM. The non-volatile storage medium may include a hard disk. Further, the memory 170 may be divided into a data area and a program area. Specifically, the data area of the memory 170 may store data created by the controller 190 according to use of the content playing apparatus 100. The data area may also store content compressed in a predetermined format provided from the controller 190. The program area of the memory 170 may store an operating system (OS) for booting the content playing apparatus 100 and operating respective elements, and applications for supporting various user functions, for example, a web browser for accessing an Internet server, an MP3 player function for playing sound sources, an image output function for playing photographs, and a moving image playback function. Specifically, the program area of the present invention may store a realistic type playback program. The realistic type playing program may include a routine for determining an initial location of a user, a routine for mapping an actual space with a content space based on the initial location, a routine for tracking location variation, a routine for determining a virtual viewpoint in the content space corresponding to the location of the user, and a routine for playing content corresponding to the virtual viewpoint in the content space. The initial location is defined as a reference value for mapping the content space to the actual space. The actual space is a 3D space in which the user and the display unit are located. The content space is a virtual space in which content displayed through the display unit exists. Further, the virtual viewpoint is defined as a viewpoint of the user in the content space mapped with the actual space.
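As a rough illustration of how the routines of the realistic type playback program listed above could fit together, the following sketch organizes them into one loop. Every name (tracker, space_mapper, and so on), the stability window used to fix the initial location, and the overall structure are assumptions for illustration, not the patent's implementation.

```python
# Sketch of the realistic type playback routines described above, organized as one loop.
# Object names, method names, and the stability rule are illustrative assumptions only.
import time

def realistic_playback(tracker, space_mapper, viewpoint_determinator, content_controller,
                       stability_window_s=2.0, error_mm=50.0):
    """Determine the initial location, map the spaces, then track the user and play content."""
    # Routine 1: the initial location is a point where the user stays within a preset error.
    initial = tracker.current_location()              # -> (x_u, y_u, z_u) in the actual space
    window_start = time.monotonic()
    while time.monotonic() - window_start < stability_window_s:
        loc = tracker.current_location()
        if max(abs(a - b) for a, b in zip(loc, initial)) > error_mm:
            initial, window_start = loc, time.monotonic()   # user moved: restart the window

    # Routine 2: map the content space to the actual space based on the initial location.
    ratios = space_mapper.map_spaces(initial)

    # Routines 3-5: track location variation, determine the virtual viewpoint, play content.
    previous = initial
    while not content_controller.termination_requested():
        current = tracker.current_location()
        viewpoint = viewpoint_determinator.determine(current, previous, ratios)
        if viewpoint is not None:                     # None while motion stays below the MTT
            content_controller.play_for_viewpoint(viewpoint)
            previous = current
```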
The interface unit 180 performs a function for connecting the content playing apparatus 100 with a peripheral device in a wired or wireless scheme. The interface unit 180 may include a Zigbee® module, a Wi-Fi module, or a Bluetooth® module. Specifically, the interface unit 180 may receive a control signal for realistic type playback from the controller 190 and transfer it to a peripheral device. That is, the controller 190 may control the peripheral device through the interface unit 180. The peripheral device may be a home network device, a stereoscopic sound device, a lamp, an air conditioner, or a heater. In other words, the controller 190 may control the peripheral device to play content for stimulating the senses of the user, for example, a sense of touch, a sense of sight, and a sense of smell.
The controller 190 may control an overall operation of the content playing apparatus 100, and signal flow between internal structural elements of the content playing apparatus 100. Further, the controller 190 may control power supply from a battery to the internal elements. Moreover, the controller 190 may execute various applications stored in the program area. Specifically, in the present invention, if a start event for realistic type playback is sensed, the controller 190 may execute the foregoing realistic type playback program. That is, if the realistic type playback program is executed, the controller 190 determines an initial location of the user and tracks location variation of the user. Further, the controller 190 maps the content space with the actual space based on the initial location, determines a virtual viewpoint corresponding to the tracked location, and controls the content processor 150 to play content corresponding to the virtual viewpoint. Furthermore, the controller 190 may control a peripheral device through the interface unit 180 to play content corresponding to the virtual viewpoint. The realistic type playback function of the controller 190 will now be described in detail.
FIG. 2 is a diagram illustrating a three-dimensional coordinate system according to an embodiment of the present invention. As illustrated in FIG. 2, a method for expressing a 3D space according to the present invention may use a three-dimensional coordinate system. A solid line expresses a positive value, and a dotted line expresses a negative value. In addition, coordinates of the user in the actual space at time t are expressed as (xu,t, yu,t, zu,t), and coordinates of the camera in the content space at time t are expressed as (xc,t, yc,t, zc,t).
FIGS. 3A, 3B, and 3C are diagrams illustrating a perspective view, a plan view, and a side view of an actual space according to an embodiment of the present invention. Referring to FIG. 3A, the central point 302 of a screen 301 of the display unit in the actual space is set to (0,0,0) of the coordinate system. The right direction and the left direction of the central point 302 become the positive direction and the negative direction of the Xu axis, respectively, with reference to the direction in which the user views the screen. The upward direction and the downward direction at the central point 302 become the positive direction and the negative direction of the Yu axis. The direction from the screen 301 toward the user 303 becomes the positive direction of the Zu axis, and the opposite direction becomes the negative direction of the Zu axis. The location of the user may be expressed as (xu, yu, zu). Referring to FIGS. 3B and 3C, the horizontal length of the screen 301 may be expressed as Display Screen Width (DSW), the vertical length of the screen 301 as Display Screen Height (DSH), and the straight distance between the screen 301 and the user 303 as Watching Distance (WD).
FIGS. 4A and 4B are diagrams illustrating a perspective view and a plan view of a virtual content space. First, the virtual camera described herein is not a real camera, but refers to the user in the content space corresponding to the user in the actual space. Referring to FIG. 4A, a focus 402 on a focus surface 401 in the content space is set to (0,0,0) of the coordinate system. The right direction and the left direction of the focus 402 become the positive direction and the negative direction of the Xc axis, respectively, based on the direction in which the virtual camera 403 is directed toward the focus surface 401. The upward direction and the downward direction of the focus 402 become the positive direction and the negative direction of the Yc axis, respectively. The direction from the focus surface 401 toward the virtual camera 403 becomes the positive direction of the Zc axis, and the opposite direction becomes the negative direction of the Zc axis. The location of the virtual camera 403 may be expressed as (xc, yc, zc). The horizontal length of the focus surface 401 may be expressed as Focal Width (FW), the vertical length of the focus surface 401 as Focal Height (FH), and the straight distance between the focus surface 401 and the virtual camera 403 as Focal Length (FL). The FL may be set by the user. The size of the focus surface 401 may be set by adjusting the angle of the camera. That is, because the virtual camera 403 is virtual, the focal length and the angle of the camera may be set as needed.
FIG. 5 is a block diagram illustrating a realistic type playing unit according to an embodiment of the present invention. The realistic type playing unit 500 may be configured inside the controller 190 or configured separately; herein, it is assumed to be configured inside the controller 190. Referring to FIG. 5, the realistic type playing unit 500 of the present invention may include a tracker 510, an initiator 520, a space mapper 530, a virtual viewpoint determinator 540, and a content processing controller 550. The tracker 510 tracks the location of the user 303. That is, the tracker 510 tracks the coordinates (xu, yu, zu) of the user using data received from the sensor 130. Specifically, the tracker 510 detects characteristic information, for example, the face of the user 303, from the received sensing information, and determines the central point of the detected face as the coordinates (xu, yu, zu) of the user 303. In this case, zu, that is, WD, may be computed using the size of the detected face.
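Purely by way of a non-limiting illustration of the above, the Python sketch below estimates the user coordinates from a detected face; the pinhole-style assumption that the apparent face width shrinks in proportion to distance, and the calibration constants REFERENCE_FACE_WIDTH_PX, REFERENCE_WD_CM, cm_per_px_x, and cm_per_px_y, are introduced here for illustration only and are not taken from the specification.

```python
# Hypothetical calibration values: face width in pixels observed at a known
# reference watching distance (both assumed, not from the specification).
REFERENCE_FACE_WIDTH_PX = 120.0
REFERENCE_WD_CM = 250.0

def track_user_location(face_box, frame_width_px, frame_height_px,
                        cm_per_px_x, cm_per_px_y):
    """Return assumed actual-space coordinates (xu, yu, zu) of the user.

    face_box: (left, top, width, height) of the detected face, in pixels.
    cm_per_px_x, cm_per_px_y: hypothetical factors converting a pixel offset
    from the frame centre into centimetres of displacement in the actual space.
    """
    left, top, width, height = face_box
    # Central point of the detected face relative to the frame centre.
    offset_x = (left + width / 2.0) - frame_width_px / 2.0
    offset_y = frame_height_px / 2.0 - (top + height / 2.0)  # up is the positive Yu direction

    # zu (the watching distance WD) estimated from the apparent face size.
    zu = REFERENCE_WD_CM * (REFERENCE_FACE_WIDTH_PX / width)
    xu = offset_x * cm_per_px_x
    yu = offset_y * cm_per_px_y
    return (xu, yu, zu)
```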
The initiator 520 determines an initial location of the user, which serves as a reference value for mapping a content space to an actual space. That is, if a start event for realistic type playback is sensed, the initiator 520 determines the coordinates input from the tracker 510 as the initial location of the user. Specifically, if the location of the user 303 does not change beyond a preset error after watching of the content starts, the initiator 520 may determine the location of the user 303 as the initial location. Further, if a predetermined key value is input from the remote controller 120, the initiator 520 may determine the location of the user at the time the key value is input as the initial location. Further, if a start event is input from the tracker 510, the initiator 520 may determine the location of the user at the time the start event is input as the initial location. To do this, the tracker 510 may detect a predetermined gesture of the user, for example, an action of lowering a hand after lifting it, using a template matching scheme. If the predetermined gesture is detected, the tracker 510 informs the initiator 520 of the detection.
FIG. 6 is a diagram illustrating a space mapping method according to an embodiment of the present invention. The following Equation (1) expresses the relative ratio of the content space coordinate system to the actual space coordinate system in the space mapping method according to an embodiment of the present invention. In the equation, t0 represents the time point determined as the initial location by the initiator 520.
Equation (1)
== X axis conversion ==
FW = X_Ratio * DSW, X_Ratio = FW/DSW
== Y axis conversion ==
FH = Y_Ratio * DSH, Y_Ratio = FH/DSH
== Z axis conversion ==
FL = Z_Ratio * (WD at t0), Z_Ratio = FL/(WD at t0)
As illustrated in FIG. 6, the space mapper 530 maps the content space 603 to the actual space 602 based on the initial location of the user 601. That is, the space mapper 530 determines FW, FH, and FL, and then computes X_Ratio, Y_Ratio, and Z_Ratio (at t0) as shown in Equation (1).
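As a minimal sketch, the ratio computation of Equation (1) may be written as follows; the function and argument names (compute_ratios, fw, dsw, and so on) and the numbers in the usage line are illustrative only and are not part of the specification.

```python
def compute_ratios(fw, fh, fl, dsw, dsh, wd_t0):
    """Per-axis conversion ratios of Equation (1).

    fw, fh, fl : Focal Width, Height and Length of the content space (FIG. 4).
    dsw, dsh   : Display Screen Width and Height of the actual space (FIG. 3).
    wd_t0      : Watching Distance at the time t0 of the initial location.
    """
    x_ratio = fw / dsw
    y_ratio = fh / dsh
    z_ratio = fl / wd_t0
    return x_ratio, y_ratio, z_ratio

# Hypothetical numbers: a 100 cm x 56 cm screen watched from 250 cm, mapped to a
# focus surface of 16 x 9 content units with a focal length of 20 content units.
x_ratio, y_ratio, z_ratio = compute_ratios(16.0, 9.0, 20.0, 100.0, 56.0, 250.0)
```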
FIG. 7 is a diagram illustrating a space mapping method according to an embodiment of the present invention. The following Equation (2) expresses the relative ratio of the content space coordinate system to the actual space coordinate system in this space mapping method.
Equation (2)
== X axis conversion ==
FW = X_Ratio * (DSW + X_Adjustment)
X_Ratio = FW/(DSW + X_Adjustment)
== Y axis conversion ==
FH = Y_Ratio * (DSH + Y_Adjustment)
Y_Ratio = FH/(DSH + Y_Adjustment)
== Z axis conversion ==
FL = Z_Ratio * (WD at t0 + Z_Adjustment)
Z_Ratio = FL/(WD at t0 + Z_Adjustment)
DSW, DSH, and WD are values that depend on the size of the display unit and the actual space. As shown in Equation (2), the space mapper 530 may add or subtract a predetermined adjustment to or from the foregoing values to extend or shorten the actual space mapped to the content space. In other words, the space mapper 530 may control the size of the displayed content using the adjustment. The space mapper 530 may receive the adjustment from the remote controller 120 through the receiver 125 at any time, either before or during realistic type playback.
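A corresponding sketch of Equation (2), again with illustrative names only, simply adds the adjustments to the actual-space terms before the division; an adjustment of zero reduces it to Equation (1).

```python
def compute_ratios_with_adjustment(fw, fh, fl, dsw, dsh, wd_t0,
                                   x_adj=0.0, y_adj=0.0, z_adj=0.0):
    """Equation (2): a positive adjustment extends, and a negative adjustment
    shortens, the actual space mapped onto the content space, which in effect
    scales the displayed content."""
    x_ratio = fw / (dsw + x_adj)
    y_ratio = fh / (dsh + y_adj)
    z_ratio = fl / (wd_t0 + z_adj)
    return x_ratio, y_ratio, z_ratio
```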
FIG. 8 is a diagram illustrating a virtual viewpoint determining method according to an embodiment of the present invention. The following Equation (3) is the calculation used in the virtual viewpoint determining method.
Equation (3)
△xu = xu,t+1 - xu,t, △xc = X_Ratio * △xu, xc,t+1 = xc,t + △xc
△yu = yu,t+1 - yu,t, △yc = Y_Ratio * △yu, yc,t+1 = yc,t + △yc
△zu = zu,t+1 - zu,t, △zc = Z_Ratio * △zu, zc,t+1 = zc,t + △zc
Referring to FIG. 8, the virtual viewpoint determinator 540 receives coordinates (xu,t+1, yu,t+1, zu,t+1) of the user 801 from the tracker 510, and receives the coordinate transformation values, namely X_Ratio, Y_Ratio, and Z_Ratio, from the space mapper 530. As shown in Equation (3), the virtual viewpoint determinator 540 computes coordinates (xc,t+1, yc,t+1, zc,t+1) of the virtual camera 802 mapped to the coordinates of the user 801 using the received information. Even when the user stands still, the tracked location of the user may change slightly, or to a certain degree, due to sensor or recognition error. Accordingly, if the displayed content were changed for every such location variation, the user would experience inconvenience. Therefore, a Minimum Transition Threshold (MTT) for moving the location of the camera may be set in advance in the present invention. The MTT may be an option item that the user can set directly. Only when a location variation amount of the user 801, namely △xu, △yu, or △zu, is greater than the MTT does the virtual viewpoint determinator 540 compute the coordinates of the virtual camera 802. The MTT may be set differently for the X, Y, and Z axes. For example, the MTT for the Z axis may be set to have the greatest value, so that the coordinates of the virtual camera 802 are computed only for a large movement, for example, when the user stands up from a seat while watching.
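As a non-limiting sketch, Equation (3) together with the MTT check may be written as follows; update_virtual_camera and its argument names are illustrative, and the per-axis treatment of the MTT reflects the statement above that the MTT may be set differently for each axis.

```python
def update_virtual_camera(user_prev, user_curr, camera_prev, ratios, mtt):
    """Move the virtual camera according to Equation (3).

    user_prev, user_curr : user coordinates (xu, yu, zu) at times t and t+1.
    camera_prev          : virtual camera coordinates (xc, yc, zc) at time t.
    ratios               : (X_Ratio, Y_Ratio, Z_Ratio) from the space mapper.
    mtt                  : per-axis Minimum Transition Thresholds (mtt_x, mtt_y, mtt_z).
    """
    camera_next = list(camera_prev)
    for axis in range(3):
        delta_user = user_curr[axis] - user_prev[axis]
        # Ignore small variations caused by sensor or recognition error.
        if abs(delta_user) > mtt[axis]:
            camera_next[axis] = camera_prev[axis] + ratios[axis] * delta_user
    return tuple(camera_next)
```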
FIG. 9 is a diagram illustrating an angle control method of a virtual camera according to an embodiment of the present invention. Referring to FIG. 9, the virtual viewpoint determinator 540 calculates angle variation amounts θ(θX, θY, θZ) of the user 902 with respect to a central point 901 of the screen. That is, the virtual viewpoint determinator 540 calculates location variation amounts △xu,0 (= xu,t - xu,0), △yu,0 (= yu,t - yu,0), and △zu,0 (= zu,t - zu,0) based on the initial location of the user 902, and applies the calculated location variation amounts to a trigonometric function to calculate the angle variation amounts. The calculated angle variation amount θ may be used as the angle control value of the virtual camera 903. The calculation of the angle variation amount may be selectively enabled by the user.
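The exact trigonometric relation is not spelled out above; one plausible reading, sketched below under that assumption, treats the user's displacement from the initial location, seen from the central point 901 of the screen, as yaw and pitch angles that can be reused as the angle control value of the virtual camera 903.

```python
import math

def compute_angle_variation(user_init, user_curr):
    """Illustrative angle variation (theta_x, theta_y, theta_z) in degrees.

    user_init, user_curr: (xu, yu, zu) at the initial location and at time t.
    The atan2-based relation is an assumed interpretation of the trigonometric
    function mentioned above, not a formula given in the specification.
    """
    dx = user_curr[0] - user_init[0]       # horizontal displacement
    dy = user_curr[1] - user_init[1]       # vertical displacement
    wd = user_curr[2]                      # current straight distance to the screen

    theta_y = math.degrees(math.atan2(dx, wd))   # rotation about the Y axis (yaw)
    theta_x = math.degrees(math.atan2(dy, wd))   # rotation about the X axis (pitch)
    theta_z = 0.0                                # roll left unchanged in this sketch
    return (theta_x, theta_y, theta_z)
```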
FIG. 10 is a diagram illustrating a stereoscopic image control method according to an embodiment of the present invention.
The content processing controller 550 receives the virtual viewpoint, namely the coordinates (xc,t+1, yc,t+1, zc,t+1) of the virtual camera, from the virtual viewpoint determinator 540. Further, the content processing controller 550 may receive an angle adjustment of the virtual viewpoint, namely the angle control value θ of the virtual camera, from the virtual viewpoint determinator 540. Moreover, the content processing controller 550 controls the content processor 150 based on the received information to adjust brightness, stereoscopic sensation, and depth of a stereoscopic image. Referring to FIG. 10, if an initial location 1001 of the user is determined, the content processing controller 550 performs a control operation to display the object toward which the virtual camera at its initial location 1002 is oriented. If the location of the user moves along 1001 -> 1003 -> 1005, the location of the virtual camera in the content space mapped to the actual space moves along 1002 -> 1004 -> 1006. Accordingly, a different part of the object is displayed according to the location variation of the virtual camera. If the user moves from 1005 to 1007, the camera moves from 1006 to 1008. When the camera is located at 1008, the object falls outside the viewing angle of the virtual camera and is no longer displayed. However, the content processing controller 550 rotates the camera toward the object 1009 by the angle control value θ so that the object 1009 is continuously displayed.
FIG. 11 is a diagram illustrating a stereoscopic sound control method according to an embodiment of the present invention.
The content processing controller 550 controls the content processor 150 to adjust the direction and the distance of a stereoscopic sound. Referring to FIG. 11, when the user is located at 1101, the content processing controller 550 may adjust the distance such that the sound of a car is gradually increased and the user experiences the car approaching. When the user is located at 1102, the content processing controller 550 may adjust the distance such that the sound of the car is gradually reduced and the user experiences the car leaving. Further, when the user is located at 1101, the content processing controller 550 controls the direction of the stereoscopic sound such that the sound of the car is output from a front speaker and a center speaker. When the user is located at 1102, the content processing controller 550 controls the direction of the stereoscopic sound such that the sound of the car is output from a rear speaker.
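Purely as an assumed illustration of such a control, the sketch below scales the gain of a sound source with its distance from the virtual viewpoint and routes it to front or rear channels depending on whether the source lies in front of or behind that viewpoint; the inverse-distance gain law and the routing rule are assumptions introduced here, not part of the specification.

```python
def adjust_stereoscopic_sound(source_pos, viewpoint_pos, base_gain=1.0):
    """Return (gain, speaker_group) for one sound source.

    source_pos, viewpoint_pos: (x, y, z) in the content space coordinate system,
    where the positive Z direction points from the focus surface toward the
    virtual camera (FIG. 4).
    """
    dx = source_pos[0] - viewpoint_pos[0]
    dy = source_pos[1] - viewpoint_pos[1]
    dz = source_pos[2] - viewpoint_pos[2]
    distance = (dx * dx + dy * dy + dz * dz) ** 0.5

    gain = base_gain / max(distance, 1.0)          # louder as the source approaches
    # A source ahead of the viewpoint (smaller z) plays from the front/center
    # speakers; a source behind the viewpoint plays from the rear speakers.
    speaker_group = "front+center" if dz <= 0 else "rear"
    return gain, speaker_group
```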
The foregoing content playing apparatus 100 may further include elements that are not mentioned above, such as a camera, a microphone, and a GPS receiving module. Since the structural elements can be variously changed according to the convergence trend of digital devices, they cannot all be listed here. However, the content playing apparatus 100 may include structural elements equivalent to the foregoing structural elements. Further, in the content playing apparatus 100 of the present invention, specific elements in the foregoing arrangements may be omitted or substituted by other elements according to the provided form. This can be easily understood by those skilled in the art.
FIG. 12 is a block diagram illustrating a home network system to which a content playing apparatus is applied according to an embodiment of the present invention. Referring to FIG. 12, a home network system according to the present invention may include a content playing apparatus 1200, a home network server 1210, and a plurality of home network devices. The content playing apparatus 1200 may include the foregoing structural elements. The content playing apparatus 1200 may communicate with the home network server 1210 in a communication scheme such as a ZigBee scheme, a Bluetooth® scheme, or a Wireless LAN scheme. If a start event for realistic type playback is sensed, the content playing apparatus 1200 executes the realistic type playback program. That is, the content playing apparatus 1200 may control the home network devices to play content corresponding to the location of the user. The home network server 1210 controls the home network devices. The home network server 1210 may drive the home network devices under the control of the content playing apparatus 1200. The home network devices have unique addresses, respectively, and are controlled through these addresses by the home network server 1210. Each home network device plays content stimulating the senses of the user under the control of the content playing apparatus 1200. For example, as illustrated, the home network devices may include an air conditioner 1220, a humidifier 1230, a heater 1240, and a lamp 1250. The content playing apparatus 1200 may control the air conditioner 1220, the humidifier 1230, the heater 1240, and the lamp 1250 according to the location variation of the user to adjust the intensity of wind, peripheral brightness, temperature, and humidity. For example, referring to FIG. 11, when the user is located at 1102, the content playing apparatus 1200 may increase the intensity of wind such that the user experiences the car being near.
That is, the user may be stimulated through the touch sense and may experience being present in the content space.
FIG. 13 is a flowchart illustrating a content playing method according to an embodiment of the present invention. Referring to FIGS. 1 to 13, the controller 190 is initially in an idle state. The idle state may be defined as a state of displaying an image before the realistic type playback step. If the user operates the remote controller 120 in the idle state, the controller 190 senses a start event for realistic type playback. As described previously, if the start event is sensed, the initiator 520 determines the coordinates input from the tracker 510 as the initial location of the user in step 1301. The tracker 510 may track the location of the user even before the realistic type playing step.
The space mapper 530 maps the content space to the actual space based on the determined initial location in step 1302. The virtual viewpoint determinator 540 computes location variation amounts (△xu, △yu, △zu) of the user in step 1303. Next, the virtual viewpoint determinator 540 compares the computed location variation amounts of the user with the MTT in step 1304. When, as a result of the comparison in step 1304, a computed location variation amount of the user is greater than the MTT, the process proceeds to step 1305. In step 1305, the virtual viewpoint determinator 540 determines the location (xc,t+1, yc,t+1, zc,t+1) of the virtual camera, namely the virtual viewpoint, using Equation (3) and transfers it to the content processing controller 550.
The content processing controller 550 controls the content processor 150 based on the received virtual viewpoint in step 1306. That is, the content processing controller 550 controls the content processor 150 to play content corresponding to the received virtual viewpoint. Further, in step 1306, the content processing controller 550 controls the content processor 150 based on the received virtual viewpoint to adjust the direction or the distance of a stereoscopic sound. The content processing controller 550 may also control peripheral devices, such as home network devices, based on the virtual viewpoint to adjust the intensity of wind, temperature, humidity, or brightness.
Next, the content processing controller 550 determines whether a termination event of realistic type playback is sensed in step 1307. If the termination event of realistic type playback is sensed at step 1307, a process for realistic playback is terminated. Conversely, if the termination event is not sensed, the process returns to step 1303.
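An illustrative outline of steps 1301 to 1307, reusing the update_virtual_camera sketch given earlier, is shown below; the tracker, space_mapper, and content_controller objects and their methods are hypothetical stand-ins for the tracker 510, the space mapper 530, and the content processing controller 550, and the default initial camera position is arbitrary.

```python
def realistic_playback_loop(tracker, space_mapper, content_controller,
                            mtt, initial_camera=(0.0, 0.0, 0.0)):
    """Illustrative control loop for realistic type playback (FIG. 13)."""
    # Step 1301: the location at the start event becomes the initial location.
    initial_location = tracker.current_location()
    # Step 1302: map the content space to the actual space (Equation (1) or (2)).
    ratios = space_mapper.map_spaces(initial_location)

    previous = initial_location
    camera = initial_camera
    # Step 1307: repeat until a termination event of realistic type playback is sensed.
    while not content_controller.termination_event():
        current = tracker.current_location()
        # Steps 1303-1305: compute the location variation, compare it with the
        # MTT and, when exceeded, move the virtual camera by Equation (3).
        camera = update_virtual_camera(previous, current, camera, ratios, mtt)
        # Step 1306: play image and sound (and drive peripheral devices)
        # corresponding to the determined virtual viewpoint.
        content_controller.play(camera)
        previous = current
```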
FIG. 14 is a flowchart illustrating a content playing method according to another embodiment of the present invention. Referring to FIG. 14, a content playing method according to another embodiment of the present invention may include steps 1401 to 1407. Because steps 1401, 1402, 1404, 1405, and 1407 correspond to the foregoing steps 1301, 1302, 1304, 1305, and 1307, a description thereof is omitted. In step 1403, the virtual viewpoint determinator 540 calculates the angle variation amount θ of FIG. 9 together with the location variation amounts △xu, △yu, and △zu of the user. In step 1406, the content processing controller 550 plays content corresponding to the received virtual viewpoint, rotates the direction of the virtual viewpoint by the computed angle variation amount θ, and plays content corresponding to the rotated virtual viewpoint.
The content playing method according to an embodiment of the present invention as described above may be implemented in the form of program commands executable by various computer means and recorded in a computer readable recording medium. The computer readable recording medium may include a program command, a data file, and a data structure, individually or in combination. The program command recorded in the recording medium may be specially designed or configured for the present invention, or may be known to and usable by a person having ordinary skill in the computer software field.
The computer readable recording medium includes Magnetic Media such as a hard disk, a floppy disk, or a magnetic tape, Optical Media such as a Compact Disc Read Only Memory (CD-ROM) or a Digital Versatile Disc (DVD), Magneto-Optical Media such as a floptical disk, and hardware devices such as a ROM, a RAM, and a flash memory that store and execute program commands. Further, the program command includes a machine language code created by a compiler and a high-level language code executable by a computer using an interpreter. The foregoing hardware device may be configured to operate as at least one software module to perform the operations of the present invention, and vice versa.
A content playing method and an apparatus thereof according to the present invention have an effect that they may realistically play content stimulating a sense of the user.
Although various embodiments of the present invention have been described in detail herein, many variations and modifications may be made without departing from the spirit and scope of the present invention, as defined by the appended claims.

Claims (18)

  1. A content playing method, the method comprising:
    determining a first location of a user;
    mapping a content space displayed on a display unit to correspond with an actual space in which the user is positioned based on the first determined location;
    determining a virtual viewpoint in the content space corresponding to a second location of the user; and
    playing content corresponding to the determined virtual viewpoint.
  2. The method of claim 1, wherein playing content comprises displaying an image based on the determined virtual viewpoint.
  3. The method of claim 2, wherein displaying an image comprises controlling at least one of brightness, stereoscopic sensation, and depth of the image to display the controlled image.
  4. The method of claim 1, wherein playing content comprises outputting a sound based on the determined virtual viewpoint.
  5. The method of claim 4, wherein outputting a sound comprises controlling at least one of a direction sense and a distance sense to output the controlled sound.
  6. The method of claim 1, wherein the played content includes an image and a sound for stimulating at least a sight sense and a hearing sense of the user.
  7. The method of claim 1, wherein determining a virtual viewpoint comprises:
    computing an amount of location variation of the user; and
    determining the virtual viewpoint corresponding to a changed location of the user when the computed location variation amount is greater than a preset Minimum Transition Threshold (MTT).
  8. The method of claim 1, wherein mapping a content space comprises computing a coordinate transformation value between a coordinate system of the actual space and a coordinate system of the content space.
  9. The method of claim 8, wherein determining a virtual viewpoint comprises determining the virtual viewpoint corresponding to the second location of the user using the coordinate transformation value.
  10. The method of claim 8, wherein mapping a content space comprises:
    setting the coordinate system of the actual space using a horizontal length Display Screen Width (DSW) of a screen of the display unit, a vertical length Display Screen Height (DSH) of the screen, and a straight distance Watching Distance (WD) between the screen and the user;
    setting the coordinate system of the content space using a horizontal length Focal Width (FW) of a focus surface and a vertical length Focal Height (FH) of the focus surface, and a straight distance Focal Length (FL) between the focus surface and the virtual viewpoint; and
    computing the coordinate transformation value using the first location when the first location is determined.
  11. The method of claim 10, wherein setting the coordinate system of the actual space comprises mapping the actual space reduced or enlarged to the virtual space by adding or subtracting an adjustment for at least one of the horizontal length, the vertical length, and the straight distance.
  12. The method of claim 1, further comprising:
    computing an amount of location variation of the user based on the first location of the user;
    applying the computed location variation amount to a trigonometric function to compute an angle variation amount;
    rotating a direction of the virtual viewpoint by the computed angle variation amount; and
    playing content corresponding to the virtual viewpoint the direction of which is rotated.
  13. The method of claim 1, wherein determining a virtual viewpoint comprises:
    tracking a location of the user; and
    determining a fixed location of the user as a first location of the mapping when the location of the user is fixed within a preset error for a preset time as the tracked result.
  14. The method of claim 1, wherein determining a virtual viewpoint comprises determining the tracked location as the first location of the mapping when a start event for realistic type playback is sensed while tracking a location of the user.
  15. The method of claim 1, wherein determining a virtual viewpoint comprises determining the tracked location as the first location of the mapping when a predetermined gesture of the user is sensed while tracking a location of the user.
  16. A content playing apparatus, comprising:
    a content collecting unit for collecting content stimulating senses of a user;
    a content processor for performing a processing operation such that content input from the content collecting unit is played;
    a content playing unit for playing content input from the content collecting unit;
    a sensor for collecting information associated with a location of the user such that the content is played corresponding to the location of the user; and
    a controller for determining a virtual viewpoint in a virtual content space corresponding to the location of the user based on received information from the sensor and controlling such that content corresponding to the determined virtual viewpoint is played.
  17. The apparatus of claim 16, wherein the controller comprises:
    an initiator for determining a first location of a user based on the received information from the sensor;
    a space mapper for mapping a content space displayed on a display unit to an actual space in which the user is positioned based on the first determined location;
    a virtual viewpoint determinator for determining a virtual viewpoint in the content space corresponding to a second location of the user; and
    a content processing controller for controlling the content processor to play content corresponding to the determined virtual viewpoint.
  18. The apparatus of claim 17, further comprising an interface unit connecting with a peripheral device having a content playing function,
    wherein the content processing controller controls the peripheral device through the interface unit to play content corresponding to the virtual viewpoint.
PCT/KR2012/000375 2011-07-18 2012-01-17 Content playing method and apparatus WO2013012146A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP12814483.9A EP2735164A4 (en) 2011-07-18 2012-01-17 Content playing method and apparatus
CN201280035942.2A CN103703772A (en) 2011-07-18 2012-01-17 Content playing method and apparatus

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR10-2011-0070959 2011-07-18
KR20110070959 2011-07-18
KR1020110114883A KR101926477B1 (en) 2011-07-18 2011-11-07 Contents play method and apparatus
KR10-2011-0114883 2011-11-07

Publications (1)

Publication Number Publication Date
WO2013012146A1 true WO2013012146A1 (en) 2013-01-24

Family

ID=47839676

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2012/000375 WO2013012146A1 (en) 2011-07-18 2012-01-17 Content playing method and apparatus

Country Status (5)

Country Link
US (1) US20130023342A1 (en)
EP (1) EP2735164A4 (en)
KR (1) KR101926477B1 (en)
CN (1) CN103703772A (en)
WO (1) WO2013012146A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104937929A (en) * 2013-03-18 2015-09-23 Lg电子株式会社 3D display device and method for controlling the same

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6102944B2 (en) 2012-12-10 2017-03-29 ソニー株式会社 Display control apparatus, display control method, and program
KR101462021B1 (en) * 2013-05-23 2014-11-18 하수호 Method and terminal of providing graphical user interface for generating a sound source
KR101381396B1 (en) * 2013-09-12 2014-04-04 하수호 Multiple viewer video and 3d stereophonic sound player system including stereophonic sound controller and method thereof
US10937187B2 (en) 2013-10-07 2021-03-02 Apple Inc. Method and system for providing position or movement information for controlling at least one function of an environment
CN110223327B (en) 2013-10-07 2023-08-01 苹果公司 Method and system for providing location information or movement information for controlling at least one function of a vehicle
KR101669926B1 (en) * 2014-02-03 2016-11-09 (주)에프엑스기어 User view point related image processing apparatus and method thereof
CN105094299B (en) * 2014-05-14 2018-06-01 三星电子(中国)研发中心 The method and apparatus for controlling electronic device
CN104123003B (en) * 2014-07-18 2017-08-01 北京智谷睿拓技术服务有限公司 Content share method and device
CN104102349B (en) 2014-07-18 2018-04-27 北京智谷睿拓技术服务有限公司 Content share method and device
WO2016126770A2 (en) * 2015-02-03 2016-08-11 Dolby Laboratories Licensing Corporation Selective conference digest
CN104808946A (en) * 2015-04-29 2015-07-29 天脉聚源(北京)传媒科技有限公司 Image playing and controlling method and device
JP6739907B2 (en) * 2015-06-18 2020-08-12 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Device specifying method, device specifying device and program
WO2017082078A1 (en) * 2015-11-11 2017-05-18 ソニー株式会社 Image processing device and image processing method
US9851435B2 (en) 2015-12-14 2017-12-26 Htc Corporation Electronic device and signal generating circuit
CN109963177A (en) * 2017-12-26 2019-07-02 深圳Tcl新技术有限公司 A kind of method, storage medium and the television set of television set adjust automatically viewing angle
KR102059114B1 (en) * 2018-01-31 2019-12-24 옵티머스시스템 주식회사 Image compensation device for matching virtual space and real space and image matching system using the same
JP7140517B2 (en) * 2018-03-09 2022-09-21 キヤノン株式会社 Generation device, generation method performed by generation device, and program
CN111681467B (en) * 2020-06-01 2022-09-23 广东小天才科技有限公司 Vocabulary learning method, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060038880A1 (en) * 2004-08-19 2006-02-23 Microsoft Corporation Stereoscopic image display
US20100201790A1 (en) * 2009-02-11 2010-08-12 Hyeonho Son Method of controlling view of stereoscopic image and stereoscopic image display using the same
KR20110070326A (en) * 2009-12-18 2011-06-24 한국전자통신연구원 Apparatus and method for presenting display of 3d image using head tracking
KR101046259B1 (en) * 2010-10-04 2011-07-04 최규호 Stereoscopic image display apparatus according to eye position

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1186038A (en) * 1997-03-03 1999-03-30 Sega Enterp Ltd Image processor, image processing method, medium and game machine
JP2006025281A (en) * 2004-07-09 2006-01-26 Hitachi Ltd Information source selection system, and method
JP4500632B2 (en) * 2004-09-07 2010-07-14 キヤノン株式会社 Virtual reality presentation apparatus and information processing method
WO2006096776A2 (en) * 2005-03-07 2006-09-14 The University Of Georgia Research Foundation,Inc. Teleportation systems and methods in a virtual environment
US8564532B2 (en) * 2005-12-06 2013-10-22 Naturalpoint, Inc. System and methods for using a movable object to control a computer
US20080252596A1 (en) * 2007-04-10 2008-10-16 Matthew Bell Display Using a Three-Dimensional vision System
US20090141905A1 (en) * 2007-12-03 2009-06-04 David Warhol Navigable audio-based virtual environment
US20090222838A1 (en) * 2008-02-29 2009-09-03 Palm, Inc. Techniques for dynamic contact information
EP2374110A4 (en) * 2008-12-19 2013-06-05 Saab Ab System and method for mixing a scene with a virtual scenario
US8477175B2 (en) * 2009-03-09 2013-07-02 Cisco Technology, Inc. System and method for providing three dimensional imaging in a network environment
CN102045429B (en) * 2009-10-13 2015-01-21 华为终端有限公司 Method and equipment for adjusting displayed content
US20110157322A1 (en) * 2009-12-31 2011-06-30 Broadcom Corporation Controlling a pixel array to support an adaptable light manipulator
US9644989B2 (en) * 2011-06-29 2017-05-09 Telenav, Inc. Navigation system with notification and method of operation thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060038880A1 (en) * 2004-08-19 2006-02-23 Microsoft Corporation Stereoscopic image display
US20100201790A1 (en) * 2009-02-11 2010-08-12 Hyeonho Son Method of controlling view of stereoscopic image and stereoscopic image display using the same
KR20110070326A (en) * 2009-12-18 2011-06-24 한국전자통신연구원 Apparatus and method for presenting display of 3d image using head tracking
KR101046259B1 (en) * 2010-10-04 2011-07-04 최규호 Stereoscopic image display apparatus according to eye position

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2735164A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104937929A (en) * 2013-03-18 2015-09-23 Lg电子株式会社 3D display device and method for controlling the same
US9762896B2 (en) 2013-03-18 2017-09-12 Lg Electronics Inc. 3D display device and method for controlling the same
CN104937929B (en) * 2013-03-18 2018-05-11 Lg 电子株式会社 3D display device and its control method

Also Published As

Publication number Publication date
KR20130010424A (en) 2013-01-28
KR101926477B1 (en) 2018-12-11
CN103703772A (en) 2014-04-02
US20130023342A1 (en) 2013-01-24
EP2735164A4 (en) 2015-04-29
EP2735164A1 (en) 2014-05-28

Similar Documents

Publication Publication Date Title
WO2013012146A1 (en) Content playing method and apparatus
US10853992B1 (en) Systems and methods for displaying a virtual reality model
CN107358007B (en) It controls the method, apparatus of smart home system and calculates readable storage medium storing program for executing
WO2021208648A1 (en) Virtual object adjusting method and apparatus, storage medium and augmented reality device
US20180059783A1 (en) Virtual reality system including social graph
US8963956B2 (en) Location based skins for mixed reality displays
WO2017148294A1 (en) Mobile terminal-based apparatus control method, device, and mobile terminal
WO2010074437A2 (en) Image processing method and apparatus therefor
WO2021136091A1 (en) Flash lamp light supplementation method and electronic device
US9443457B2 (en) Display control device, display control method, and recording medium
KR20140128428A (en) Method and system of providing interactive information
CN104243961A (en) Display system and method of multi-view image
TWI508525B (en) Mobile terminal and method of controlling the operation of the mobile terminal
WO2021147921A1 (en) Image processing method, electronic device and computer-readable storage medium
CN109407821A (en) With the collaborative interactive of virtual reality video
CN107884942A (en) A kind of augmented reality display device
WO2022262839A1 (en) Stereoscopic display method and apparatus for live performance, medium, and system
US11323838B2 (en) Method and apparatus for providing audio content in immersive reality
US11635802B2 (en) Combined light intensity based CMOS and event detection sensor for high speed predictive tracking and latency compensation in virtual and augmented reality HMD systems
CN114615556B (en) Virtual live broadcast enhanced interaction method and device, electronic equipment and storage medium
CN205491078U (en) Intelligent sound box in air forms images
JP2020530218A (en) How to project immersive audiovisual content
KR102574730B1 (en) Method of providing augmented reality TV screen and remote control using AR glass, and apparatus and system therefor
WO2018000610A1 (en) Automatic playing method based on determination of image type, and electronic device
US20230037102A1 (en) Information processing system, information processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12814483

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2012814483

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2012814483

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE