US8605117B2 - Method and apparatus for providing content

Method and apparatus for providing content

Info

Publication number
US8605117B2
US8605117B2
Authority
US
United States
Prior art keywords
content
information
bio
movement
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US12/871,381
Other versions
US20110050707A1 (en)
Inventor
Cory Kim
Jae-Young Lee
Hyung-Jin Seo
Sung-hyun Cho
Jong-eun Yang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Assigned to SAMSUNG ELECTRONICS CO., LTD. Assignors: CHO, SUNG-HYUN; KIM, CORY; LEE, JAE-YOUNG; SEO, HYUNG-JIN; YANG, JONG-EUN (assignment of assignors interest; see document for details).
Publication of US20110050707A1
Application granted
Publication of US8605117B2
Legal status: Active


Classifications

    • G06T 11/60: Editing figures and text; combining figures or text (2D [Two Dimensional] image generation)
    • G06Q 50/10: Services (ICT specially adapted for specific business sectors)
    • A63B 22/00: Exercising apparatus specially adapted for conditioning the cardio-vascular system, for training agility or co-ordination of movements
    • A63B 22/0235: Exercising apparatus with movable endless bands, e.g. treadmills, driven by a motor
    • A63B 24/0062: Monitoring athletic performances, e.g. for determining the work of a user on an exercise apparatus, the completed jogging or cycling distance
    • A63B 71/0619: Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B 2071/065: Visualisation of specific exercise parameters
    • A63B 2220/16: Angular positions
    • A63B 2220/30: Speed
    • A63B 2220/803: Motion sensors
    • A63B 2225/50: Wireless data transmission, e.g. by radio transmitters or telemetry
    • A63B 2230/06: Measuring physiological parameters of the user; heartbeat rate only
    • A63B 2230/75: Measuring physiological parameters of the user; calorie expenditure

Definitions

  • the present invention relates to a method and apparatus for providing content, and more particularly, to a method and apparatus for providing content to a moving user.
  • the present invention provides a method and apparatus for efficiently providing content to a moving user.
  • a method of providing content to a user who moves, the method including obtaining movement information or bio-information about the user; processing content based on the movement information or the bio-information; and outputting the processed content.
  • the movement information may include a speed of movement, a direction of the movement, or a type of movement.
  • the bio-information may include an electrocardiogram, a brain wave, a stress index, a bone density index, a body mass index, the amount of calorie consumption, or body age.
  • the processing operation may include extracting a keyword from text data; and determining a magnification ratio for the keyword based on the movement information or the bio-information, and magnifying the keyword according to the magnification ratio.
  • the processing operation may also include dividing text data into a plurality of block data; and determining a magnification ratio for the plurality of block data based on the movement information or the bio-information, and magnifying the plurality of block data according to the magnification ratio.
  • the processing operation may include selecting a plurality of content items to be output from among content stored in one or more connected devices, based on movement information and/or bio-information; and controlling the selected content items to be sequentially output.
  • a content providing apparatus for providing content to a user in motion, the content providing apparatus including an information obtaining unit for obtaining movement information or bio-information about the user; a content processing unit for processing content based on movement information or bio-information; and an output unit for outputting the processed content.
  • FIG. 1 is a block diagram illustrating a content providing apparatus according to an embodiment of the present invention
  • FIGS. 2A through 2C illustrate a method of providing content to a moving user, according to an embodiment of the present invention
  • FIGS. 3A through 3C illustrate a method of providing content to a moving user, according to another embodiment of the present invention
  • FIGS. 4A through 4C illustrate a method of providing content to a moving user, according to another embodiment of the present invention
  • FIGS. 5A through 5C illustrate a method of providing content to a moving user, according to another embodiment of the present invention
  • FIGS. 6A through 6C illustrate a method of providing content to a moving user, according to another embodiment of the present invention
  • FIG. 7 is a block diagram illustrating a content providing apparatus according to another embodiment of the present invention.
  • FIG. 8 is a flowchart illustrating a method of providing content, according to another embodiment of the present invention.
  • FIG. 9 is a flowchart illustrating a method of providing content, according to another embodiment of the present invention.
  • FIG. 1 is a block diagram illustrating a content providing apparatus 100 according to an embodiment of the present invention.
  • the content providing apparatus 100 may be applied to a case in which content is output to a moving user.
  • the content providing apparatus 100 may be applied to a case in which content is output to a user who is exercising.
  • the content providing apparatus 100 may include an information obtaining unit 110 , a content processing unit 120 , and an output unit 130 .
  • the information obtaining unit 110 obtains movement information or bio-information about a user.
  • the movement information about the user may include any information related to the user's movement such as speed, direction, and type of movement.
  • information about the type of movement indicates how the user moves, and includes information about what exercises the user is performing.
  • the bio-information may include any information related to a physical state of the user, e.g., an electrocardiogram, a brain wave, a stress index, a bone density index, a body mass index, the amount of calorie consumption, body age, or the like.
  • the information obtaining unit 110 may obtain movement information including running speed, pace, running direction, angle of inclination, exercise time, and the like.
  • the information obtaining unit 110 may also obtain bio-information including heart rate, pulse frequency, the amount of calorie consumption, body age, and the like.
  • the content processing unit 120 processes content based on at least one of the movement information and the bio-information so as to allow the user to easily interpret the content. This is useful because a moving user's ability to interpret content is reduced, making the content difficult to understand. In particular, when the user views the content while running at a fast speed, the ability of the user to interpret the content is significantly reduced. Similarly, when the bio-information changes sharply, as when the heart rate of the user rapidly increases, the ability of the user to interpret the content is greatly reduced. In one embodiment of the present invention, the content processing unit 120 processes the content according to at least one of the movement information and the bio-information so that the user may obtain information effectively while remaining in motion.
  • the output unit 130 outputs the processed content.
  • the output unit 130 may adjust the speed at which the content is output, according to movement information and/or bio-information. For example, the output unit 130 may adjust the display speed of video by outputting image frames at a normal rate when the user runs slowly, and at a reduced rate when the user runs rapidly.
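The speed adjustment described above can be sketched as follows. The function name, the threshold, and the proportional scaling rule are illustrative assumptions, not taken from the patent:

```python
def frame_interval(base_interval_s: float, speed_kmh: float,
                   slow_threshold_kmh: float = 8.0) -> float:
    """Return the delay between displayed frames (hypothetical rule):
    at or below the threshold, frames are shown at the normal rate;
    above it, the interval stretches in proportion to the user's speed,
    so a fast runner sees the image play back more slowly."""
    if speed_kmh <= slow_threshold_kmh:
        return base_interval_s
    return base_interval_s * (speed_kmh / slow_threshold_kmh)
```

A display loop would sleep for `frame_interval(...)` between frames, re-reading the speed from the information obtaining unit on each iteration.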
  • FIGS. 2A through 6C illustrate methods of providing content to a moving user, according to several embodiments of the present invention.
  • a user is provided text data via a display device while the user runs on a treadmill.
  • the present embodiment may also apply to other forms of content including a moving picture, a still image, music, or the like.
  • FIGS. 2A through 2C illustrate a method of providing content to a moving user, according to an embodiment of the present invention. According to the method described with reference to FIGS. 2A through 2C , only essential data from among a plurality of data configuring the content is magnified and displayed.
  • FIG. 2A is a flowchart illustrating the method of providing content according to an embodiment of the present invention.
  • in step S210, at least one of movement information and bio-information about the user is obtained.
  • the movement information and the bio-information may be obtained from a sensor or the treadmill itself.
  • in step S220, keywords are extracted from the text data based on at least one of the movement information and the bio-information.
  • the number of keywords extracted may vary with the user's speed. That is, when the user runs slowly, a large number of keywords may be extracted, and when the user runs rapidly, a small number may be extracted. Since a user's content recognition ability is reduced as speed increases, only the more important keywords may be selectively extracted.
  • in step S230, the extracted keywords are magnified.
  • a magnification ratio of the extracted keywords may be based on the movement information and/or the bio-information. For example, when the user runs at a speed of 5 km/h, the magnification ratio may be set to ‘2’, and when the user runs at a speed of 10 km/h, the magnification ratio may be set to ‘4’.
  • in step S240, the magnified keywords are output.
  • the magnified keywords may be sequentially displayed, or the magnified keywords may be simultaneously output. Alternatively, only the magnified keywords may be output, or the complete text content could be output with the keywords magnified.
  • FIGS. 2B and 2C illustrate an example of the text data that is provided according to the method of FIG. 2A .
  • FIG. 2B illustrates text data of a case where the user runs at a speed of 5 km/h
  • FIG. 2C illustrates text data of a case where the user runs at a faster speed of 10 km/h.
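The keyword selection of FIGS. 2A through 2C can be sketched as below. Only the 5 km/h to a ratio of 2 and 10 km/h to a ratio of 4 mapping comes from the examples above; the keyword counts, the third speed band, and the assumption that keywords arrive already ranked are illustrative:

```python
def select_and_magnify_keywords(ranked_keywords, speed_kmh):
    """Given keywords ordered from most to least important (the ranking is
    assumed to be done elsewhere), pick fewer keywords but magnify them more
    as the user's running speed rises."""
    if speed_kmh <= 5:
        count, ratio = 6, 2   # slow: many keywords, modest magnification
    elif speed_kmh <= 10:
        count, ratio = 3, 4   # faster: fewer, larger keywords
    else:
        count, ratio = 1, 6   # sprint: a single, heavily magnified keyword
    return [(kw, ratio) for kw in ranked_keywords[:count]]
```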
  • FIGS. 3A through 3C illustrate a method of providing content to a moving user, according to another embodiment of the present invention. According to the method described in FIGS. 3A through 3C , content is divided into a plurality of pieces of block data, which are magnified and output.
  • FIG. 3A is a flowchart illustrating the method of providing content according to an embodiment of the present invention.
  • in step S310, at least one of movement information and bio-information is obtained.
  • in step S320, text data is divided into a plurality of blocks.
  • a plurality of pieces of data corresponding to the plurality of blocks will be referred to as a “plurality of pieces of block data.”
  • in step S330, the plurality of pieces of block data is magnified.
  • in step S340, the plurality of pieces of magnified block data is sequentially output.
  • in step S350, it is determined whether all of the plurality of pieces of block data has been output; if any block data has not yet been output, step S340 is performed again.
  • the dividing operation or the magnifying and outputting operation may be based on at least one of the movement information and bio-information.
  • for example, the magnification ratio of the plurality of pieces of block data may be determined uniformly, regardless of the running speed of the user, while the number of pieces of block data is determined according to the running speed of the user.
  • alternatively, the number of pieces of block data may be determined uniformly, regardless of the running speed of the user, while the magnification ratio of the plurality of pieces of block data is determined in consideration of the running speed of the user.
  • FIGS. 3B and 3C illustrate an example of data that is provided according to the method described in relation to FIG. 3A .
  • FIG. 3B illustrates a screen of a display device when the user runs at a slow speed of 5 km/h
  • FIG. 3C illustrates a screen of a display device when the user runs at a faster speed of 10 km/h.
  • in FIG. 3B, the text data is divided into two pieces of block data.
  • since the text data comprises four lines, the upper two lines form one piece of block data and the lower two lines form the other.
  • each piece of block data is magnified to twice the size of the original text data.
  • in FIG. 3C, the text data is divided into four pieces of block data.
  • since the text data comprises four lines, each line forms one piece of block data.
  • each piece of block data is magnified to four times the size of the original text data.
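The division scheme of FIGS. 3A through 3C can be sketched as follows, reproducing the four-line example (two blocks at a ratio of 2 for 5 km/h, four blocks at a ratio of 4 for 10 km/h). The speed threshold and the rule tying the ratio to the block count are assumptions:

```python
def divide_into_blocks(lines, speed_kmh):
    """Split the lines of text data into blocks and pick a magnification
    ratio: a faster runner gets more, smaller, more heavily magnified blocks."""
    num_blocks = 2 if speed_kmh <= 5 else 4
    per_block = max(1, len(lines) // num_blocks)
    blocks = [lines[i:i + per_block] for i in range(0, len(lines), per_block)]
    ratio = num_blocks  # in this sketch the ratio simply equals the block count
    return blocks, ratio
```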
  • FIGS. 4A through 4C illustrate a method of providing content to a moving user, according to another embodiment of the present invention.
  • a portion of the content is magnified and output in such a manner that the magnified portion is sequentially changed.
  • FIG. 4A is a flowchart illustrating the method of providing content according to an embodiment of the present invention.
  • in step S410, at least one of movement information and bio-information is obtained.
  • in step S420, at least one letter, numeral, or word in the text data is magnified.
  • hereinafter, the letter or numeral that is magnified the most is referred to as a target letter or target numeral.
  • the sizes of the letters may be gradually decreased as the letters are farther from the target letter.
  • the number of letters to be magnified, the magnification ratio, and the interval by which a magnification target letter is changed may be determined based on at least one of the movement information and the bio-information.
  • for example, when the user runs rapidly, the magnification ratio may be increased, or the interval by which the magnification target letter is changed may be set to a longer duration.
  • conversely, when the user runs slowly, the magnification ratio may be decreased, or the interval by which the magnification target letter is changed may be set to a shorter duration.
  • in step S430, the magnified text data is output.
  • in step S440, it is determined whether the magnified letter is the last letter in the text data. If it is not, step S420 is performed so that a subsequent letter in the text data is magnified and output.
  • FIGS. 4B and 4C illustrate a display device on which text data is output according to the method that is described with reference to FIG. 4A .
  • FIG. 4B illustrates the screen of the display device at, for example, a time of 3 seconds
  • FIG. 4C illustrates the screen of the display device at, for example, a time of 3.1 seconds.
  • in this example, the interval by which the magnified letter is changed is 0.1 seconds, the magnification ratio of the target letter is 2, and the ratio decreases by 0.1 for each letter of distance from the target letter.
  • in FIG. 4B, the number ‘3’ is output magnified to twice its original size.
  • the two letters ‘T’ and ‘P’ adjacent to the number ‘3’ are output magnified slightly less, at 1.9 times their original sizes.
  • the letters ‘A’ and ‘M’, which are two letters away from the number ‘3’, are output magnified at 1.8 times their original sizes.
  • in FIG. 4C, the letter ‘P’ is output magnified to twice its original size.
  • the number ‘3’ and the letter ‘M’ adjacent to the letter ‘P’ are output magnified slightly less, at 1.9 times their original sizes. In this manner, the position of the most magnified letter is changed as the text data is output.
  • in this embodiment, the magnified portion is changed letter by letter, but according to other embodiments, the magnified portion may be changed word by word.
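The sliding-magnification rule of FIGS. 4A through 4C (target letter at twice its size, with the ratio falling by 0.1 per letter of distance) can be sketched as below; the lower bound of 1.0 is an assumption:

```python
def letter_scales(text, target_index, peak=2.0, falloff=0.1, floor=1.0):
    """Per-character magnification ratios: the target letter gets `peak`,
    and the ratio drops by `falloff` for each letter of distance from it,
    never going below `floor`."""
    return [max(floor, peak - falloff * abs(i - target_index))
            for i in range(len(text))]
```

Advancing `target_index` by one every 0.1 seconds reproduces the transition from FIG. 4B to FIG. 4C.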
  • FIGS. 5A through 5C illustrate a method of providing content to a moving user, according to another embodiment of the present invention.
  • the content is gradually zoomed in and output.
  • the present embodiment may be particularly appropriate for providing content in a slide form, like a presentation generated with Microsoft PowerPoint.
  • FIG. 5A is a flowchart illustrating the method of providing content, according to an embodiment of the present invention.
  • in step S510, at least one of movement information and bio-information is obtained.
  • in step S520, a level or a speed at which the content is zoomed in is determined.
  • in this determination, the movement information and the bio-information may be used. For example, the size of the content output when the user runs at a faster speed of 10 km/h may be larger than the size of the content output when the user runs at a slower speed of 5 km/h. As another example, the zoom-in speed may be set to a fast value for a slow runner and a slow value for a fast runner, for a more realistic feel.
  • in step S530, the content is zoomed in and output.
  • FIGS. 5B and 5C illustrate a display device for providing the content according to the method that is described with reference to FIG. 5A .
  • FIG. 5B illustrates the screen of the display device when the user runs at a slower speed of 5 km/h
  • FIG. 5C illustrates the screen of the display device when the user runs at a speed of 10 km/h.
  • the level by which text data is zoomed in is adjusted in such a manner that the text data for a fast runner looks larger than the text data for a slow runner.
  • FIGS. 6A through 6C illustrate a method of providing content to a moving user, according to another embodiment of the present invention. According to the method described with reference to FIGS. 6A through 6C , content stored in at least one connected device is selectively output based on movement information and bio-information.
  • FIG. 6A is a flowchart of the method of providing content, according to an embodiment of the present invention.
  • in step S610, at least one of the movement information and bio-information is obtained.
  • in step S620, a plurality of pieces of content, stored internally or in one or more devices connected via a network, is searched for.
  • in step S630, whether to output each of the plurality of pieces of found content, and the output order, are determined.
  • the output order, and whether to output the content, may be determined based on movement information and/or bio-information. For example, when the user performs a dynamic exercise such as running, urgent content such as e-mail or text messages that the user has not yet checked, or content such as entertainment shows or music that is easily understood, may be output. On the other hand, when the user performs a static exercise such as weight lifting, all types of content may be output according to a determined order, regardless of type.
  • in step S640, the content is sequentially output according to the output order.
  • when the content is output, one of the methods described with reference to FIGS. 2A through 6C may be used.
  • FIGS. 6B and 6C illustrate an example of data that is provided according to the method of FIG. 6A .
  • FIG. 6B illustrates a process in which content is searched for in connected devices.
  • FIG. 6C illustrates a process in which an e-mail, one of the found content items, is output. When the e-mail is output, the method of FIG. 2A may be used.
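The selection rule of FIG. 6A might be sketched as below. The kind labels and the set of "easily understood" kinds are hypothetical; the patent only distinguishes dynamic from static exercise:

```python
def order_content(items, exercise_type):
    """items: (title, kind) pairs gathered from connected devices.
    During a dynamic exercise such as running, keep only urgent or easily
    understood kinds of content; during a static exercise such as weight
    lifting, output everything in the given order."""
    if exercise_type == "dynamic":
        keep = {"email", "text_message", "entertainment", "music"}
        return [item for item in items if item[1] in keep]
    return list(items)
```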
  • the method of providing content to the moving user is not limited to the embodiments of FIGS. 2A through 6C , and thus any content providing method for allowing the moving user to easily interpret the content may be used.
  • FIG. 7 is a block diagram illustrating a content providing apparatus 700 according to another embodiment of the present invention.
  • the content providing apparatus 700 includes an interface unit 710 , an information obtaining unit 720 , a communication module 730 , a content processing unit 740 , and an output unit 750 .
  • the interface unit 710 receives a signal for selecting content to be output or for selecting a type of content.
  • the interface unit 710 may receive the selection signal via a button attached to a remote control or the content providing apparatus 700 .
  • the communication module 730 receives the selected content or the selected type of content from one or more devices connected via a network.
  • the communication module 730 may receive content via a wired network including a local area network (LAN) or a wireless network including a high speed downlink packet access (HSDPA) network, a wireless local area network (WLAN), and the like.
  • the information obtaining unit 720 obtains at least one of movement information and bio-information about the user.
  • the information obtaining unit 720 may obtain movement information and/or bio-information about the user from a sensor, or may obtain movement information or bio-information at a previous time from a memory unit (not shown).
  • the content processing unit 740 processes the content based on the movement information and/or bio-information so as to allow the user to easily understand the content.
  • the output unit 750 outputs the processed content to an external output device (not shown) such as a display device or a speaker.
  • FIG. 8 is a flowchart of a method of providing content, according to another embodiment of the present invention.
  • in step S810, movement information or bio-information about a user is obtained.
  • the movement information may include speed, direction, or a type of movement.
  • the bio-information may include a physical state of the user, e.g., an electrocardiogram, a brain wave, a stress index, a bone density index, a body mass index, the amount of calorie consumption, or body age.
  • in step S820, the content is processed based on at least one of the movement information and the bio-information.
  • in step S820, only essential data from among a plurality of pieces of data configuring the content may be magnified and output; the content may be divided into a plurality of pieces of block data, which in turn may be magnified and output; a portion of the content may be magnified and output in such a manner that the magnified portion is sequentially changed; or the content may be gradually zoomed in and output.
  • in step S830, the processed content is output.
  • FIG. 9 is a flowchart illustrating a method of providing content, according to another embodiment of the present invention.
  • in step S910, at least one of movement information and bio-information about a user is obtained.
  • in step S920, the content is processed based on the movement information and/or the bio-information.
  • in step S930, it is determined whether the movement information or the bio-information has changed. If either has changed, step S910 is performed again so that updated information is obtained. Otherwise, step S940 is performed.
  • in step S940, the processed content is output.
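The flow of FIG. 9 can be sketched as a small loop. The callable parameters (`get_info`, `process`, `output`) are assumptions standing in for the information obtaining, content processing, and output units of the apparatus:

```python
def provide_content(content, get_info, process, output):
    """FIG. 9 as a loop: obtain the information (S910), process the content
    (S920), and if the information has changed (S930) obtain it again and
    re-process; otherwise output the processed content (S940)."""
    info = get_info()                       # S910
    processed = process(content, info)      # S920
    latest = get_info()
    while latest != info:                   # S930: changed -> repeat S910/S920
        info = latest
        processed = process(content, info)
        latest = get_info()
    output(processed)                       # S940
    return processed
```

For example, with speed readings of 5 then 10 km/h, the content is re-processed once before it is output.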
  • Embodiments of the present invention can be written as computer programs and can be implemented in general-purpose digital computers that execute the programs from a non-transitory computer readable recording medium.
  • Examples of the computer readable recording medium include magnetic storage media (e.g., ROM, floppy disks, and hard disks) and optical recording media (e.g., CD-ROMs and DVDs).

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Vascular Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Cardiology (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method and apparatus for providing content to a user who is moving. The method includes obtaining movement information or bio-information about the user, processing content based on the movement information or the bio-information, and outputting the processed content.

Description

PRIORITY
This application claims priority to Korean Patent Application No. 10-2009-0080722, filed on Aug. 28, 2009, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a method and apparatus for providing content, and more particularly, to a method and apparatus for providing content to a moving user.
2. Description of the Related Art
With the development of information and communication technology, various types of content are provided through various routes. With so many available choices, a user is nearly always able to receive desired content, regardless of time and place.
As time management has grown in importance, more and more users want to perform two or more tasks at once. Accordingly, there are increasing cases in which a user consumes content while in motion, for example by watching television (TV) while exercising.
SUMMARY OF THE INVENTION
The present invention provides a method and apparatus for efficiently providing content to a moving user.
According to an aspect of the present invention, there is provided a method of providing content to a user who moves, the method including obtaining movement information or bio-information about the user; processing content based on the movement information or the bio-information; and outputting the processed content.
The movement information may include a speed of movement, a direction of the movement, or a type of movement.
The bio-information may include an electrocardiogram, a brain wave, a stress index, a bone density index, a body mass index, the amount of calorie consumption, or body age.
The processing operation may include extracting a keyword from text data; and determining a magnification ratio for the keyword based on the movement information or the bio-information, and magnifying the keyword according to the magnification ratio.
The processing operation may also include dividing text data into a plurality of block data; and determining a magnification ratio for the plurality of block data based on the movement information or the bio-information, and magnifying the plurality of block data according to the magnification ratio.
The processing operation may include selecting a plurality of pieces of content to be output from among content stored in one or more connected devices, based on the movement information and/or the bio-information; and controlling the plurality of pieces of selected content to be sequentially output.
According to another aspect of the present invention, there is provided a content providing apparatus for providing content to a user in motion, the content providing apparatus including an information obtaining unit for obtaining movement information or bio-information about the user; a content processing unit for processing content based on movement information or bio-information; and an output unit for outputting the processed content.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other features and advantages of the present invention will become more apparent with a detailed description of several embodiments thereof, with reference to the attached drawings in which:
FIG. 1 is a block diagram illustrating a content providing apparatus according to an embodiment of the present invention;
FIGS. 2A through 2C illustrate a method of providing content to a moving user, according to an embodiment of the present invention;
FIGS. 3A through 3C illustrate a method of providing content to a moving user, according to another embodiment of the present invention;
FIGS. 4A through 4C illustrate a method of providing content to a moving user, according to another embodiment of the present invention;
FIGS. 5A through 5C illustrate a method of providing content to a moving user, according to another embodiment of the present invention;
FIGS. 6A through 6C illustrate a method of providing content to a moving user, according to another embodiment of the present invention;
FIG. 7 is a block diagram illustrating a content providing apparatus according to another embodiment of the present invention;
FIG. 8 is a flowchart illustrating a method of providing content, according to another embodiment of the present invention; and
FIG. 9 is a flowchart illustrating a method of providing content, according to another embodiment of the present invention.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
Hereinafter, the present invention will be described in detail by explaining several embodiments of the invention with reference to the attached drawings.
FIG. 1 is a block diagram illustrating a content providing apparatus 100 according to an embodiment of the present invention. The content providing apparatus 100 may be applied to a case in which content is output to a moving user. For example, the content providing apparatus 100 may be applied to a case in which content is output to a user who is exercising.
The content providing apparatus 100 may include an information obtaining unit 110, a content processing unit 120, and an output unit 130.
The information obtaining unit 110 obtains movement information or bio-information about a user.
The movement information about the user may include any information related to the user's movement such as speed, direction, and type of movement. In particular, information about the type of movement indicates how the user moves, and includes information about what exercises the user is performing. The bio-information may include any information related to a physical state of the user, e.g., an electrocardiogram, a brain wave, a stress index, a bone density index, a body mass index, the amount of calorie consumption, body age, or the like.
In the case where a user watches content on a display device that is attached to (or separate from) a treadmill while the user runs on the treadmill, the information obtaining unit 110 may obtain movement information including running speed, pace, running direction, angle of inclination, exercise time, and the like. The information obtaining unit 110 may also obtain bio-information including heart rate, pulse frequency, the amount of calorie consumption, body age, and the like.
The content processing unit 120 processes content based on at least one of the movement information and the bio-information so as to allow the user to easily interpret the content. This is useful because, while the user is in motion, the user's ability to interpret the content is reduced, making the content difficult to understand. In particular, when the user views the content while running at a fast speed, the user's ability to interpret the content is significantly reduced. Similarly, when the bio-information changes sharply, as in the case where the heart rate of the user rapidly increases, the user's ability to interpret the content is greatly reduced. In one embodiment of the present invention, the content processing unit 120 appropriately processes the content according to at least one of the movement information and the bio-information so that the user may obtain information effectively while staying in motion.
The output unit 130 outputs the processed content. The output unit 130 may adjust the speed at which the content is output, according to movement information and/or bio-information. For example, the output unit 130 may adjust the display speed of an image by outputting an image frame at a normal speed when the user runs slowly, and by outputting an image frame at a slow speed when the user runs rapidly.
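The output-speed adjustment described above can be sketched as a small Python function (not part of the patent); the 6 km/h threshold and the two-times slowdown factor are illustrative assumptions, not values taken from the specification.

```python
def frame_interval_ms(base_interval_ms, running_speed_kmh, slow_threshold_kmh=6.0):
    """Return the per-frame display interval for an image sequence.

    A slow runner gets the normal interval; a fast runner gets a longer
    interval, i.e. slower playback. Threshold and slowdown factor are
    illustrative assumptions.
    """
    if running_speed_kmh <= slow_threshold_kmh:
        return base_interval_ms
    # Slow the playback down for a fast runner so frames stay readable.
    return base_interval_ms * 2
```

In practice the mapping from speed (or bio-information such as heart rate) to playback rate could be continuous rather than a single threshold; a step function is used here only for brevity.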
FIGS. 2A through 6C illustrate methods of providing content to a moving user, according to several embodiments of the present invention. With reference to FIGS. 2A through 6C, it is assumed that a user is provided with text data via a display device while running on a treadmill. However, the present embodiments may also apply to other forms of content, including a moving picture, a still image, music, or the like.
In addition, the way content is processed based on movement information will be described with reference to FIGS. 2A through 6C. However, the present embodiment may equivalently apply to a case in which the content is processed according to bio-information of the user.
FIGS. 2A through 2C illustrate a method of providing content to a moving user, according to an embodiment of the present invention. According to the method described with reference to FIGS. 2A through 2C, only essential data from among a plurality of data configuring the content is magnified and displayed.
FIG. 2A is a flowchart illustrating the method of providing content according to an embodiment of the present invention.
In step S210, at least one of movement information and bio-information about the user is obtained. The movement information and the bio-information may be obtained from a sensor or the treadmill itself.
In step S220, keywords are extracted from the text data based on at least one of the movement information and the bio-information. Here, the number of keywords that are extracted may vary. That is, when the user runs slowly, a large number of keywords may be extracted, and when the user runs rapidly, a small number of keywords may be extracted. Since the user's ability to recognize content is reduced as the user's speed increases, only the more important keywords may be selectively extracted.
In step S230, the extracted keywords are magnified. A magnification ratio of the extracted keywords may be based on the movement information and/or the bio-information. For example, when the user runs at a speed of 5 km/h, the magnification ratio may be set to ‘2’, and when the user runs at a speed of 10 km/h, the magnification ratio may be set to ‘4’.
In step S240, the magnified keywords are output. Here, the magnified keywords may be sequentially displayed, or the magnified keywords may be simultaneously output. Alternatively, only the magnified keywords may be output, or the complete text content could be output with the keywords magnified.
FIGS. 2B and 2C illustrate an example of the text data that is provided according to the method of FIG. 2A. FIG. 2B illustrates text data of a case where the user runs at a speed of 5 km/h, and FIG. 2C illustrates text data of a case where the user runs at a faster speed of 10 km/h.
In the case where the user runs at the slower speed of 5 km/h, four keywords corresponding to ‘hello’, ‘meeting’, ‘tomorrow at 3 p.m.’, and ‘meeting’ are extracted, and the extracted keywords are displayed at a magnification twice the size of the original text data. On the other hand, in the case where the user runs at the faster speed of 10 km/h, only two keywords corresponding to ‘tomorrow 3 p.m.’ and ‘meeting’ are extracted, and the extracted keywords are magnified four times larger than the original text data.
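The keyword selection and magnification of FIGS. 2A through 2C can be sketched in Python (not part of the patent). The mapping below mirrors the figures' values (four keywords at 2x at 5 km/h, two keywords at 4x at 10 km/h); the assumption that keywords arrive already sorted by importance, and the 5 km/h cutoff itself, are illustrative.

```python
def magnify_keywords(keywords_by_priority, speed_kmh):
    """Return (keyword, magnification_ratio) pairs to display.

    Fewer keywords and a larger ratio are chosen as speed rises.
    `keywords_by_priority` is assumed sorted, most important first.
    """
    if speed_kmh <= 5:
        count, ratio = 4, 2   # slow runner: more keywords, 2x magnification
    else:
        count, ratio = 2, 4   # fast runner: fewer keywords, 4x magnification
    return [(kw, ratio) for kw in keywords_by_priority[:count]]
```

For example, at 10 km/h only the two highest-priority keywords would be kept, each tagged with a 4x ratio for the renderer.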
FIGS. 3A through 3C illustrate a method of providing content to a moving user, according to another embodiment of the present invention. According to the method described in FIGS. 3A through 3C, content is divided into a plurality of pieces of block data, which are magnified and output.
FIG. 3A is a flowchart illustrating the method of providing content according to an embodiment of the present invention.
In step S310, at least one of movement information and bio-information is obtained.
In step S320, text data is divided into a plurality of blocks. Hereinafter, for convenience of description, a plurality of pieces of data corresponding to the plurality of blocks will be referred to as a “plurality of pieces of block data.”
In step S330, the plurality of pieces of block data is magnified.
In step S340, the plurality of pieces of magnified block data is sequentially output.
In step S350, it is determined whether the plurality of pieces of block data is all output, and if there is block data that is not output, step S340 is performed again.
When the text data is divided or when the plurality of pieces of block data is magnified and output, the dividing operation or the magnifying and outputting operation may be based on at least one of the movement information and bio-information.
While the magnification ratio of the plurality of pieces of block data may be uniformly determined regardless of a running speed of the user, the number of pieces of block data may be determined according to the running speed of the user. Alternatively, the number of pieces of block data may be uniformly determined regardless of the running speed of the user, but the magnification ratio of the plurality of pieces of block data may be determined in consideration of the running speed of the user.
FIGS. 3B and 3C illustrate an example of data that is provided according to the method described in relation to FIG. 3A. FIG. 3B illustrates a screen of a display device when the user runs at a slow speed of 5 km/h, and FIG. 3C illustrates a screen of a display device when the user runs at a faster speed of 10 km/h.
Referring to FIG. 3B, when the user runs at the slower speed, the text data is divided into two pieces of block data. When the text data comprises four lines, the upper two lines form one piece of block data, and the lower two lines form the other. Each piece of block data is magnified to twice the size of the original text data.
Referring to FIG. 3C, when the user runs at the faster speed, the text data is divided into four pieces of block data. When the text data comprises four lines, each line forms one piece of block data. Each piece of block data is magnified to four times the size of the original text data.
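The block division of FIGS. 3B and 3C can be sketched in Python (not part of the patent). Here the block count and magnification ratio both depend on running speed, matching the figures; the 5 km/h cutoff is an illustrative assumption.

```python
def split_into_blocks(lines, speed_kmh):
    """Divide lines of text data into block data.

    Slow runner: two blocks, each magnified 2x.
    Fast runner: one block per line, each magnified 4x.
    The 5 km/h cutoff is an illustrative assumption.
    """
    if speed_kmh <= 5:
        mid = len(lines) // 2
        blocks, ratio = [lines[:mid], lines[mid:]], 2
    else:
        blocks, ratio = [[line] for line in lines], 4
    return blocks, ratio
```

As the text notes, either quantity (block count or ratio) could instead be held fixed while the other varies with speed.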
FIGS. 4A through 4C illustrate a method of providing content to a moving user, according to another embodiment of the present invention.
According to the method described with reference to FIGS. 4A through 4C, a portion in the content is magnified and output in such a manner that a magnified portion is sequentially changed.
FIG. 4A is a flowchart illustrating the method of providing content according to an embodiment of the present invention.
In step S410, at least one of movement information and bio-information is obtained.
In step S420, at least one letter, numeral, or word in the text data is magnified. Here, a target letter (or target numeral) is maximally magnified, and sizes of letters may be gradually decreased as the letters move farther from the target letter.
The number of letters to be magnified, the magnification ratio, and the interval by which a magnification target letter is changed may be determined based on at least one of the movement information and the bio-information. To be more specific, in the case where a user runs at a rapid speed, the magnification ratio may be increased, or the interval by which the magnification target letter is changed may be set to have a longer duration time. On the other hand, where the user runs at a slow speed, the magnification ratio may be decreased, or the interval by which the magnification target letter is changed may be set to a shorter duration time.
In step S430, magnified text data is output.
In step S440, it is determined whether a magnified letter is the last letter in the text data. If it is not, then step S420 is performed so that a subsequent letter in the text data is magnified and output.
FIGS. 4B and 4C illustrate a display device on which text data is output according to the method described with reference to FIG. 4A. FIG. 4B illustrates the screen of the display device at, for example, a time of 3 seconds, and FIG. 4C illustrates the screen at, for example, a time of 3.1 seconds. In these figures, the interval by which the magnified letter is changed is 0.1 seconds, and the magnification ratio of the target letter is 2. For each letter of distance from the target letter, the magnification ratio decreases by 0.1.
Referring to FIG. 4B, at the time of 3 seconds, the number ‘3’ is output, magnified at twice its original size. The two letters ‘T’ and ‘P’ adjacent to the number ‘3’ are output, magnified slightly less than the target letter, by 1.9 times their original sizes. Similarly, letters ‘A’ and ‘M’ that are distant from the number ‘3’ by two letters are output, magnified by a factor slightly less, at 1.8 times their original sizes.
Referring to FIG. 4C, at the later time of 3.1 seconds, a letter ‘P’ is output by being magnified at twice its original size. The number ‘3’ and the letter ‘M’ adjacent to the letter ‘P’ are output, magnified slightly less than the letter ‘P’ by 1.9 times their original sizes. In this manner, a position of the most magnified letter is changed and output.
Referring to FIGS. 4B and 4C, a magnified portion is changed in a unit of letters, but according to other embodiments, the magnified portion may be changed in a unit of words.
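The sliding-focus magnification of FIGS. 4B and 4C can be sketched in Python (not part of the patent), using the example's values: a peak ratio of 2 at the target letter, falling off by 0.1 per letter and never going below the original size.

```python
def letter_scales(text, target_index, peak=2.0, falloff=0.1):
    """Return a magnification ratio for each letter of `text`.

    The target letter gets the peak ratio; each letter of distance from
    it loses `falloff`, clamped at 1.0 (original size). Peak and falloff
    default to the figures' example values.
    """
    return [max(1.0, peak - falloff * abs(i - target_index))
            for i in range(len(text))]
```

Advancing `target_index` by one every 0.1 seconds (or every word, per the last paragraph) reproduces the moving-focus effect; the interval and falloff would themselves be chosen from the movement or bio-information.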
FIGS. 5A through 5C illustrate a method of providing content to a moving user, according to another embodiment of the present invention. According to the method described with reference to FIGS. 5A through 5C, the content is gradually zoomed in and output. The present embodiment may be particularly appropriate for providing content in a slide form, like a presentation generated with Microsoft PowerPoint.
FIG. 5A is a flowchart illustrating the method of providing content, according to an embodiment of the present invention.
In step S510, at least one of movement information and bio-information is obtained.
In step S520, a level or a speed at which the content is zoomed in is determined. The zoom-in level or zoom-in speed may be determined using the movement information and/or the bio-information. For example, content may be output at a larger size when the user runs at a faster speed of 10 km/h than when the user runs at a slower speed of 5 km/h. In another example, the content zoom-in speed may be set to a fast value for a slow runner and a slow value for a fast runner, for a more realistic feel.
In step S530, the content is zoomed-in and output.
FIGS. 5B and 5C illustrate a display device for providing the content according to the method that is described with reference to FIG. 5A. FIG. 5B illustrates the screen of the display device when the user runs at a slower speed of 5 km/h, and FIG. 5C illustrates the screen of the display device when the user runs at a speed of 10 km/h. The level by which text data is zoomed in is adjusted in such a manner that the text data for a fast runner looks larger than the text data for a slow runner.
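The zoom-level determination of step S520 can be sketched in Python (not part of the patent); the linear form and the 0.1-per-km/h slope are illustrative assumptions chosen only so that a faster runner sees larger content.

```python
def zoom_ratio(speed_kmh, base=1.0, per_kmh=0.1):
    """Map running speed to a zoom-in ratio.

    A faster runner gets a larger ratio, so the content looks bigger.
    The linear mapping and its slope are illustrative assumptions.
    """
    return base + per_kmh * speed_kmh
```

Any monotonically increasing mapping from speed (or from bio-information such as heart rate) to zoom level would serve the same purpose.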
FIGS. 6A through 6C illustrate a method of providing content to a moving user, according to another embodiment of the present invention. According to the method described with reference to FIGS. 6A through 6C, content stored in at least one connected device is selectively output based on movement information and bio-information.
FIG. 6A is a flowchart of the method of providing content, according to an embodiment of the present invention.
In step S610, at least one of the movement information and bio-information is obtained.
In step S620, a plurality of pieces of content, which are internally stored or are stored in one or more devices connected via a network, are searched for.
In step S630, whether to output each of the plurality of pieces of found content, and the output order, are determined. Both may be determined based on the movement information and/or the bio-information. For example, in a case where the user performs a dynamic exercise such as running, urgent content, such as an e-mail or a text message that has not been checked by the user, or content that is easily understood by the user, such as entertainment shows or music, may be output. On the other hand, in a case where the user performs a static exercise such as weight lifting, all types of content may be output according to a determined order, regardless of the type of content.
In step S640, the content is sequentially output according to the output order. When the content is output, one of the methods described with reference to FIGS. 2A through 6C may be used.
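The selection logic of step S630 can be sketched in Python (not part of the patent). The category labels and the dynamic/static distinction drawn from a single string are illustrative assumptions standing in for the movement-type information.

```python
def select_content(items, exercise_type):
    """Select content to output based on the type of exercise.

    During a dynamic exercise (e.g. running), only urgent or
    easily-followed items are kept; during a static exercise
    (e.g. weight lifting), everything is output in stored order.
    `items` are dicts with a 'category' key (an assumed schema).
    """
    if exercise_type == 'dynamic':
        easy_or_urgent = ('email', 'text-message', 'entertainment', 'music')
        return [it for it in items if it['category'] in easy_or_urgent]
    return list(items)
```

The returned list would then be fed, one item at a time, to whichever of the rendering methods above (keyword magnification, block magnification, etc.) suits the current movement information.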
FIGS. 6B and 6C illustrate an example of data that is provided according to the method of FIG. 6A. FIG. 6B illustrates a process in which content is searched for in connected devices. FIG. 6C illustrates a process in which an e-mail, which is one of the found content, is output. When the e-mail is output, the method of FIG. 2A is used.
The method of providing content to the moving user is not limited to the embodiments of FIGS. 2A through 6C, and thus any content providing method for allowing the moving user to easily interpret the content may be used.
FIG. 7 is a block diagram illustrating a content providing apparatus 700 according to another embodiment of the present invention.
The content providing apparatus 700 includes an interface unit 710, an information obtaining unit 720, a communication module 730, a content processing unit 740, and an output unit 750.
The interface unit 710 receives a signal for selecting content to be output or for selecting a type of content. The interface unit 710 may receive the selection signal via a button attached to a remote control or the content providing apparatus 700.
The communication module 730 receives the selected content or the selected type of content from one or more devices connected via a network. The communication module 730 may receive content via a wired network including a local area network (LAN) or a wireless network including a high speed downlink packet access (HSDPA) network, a wireless local area network (WLAN), and the like.
The information obtaining unit 720 obtains at least one of movement information and bio-information about the user. The information obtaining unit 720 may obtain movement information and/or bio-information about the user from a sensor, or may obtain movement information or bio-information at a previous time from a memory unit (not shown).
The content processing unit 740 processes the content based on the movement information and/or bio-information so as to allow the user to easily understand the content.
The output unit 750 outputs the processed content to an external output device (not shown) such as a display device or a speaker.
FIG. 8 is a flowchart of a method of providing content, according to another embodiment of the present invention.
In step S810, movement information or bio-information about a user is obtained. The movement information may include speed, direction, or a type of movement. The bio-information may include a physical state of the user, e.g., an electrocardiogram, a brain wave, a stress index, a bone density index, a body mass index, the amount of calorie consumption, or body age.
In step S820, the content is processed based on at least one of the movement information and bio-information. In step S820, only essential data from among a plurality of pieces of data configuring the content may be magnified and output; the content may be divided into a plurality of pieces of block data, which in turn may be magnified and output; a portion of the content may be magnified and then output, and here, the content may be processed and output in such a manner that a magnified portion is sequentially changed; or the content may be gradually zoomed in and output.
In step S830, the processed content is output.
FIG. 9 is a flowchart illustrating a method of providing content, according to another embodiment of the present invention.
In step S910, at least one of movement information and bio-information about a user is obtained.
In step S920, the content is processed based on the movement information and/or the bio-information.
In step S930, it is determined whether the movement information or the bio-information has changed. If either has changed, step S910 is performed so that updated information is obtained. Otherwise, if the movement information and the bio-information have not changed, step S940 is performed.
In step S940, the processed content is output.
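The control flow of FIG. 9 (steps S910 through S940) can be sketched in Python (not part of the patent), under the assumption that successive movement/bio-information samples arrive as an iterable of readings.

```python
def provide_content(content, readings, process, output):
    """Sketch of the FIG. 9 loop.

    Whenever the movement/bio-information changes (S930), it is
    re-obtained and the content re-processed (S910, S920); otherwise
    the processed content is output (S940). `readings`, `process`,
    and `output` are assumed interfaces, not part of the patent.
    """
    readings = iter(readings)
    info = next(readings)                 # S910: obtain information
    processed = process(content, info)    # S920: process content
    for new_info in readings:
        if new_info != info:              # S930: information changed
            info = new_info
            processed = process(content, new_info)
        else:
            output(processed)             # S940: unchanged, so output
```

The loop structure simply ensures that content is never output with stale processing after the user's state has changed.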
Embodiments of the present invention can be written as computer programs and can be implemented in general-use digital computers that execute the programs using a non-transitory computer readable recording medium. Examples of the computer readable recording medium include magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.), optical recording media (e.g., CD-ROMs, or DVDs), etc.
While the present invention has been particularly shown and described with reference to certain embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.

Claims (19)

What is claimed is:
1. A method of providing content to a user that is moving, comprising:
obtaining at least one of movement information about the user and bio-information about the user;
processing, by a processor, content based on the at least one of the movement information and the bio-information; and
outputting the processed content,
wherein processing the content comprises magnifying a portion of the content according to a magnification based on the at least one of the movement information and the bio-information.
2. The method of claim 1, wherein the movement information comprises a speed of movement, a direction of the movement, or a type of movement.
3. The method of claim 1, wherein the bio-information comprises an electrocardiogram, a brain wave, a stress index, a bone density index, a body mass index, the amount of calorie consumption, or user's age.
4. The method of claim 1, wherein the processing comprises:
extracting a keyword from text data of the content; and
determining a magnification ratio for the keyword based on at least one of the movement information and the bio-information, and magnifying the keyword according to the magnification ratio.
5. The method of claim 1, wherein the processing comprises:
dividing text data of the content into a plurality of pieces of block data; and
determining a magnification ratio for each of the plurality of pieces of block data based on the at least one of the movement information and the bio-information, and magnifying each of the plurality of pieces of block data according to the magnification ratio.
6. The method of claim 1, wherein the processing comprises sequentially magnifying one or more letters in text data of the content, based on the movement information or the bio-information.
7. The method of claim 1, wherein the processing comprises determining a speed or a level at which the content is zoomed in, based on the at least one of the movement information and the bio-information.
8. The method of claim 1, wherein the processing comprises searching for content in one or more connected devices.
9. The method of claim 1, wherein the processing comprises:
selecting a plurality of pieces of content to be output from among content stored in one or more connected devices, based on the at least one of the movement information and the bio-information; and
controlling the plurality of pieces of selected content to be sequentially output.
10. The method of claim 1, further comprising receiving an external signal for determining a type of content to be output and a processing scheme for the content.
11. A content providing apparatus for providing content to a user that is moving, comprising:
an information obtaining unit for obtaining at least one of movement information about the user and bio-information about the user;
a content processing unit for processing content based on at least one of the movement information and the bio-information; and
an output unit for outputting the processed content,
wherein the content processing unit magnifies a portion of the content according to a magnification ratio determined based on the at least one of the movement information and the bio-information.
12. The content providing apparatus of claim 11, wherein the movement information comprises a speed of movement, a direction of the movement, or a type of movement.
13. The content providing apparatus of claim 11, wherein the bio-information comprises an electrocardiogram, a brain wave, a stress index, a bone density index, a body mass index, the amount of calorie consumption, or user's age.
14. The content providing apparatus of claim 11, wherein the content processing unit extracts a keyword from text data of the content, and magnifies the keyword based on at least one of the movement information and the bio-information.
15. The content providing apparatus of claim 11, wherein the content processing unit divides text data of the content into a plurality of pieces of block data, determines a magnification ratio of each of the plurality of pieces of block data based on the at least one of the movement information and bio-information, and magnifies the plurality of pieces of block data according to the magnification ratio.
16. The content providing apparatus of claim 11, wherein the content processing unit sequentially magnifies one or more letters in text data of the content based on at least one of the movement information and the bio-information.
17. The content providing apparatus of claim 11, wherein the content processing unit determines a speed or a level at which the content is zoomed in, based on at least one of the movement information and the bio-information.
18. The content providing apparatus of claim 11, wherein the content processing unit searches for content in one or more connected devices.
19. The content providing apparatus of claim 11, wherein the content processing unit selects content to be output from among a plurality of pieces of content stored in one or more connected devices, based on at least one of the movement information and the bio-information.
US12/871,381 2009-08-28 2010-08-30 Method and apparatus for providing content Active 2031-10-19 US8605117B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2009-0080722 2009-08-28
KR1020090080722A KR101635508B1 (en) 2009-08-28 2009-08-28 Method and apparatus for providing of content

Publications (2)

Publication Number Publication Date
US20110050707A1 US20110050707A1 (en) 2011-03-03
US8605117B2 true US8605117B2 (en) 2013-12-10

Family

ID=43624188

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/871,381 Active 2031-10-19 US8605117B2 (en) 2009-08-28 2010-08-30 Method and apparatus for providing content

Country Status (2)

Country Link
US (1) US8605117B2 (en)
KR (1) KR101635508B1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD742390S1 (en) * 2013-02-22 2015-11-03 Samsung Electronics Co., Ltd. Graphic user interface for a display screen or a portion thereof
USD743972S1 (en) * 2013-02-22 2015-11-24 Samsung Electronics Co., Ltd. Graphic user interface for a display screen or a portion thereof

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5885129B2 (en) * 2012-09-11 2016-03-15 カシオ計算機株式会社 Exercise support device, exercise support method, and exercise support program
KR102314644B1 (en) * 2015-06-16 2021-10-19 삼성전자주식회사 System and method for providing information of peripheral device
JP2017037159A (en) * 2015-08-10 2017-02-16 キヤノン株式会社 Image display apparatus, image display method, and program
JP2017068594A (en) * 2015-09-30 2017-04-06 ソニー株式会社 Information processing device, information processing method, and program
EP3358445A4 (en) * 2015-09-30 2019-09-11 Sony Corporation Information processing device, information processing method, and program
US11540095B2 (en) * 2020-05-27 2022-12-27 Goldman Sachs & Co. LLC Projecting content from exercise equipment

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5204946A (en) * 1988-06-17 1993-04-20 Canon Kabushiki Kaisha Mixed text and image data processing
US5289168A (en) * 1990-01-23 1994-02-22 Crosfield Electronics Ltd. Image handling apparatus and controller for selecting display mode
US5586196A (en) * 1991-04-24 1996-12-17 Michael Sussman Digital document magnifier
US5742264A (en) * 1995-01-24 1998-04-21 Matsushita Electric Industrial Co., Ltd. Head-mounted display
US20020057281A1 (en) * 2000-11-10 2002-05-16 Jun Moroo Image display control unit, image display control method, image displaying apparatus, and image display control program recorded computer-readable recording medium
US20060161565A1 (en) * 2005-01-14 2006-07-20 Samsung Electronics Co., Ltd. Method and apparatus for providing user interface for content search
US20060243120A1 (en) * 2005-03-25 2006-11-02 Sony Corporation Content searching method, content list searching method, content searching apparatus, and searching server
US7224282B2 (en) * 2003-06-30 2007-05-29 Sony Corporation Control apparatus and method for controlling an environment based on bio-information and environment information
US7290212B2 (en) * 2001-03-30 2007-10-30 Fujitsu Limited Program and method for displaying a radar chart
US20070273714A1 (en) * 2006-05-23 2007-11-29 Apple Computer, Inc. Portable media device with power-managed display
US20080068335A1 (en) * 2001-01-05 2008-03-20 Sony Corporation Information processing device
US20090024415A1 (en) * 2007-07-16 2009-01-22 Alpert Alan I Device and method for medical facility biometric patient intake and physiological measurements
US20090040231A1 (en) * 2007-08-06 2009-02-12 Sony Corporation Information processing apparatus, system, and method thereof
US7544880B2 (en) * 2003-11-20 2009-06-09 Sony Corporation Playback mode control device and playback mode control method
US7548415B2 (en) * 2004-06-01 2009-06-16 Kim Si-Han Portable display device
US20100188426A1 (en) * 2009-01-27 2010-07-29 Kenta Ohmori Display apparatus, display control method, and display control program
US8116576B2 (en) * 2006-03-03 2012-02-14 Panasonic Corporation Image processing method and image processing device for reconstructing a high-resolution picture from a captured low-resolution picture

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20070066574A (en) * 2005-12-22 2007-06-27 주식회사 팬택 Method and mobile communication terminal for adjusting size of displayed text according to distance from user eyes
KR100912123B1 (en) * 2009-03-12 2009-08-13 (주)이랜서 Device and method automatically selecting and playing of music corresponding to physical information

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD742390S1 (en) * 2013-02-22 2015-11-03 Samsung Electronics Co., Ltd. Graphic user interface for a display screen or a portion thereof
USD743972S1 (en) * 2013-02-22 2015-11-24 Samsung Electronics Co., Ltd. Graphic user interface for a display screen or a portion thereof

Also Published As

Publication number Publication date
US20110050707A1 (en) 2011-03-03
KR20110023103A (en) 2011-03-08
KR101635508B1 (en) 2016-07-04

Similar Documents

Publication Publication Date Title
US8605117B2 (en) Method and apparatus for providing content
US10839954B2 (en) Dynamic exercise content
US7181692B2 (en) Method for the auditory navigation of text
US5799267A (en) Phonic engine
KR101454950B1 (en) Deep tag cloud associated with streaming media
US7305624B1 (en) Method for limiting Internet access
WO2018049979A1 (en) Animation synthesis method and device
US20110243452A1 (en) Electronic apparatus, image processing method, and program
JP5906843B2 (en) Keyword detection apparatus, control method and control program therefor, and display device
CN111695422B (en) Video tag acquisition method and device, storage medium and server
CN113035199B (en) Audio processing method, device, equipment and readable storage medium
JP2006244028A (en) Information exhibition device and information exhibition program
EP3940551A1 (en) Method and apparatus for generating weather forecast video, electronic device, and storage medium
JP7482620B2 (en) DATA GENERATION METHOD, DATA DISPLAY METHOD, DATA GENERATION DEVICE, AND DATA DISPLAY SYSTEM
JP2022541832A (en) Method and apparatus for retrieving images
US7737981B2 (en) Information processing apparatus
US20220239969A1 (en) Methods and apparatus for live text-based conversation between small and large groups
CN110767201A (en) Score generation method, storage medium and terminal equipment
WO2019073668A1 (en) Information processing device, information processing method, and program
US20180108342A1 (en) Low-dimensional real-time concatenative speech synthesizer
US11134300B2 (en) Information processing device
JP2021163292A (en) Method for presenting story development to user, story development presenting device, computer program thereof, story development analysis method, story development analyzer, and computer program thereof
WO2019026396A1 (en) Information processing device, information processing method, and program
CN107609018B (en) Search result presenting method and device and terminal equipment
KR102479023B1 (en) Apparatus, method and program for providing foreign language learning service

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KIM, CORY;LEE, JAE-YOUNG;SEO, HYUNG-JIN;AND OTHERS;REEL/FRAME:025123/0949

Effective date: 20100827

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8