WO2015046673A1 - Head mounted display and method of controlling the same - Google Patents

Info

Publication number
WO2015046673A1
Authority
WO
WIPO (PCT)
Prior art keywords
food
information regarding
motion
ingestion
head mounted
Prior art date
Application number
PCT/KR2014/000392
Other languages
French (fr)
Inventor
Yongsin Kim
Doyoung Lee
Hyorim Park
Original Assignee
Lg Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR20130116376A external-priority patent/KR20150037108A/en
Application filed by Lg Electronics Inc. filed Critical Lg Electronics Inc.
Publication of WO2015046673A1 publication Critical patent/WO2015046673A1/en

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems

Definitions

  • an object of the present disclosure is to provide a head mounted display, which displays information regarding food that a user will ingest as augmented reality information, and a method of controlling the same.
  • FIGs. 7 to 9 are views showing food information regarding an object included in food
  • the camera unit 110 may be integrated with the sensor unit 140 so as to be included as a single unit in the head mounted display 300.
  • the sensor unit 140 may sense a user motion of ingesting food (hereinafter referred to as “food ingestion motion”).
  • the head mounted display 300 may acquire information regarding the position of the head mounted display 300.
  • the processor of the head mounted display 300 may control the camera unit to capture images of the hamburger 2 and the pizza 4 without capturing an image of the piece of cake 6.
  • the sensor unit may include at least one of a sound sensor, a vibration sensor, and a muscle movement sensor, which serve to sense a user motion of ingesting food, without being limited thereto.
  • the FSR sensor may measure a resistance that decreases as the force applied to the sensor surface increases, thereby detecting a signal indicating muscle movement.
  • objects of the hamburger 2 may be a bun, a patty, and a vegetable.
  • the head mounted display 300 may provide the user with the current intake of food that the user ingests in real time, which may assist the user in easily and conveniently regulating portion size.
  • the processor of the head mounted display 300 may display total result information 700 including at least one of information 710 regarding the total intake of food, total result information 720 regarding the food ingestion motion, and recommended information 730 regarding the food ingestion motion.
  • the recommended information 730 regarding the food ingestion motion may include at least one of the recommended number of chews, recommended chewing direction, recommended chewing intensity, deficient components, excessive components, and recommended food.
  • the head mounted display 300 may control the camera unit 110 to capture an image of the hamburger 2 only when the hamburger 2 is present within a reference distance from the head mounted display 300.
  • the head mounted display 300 displays information regarding the user motion of ingesting the hamburger 2 based on the sensed user motion of ingesting the hamburger 2 in real time, the user may view various information regarding his/her motion of ingesting the hamburger 2.
  • displayed information 500 regarding the user motion of ingesting the hamburger 2 may include the number of times of chewing the hamburger 2, and the direction of chewing the hamburger 2, for example.
  • the head mounted display and the method of controlling the same may be implemented as code that may be written on a processor readable recording medium and thus read by a processor provided in a network device.
  • the processor readable recording medium may be any type of recording device in which data is stored in a processor readable manner. Examples of the processor readable recording medium may include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disc, and an optical data storage device.
  • the processor readable recording medium includes a carrier wave (e.g., data transmission over the Internet).
  • the processor readable recording medium may be distributed over a plurality of computer systems connected to a network so that processor readable code is written thereto and executed therefrom in a decentralized manner.
  • the user may easily and conveniently acquire information regarding food that the user ingests and information regarding eating habits in real time while the user eats food.
  • displaying recommended information related to the user motion of ingesting food may assist the user in easily and conveniently controlling portion size and improving eating habits.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

Disclosed are a head mounted display, which provides information regarding food that a user ingests and information regarding eating habits, and a method of controlling the same. The head mounted display includes a camera unit configured to capture an image of food that a user will ingest, a display unit configured to display augmented reality information related to the food, a communication unit configured to transmit and receive data, a sensor unit configured to sense a user ingestion motion of the food, and a processor configured to control the camera unit, display unit, communication unit, and sensor unit. The processor is configured to acquire information regarding the captured image of food, to acquire and display augmented reality information based on the food information, and to display at least one of information regarding the intake of food and information regarding the food ingestion motion based on the sensed food ingestion motion.

Description

HEAD MOUNTED DISPLAY AND METHOD OF CONTROLLING THE SAME
The present disclosure relates to a head mounted display, and more particularly to a head mounted display, which may provide information regarding food that a user ingests and information regarding eating habits, and a method of controlling the same.
The recent westernization of Korean food culture is driving an increase in the number of obese patients, and in particular of extremely obese patients.
Accordingly, for the treatment and prevention of obesity, the importance of low-calorie food and portion control has been emphasized, and in recent years restaurants specializing in healthy, balanced diets have appeared to support diet therapy and control of portion size.
In particular, control of portion size is necessary to prevent modern people who are liable to suffer from lack of exercise from becoming obese.
That is, modern people, who tend not to make time for exercise and thus suffer from a lack of exercise, are liable to become obese if they eat large portions despite low physical activity levels.
Moreover, modern menus containing high calorie foods are one of the main causes of obesity so long as portion size is not appropriately controlled.
In addition, modern people tend to maintain bad eating habits due to a paucity of information regarding eating habits, which may often threaten health.
For instance, in the case of a person who habitually chews food on only one side of the mouth, facial bilateral symmetry may deteriorate abnormally because the jawbone or jaw muscles develop asymmetrically, which may cause serious health problems later.
Accordingly, modern people must inconveniently search for and record information regarding ingested foods one by one to control portion size, and must visit related hospitals to acquire information regarding their eating habits, etc.
Such difficulty in controlling portion size and eating habits increases the probability of diet failure, with a negative effect on the health of modern people.
For this reason, there is a demand for a system which may easily and conveniently provide modern people with information regarding food that people ingest in real time as well as information regarding eating habits even while people eat food.
According to one embodiment, an object of the present disclosure is to provide a head mounted display, which displays information regarding food that a user will ingest as augmented reality information, and a method of controlling the same.
According to another embodiment, an object of the present disclosure is to provide a head mounted display, which senses a user motion of ingesting food, and displays information regarding the intake of food and information regarding the user motion of ingesting food, and a method of controlling the same.
According to a further embodiment, an object of the present disclosure is to provide a head mounted display, which displays recommended information in relation to a user motion of ingesting food, and a method of controlling the same.
To achieve these objects and other advantages and in accordance with the purpose of the disclosure, as embodied and broadly described herein, a head mounted display includes a camera unit configured to capture an image of food that a user will ingest, a display unit configured to display augmented reality information related to the food, a communication unit configured to transmit and receive data, a sensor unit configured to sense a user ingestion motion of the food, and a processor configured to control the camera unit, the display unit, the communication unit, and the sensor unit, wherein the processor is configured to acquire information regarding the captured image of food, wherein the processor is configured to acquire and display augmented reality information based on the food information, and wherein the processor is configured to display at least one of information regarding the intake of food and information regarding the food ingestion motion based on the sensed food ingestion motion.
Here, the sensed food ingestion motion may include a distance variation between the head mounted display and the food.
In some cases, the sensed food ingestion motion may include positioning of the food within a reference distance from the head mounted display.
In another case, the sensed food ingestion motion may include a distance variation between the head mounted display and the food, and positioning of the food within a reference distance from the head mounted display.
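The combined case above, a distance variation together with positioning within the reference distance, can be illustrated with a short sketch. The threshold value and function names below are hypothetical, not taken from the disclosure:

```python
# Hypothetical sketch: recognize a food ingestion motion from successive
# distance samples between the head mounted display and the food.
REFERENCE_DISTANCE_M = 0.25  # assumed threshold; the disclosure fixes no value

def is_ingestion_motion(distances, reference=REFERENCE_DISTANCE_M):
    """Return True if the food both moved toward the display (a distance
    variation) and ended up within the reference distance."""
    if len(distances) < 2:
        return False
    approaching = distances[-1] < distances[0]   # overall distance decreased
    within_reference = distances[-1] <= reference
    return approaching and within_reference
```

For example, a sample sequence of 0.6 m, 0.45 m, 0.2 m would satisfy both conditions, while food that remains beyond the reference distance would not.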
The processor may be configured to capture an image of the food if the entire food is detected within a view angle range of the head mounted display and the food is detected within a reference distance from the head mounted display.
The processor may be configured to detect at least one object included in the captured image of food, and configured to query for information regarding the detected object.
Here, the information regarding the object may include at least one of the kind of the object, the caloric content of the object, the weight of the object, and components of the object.
The food information may include at least one of the kind of the food, the caloric content of the food, the weight of the food, and components of the food.
The displayed information regarding the food ingestion motion may include at least one of the number of times of chewing the food, the chewing direction, and the chewing intensity, and may further include the number of times of swallowing the food.
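As a rough illustration of how the number of chews might be derived from a sound or muscle-movement signal, the following hypothetical sketch counts rising threshold crossings in a normalized sensor trace; the threshold and the normalization are assumptions, not part of the disclosure:

```python
def count_chews(signal, threshold=0.5):
    """Count chewing events as rising threshold crossings in a normalized
    muscle-activity trace (e.g., EMG or FSR amplitude per sample)."""
    chews = 0
    above = False
    for sample in signal:
        if sample >= threshold and not above:
            chews += 1  # a new burst of muscle activity begins
        above = sample >= threshold
    return chews
```

A real implementation would additionally debounce the signal and distinguish chewing from talking, which this sketch does not attempt.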
After displaying the information regarding the food ingestion motion based on the sensed food ingestion motion, the processor may be configured to capture an image of food currently remaining after ingestion if the remaining food is detected within a view angle range of the head mounted display, configured to compare the captured image of the remaining food with an initially captured image of food to analyze a current intake of food, and configured to extract and display information regarding the intake of the food based on the current intake of food.
The information regarding the intake of food may include at least one of the kind of ingested food, the caloric content of the ingested food, the weight of the ingested food, and components of the ingested food.
Upon analyzing the current intake of ingested food, the processor may be configured to compare the captured image of the remaining food with the initially captured image of food in terms of the size to calculate a difference therebetween, and configured to calculate the intake of food corresponding to the calculated difference.
Upon extracting the information regarding the intake of food, the processor may be configured to calculate caloric content, weight, and component values with regard to the intake of food based on caloric content, weight, and component values with regard to the initially captured image of food.
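The proportional calculation described above can be sketched as follows. The use of image area as the size measure and the field names are illustrative assumptions:

```python
def estimate_intake(initial_area, remaining_area, food_info):
    """Estimate intake from the fractional reduction in apparent food size,
    then scale the initially acquired nutrition values by that fraction.
    food_info: dict of per-food values, e.g. calories and weight."""
    eaten_fraction = max(0.0, (initial_area - remaining_area) / initial_area)
    return {key: value * eaten_fraction for key, value in food_info.items()}
```

For example, if the remaining food covers 40% of its initial image area, 60% of the initially acquired caloric content, weight, and component values would be attributed to the intake.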
The processor may be configured to store information regarding the intake of food and information regarding the food ingestion motion in real time based on the sensed food ingestion motion, configured to calculate a total intake of food and total result values with regard to the food ingestion motion based on the stored information regarding the intake of food and the stored information regarding the food ingestion motion, configured to extract recommended information regarding the food ingestion motion based on the calculated total intake of food and the calculated total result values with regard to the food ingestion motion, and configured to display information regarding the total intake of food, total result information regarding the food ingestion motion, and the recommended information regarding the food ingestion motion.
The processor may be configured to calculate the total intake of food and the total result values with regard to the food ingestion motion when a sensing signal indicating the food ingestion motion is not received from the sensor unit for a given time.
The recommended information regarding the food ingestion motion may include at least one of the recommended number of times of chewing the food, recommended chewing direction, recommended chewing intensity, deficient components, excessive components, and recommended food.
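The idle-timeout behaviour and the assembly of total and recommended information might be sketched as below. The timeout value, the chew target of 20, and all names are illustrative assumptions rather than values from the disclosure:

```python
import time

IDLE_TIMEOUT_S = 60.0  # assumed "given time" with no sensing signal

class IngestionSession:
    """Accumulates per-bite records; when no sensing signal arrives for
    IDLE_TIMEOUT_S, totals and a recommendation are computed (hypothetical)."""

    def __init__(self):
        self.intakes = []   # calories attributed to each sensed bite
        self.chews = []     # chew count per sensed bite
        self.last_signal = time.monotonic()

    def on_ingestion_signal(self, calories, chew_count):
        self.intakes.append(calories)
        self.chews.append(chew_count)
        self.last_signal = time.monotonic()

    def maybe_finalize(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_signal < IDLE_TIMEOUT_S:
            return None  # meal still in progress
        total = sum(self.intakes)
        avg_chews = sum(self.chews) / len(self.chews) if self.chews else 0
        # Compare against an assumed target of 20 chews per bite.
        recommended = "chew more" if avg_chews < 20 else "chewing OK"
        return {"total_calories": total, "avg_chews": avg_chews,
                "recommendation": recommended}
```

A session object of this kind would be polled periodically; only once the sensor has been quiet for the full timeout does it return the total result information.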
The sensor unit may include at least one of a sound sensor, a vibration sensor, and a muscle movement sensor, and the muscle movement sensor may include at least one of an Electromyography (EMG) sensor and a Force Sensing Resistor (FSR) sensor.
In accordance with another aspect of the present disclosure, a method of controlling a head mounted display including a sensor unit configured to sense a food ingestion motion, includes capturing an image of food that a user will ingest, acquiring information regarding the captured image of food, acquiring and displaying augmented reality information based on the food information, sensing the food ingestion motion, and displaying at least one of information regarding the intake of food and information regarding the food ingestion motion based on the sensed food ingestion motion.
Here, the method may further include calculating a total intake of food and total result values with regard to the food ingestion motion if a sensing signal indicating the food ingestion motion is not received from the sensor unit for a given time, extracting recommended information regarding the food ingestion motion based on the calculated total intake of food and the calculated total result values with regard to the food ingestion motion, and displaying information regarding the total intake of food, total result information regarding the food ingestion motion, and the recommended information regarding the food ingestion motion.
According to one embodiment, as a result of sensing a user motion of ingesting food, and displaying information regarding food that the user ingests, information regarding the intake of food, and information regarding the user motion of ingesting food, the user may easily and conveniently acquire information regarding food that the user ingests and information regarding eating habits in real time while the user eats food.
In addition, displaying recommended information related to the user motion of ingesting food may assist the user in easily and conveniently controlling portion size and improving eating habits.
FIG. 1 is a block diagram showing a configuration of a head mounted display according to the present disclosure;
FIG. 2 is a view showing a view angle range of a head mounted display according to the present disclosure;
FIGs. 3 to 5 are views showing image capture of food located within a view angle range of a head mounted display;
FIG. 6 is a view showing information regarding food that a user will ingest;
FIGs. 7 to 9 are views showing food information regarding an object included in food;
FIG. 10 is a view showing information regarding a user motion of ingesting food;
FIGs. 11 and 12 are views showing information regarding the intake of food;
FIG. 13 is a view showing total information regarding food that the user has ingested;
FIGs. 14 and 15 are flowcharts explaining a method of providing food ingestion information of a head mounted display according to the present disclosure; and
FIGs. 16 to 28 are views showing an embodiment in which a user uses a head mounted display according to the present disclosure.
Although the terms used in the following description are selected, as much as possible, from general terms that are widely used at present while taking into consideration the functions obtained in accordance with the embodiments, these terms may be replaced by other terms based on the intentions of those skilled in the art, customs, the emergence of new technologies, or the like. Also, in particular cases, terms arbitrarily selected by the applicant may be used; the meanings of such terms are described in the corresponding parts of the disclosure. Accordingly, it should be noted that the terms used herein should be construed based on their practical meanings and the whole content of this specification, rather than simply on the names of the terms.
Moreover, although the embodiments will be described herein in detail with reference to the accompanying drawings and content described in the accompanying drawings, it should be understood that the disclosure is not limited to or restricted by the embodiments.
FIG. 1 is a block diagram showing a configuration of a head mounted display according to the present disclosure. It is noted that FIG. 1 shows one embodiment and some constituent modules may be omitted or new constituent modules may be added by those skilled in the art as necessary.
As exemplarily shown in FIG. 1, the head mounted display, designated by reference numeral 300, according to one embodiment may include a camera unit 110, a display unit 120, a communication unit 130, a sensor unit 140, and a processor 150.
First, the camera unit 110 may capture an image around the head mounted display 300. In one example, the camera unit 110 may capture an image of food that a user will ingest.
For instance, the camera unit 110 may capture an image of food within a predetermined range (hereinafter referred to as a ‘view angle range’) corresponding to the view field of the user who wears the head mounted display 300. Then, the camera unit 110 may transmit a captured result to the processor 150.
In addition, the camera unit 110 may include a stereoscopic camera.
For instance, the camera unit 110 may include at least two lenses, which are spaced apart from each other by a given distance to capture the same subject at the same time.
Here, the at least two lenses may be linked to each other in terms of focus adjustment, exposure adjustment, and shutter operation.
As such, the image captured via the camera unit 110 of the head mounted display 300 may include a stereoscopic image (e.g., a 3D image).
The camera unit 110 may be given as a separate unit included in the head mounted display 300.
In some cases, the camera unit 110 may be integrated with the sensor unit 140 so as to be included as a single unit in the head mounted display 300.
The display unit 120 may display augmented reality information related to food on a display screen.
In this case, the display unit 120 may output an image based on content executed in the processor 150, or a control instruction of the processor 150.
Assuming that the camera unit 110 includes a stereoscopic camera, the display unit 120 may include a component required to display a stereoscopic image to allow the user, i.e. the wearer of the head mounted display 300 to view a stereoscopic image.
Then, the display unit 120 may display augmented reality information related to the food image captured by the camera unit 110. The display unit 120 may receive the augmented reality information from an external server, or the augmented reality information may be previously stored in the head mounted display 300.
The communication unit 130 may implement communication with an external server or an internal storage device using various protocols to receive and/or transmit data.
In addition, the communication unit 130 may be connected to a network in a wireless or wired manner to receive and/or transmit digital data, such as food information, augmented reality information, etc.
The sensor unit 140 may sense the surrounding environment of the head mounted display 300 using at least one sensor mounted to the head mounted display 300, and transmit a sensed result in the form of a signal to the processor 150.
In one example, the sensor unit 140 may sense a user motion of ingesting food (hereinafter referred to as “food ingestion motion”).
In some cases, the sensor unit 140 may sense a user input, and transmit an input signal depending on a sensed result to the processor 150.
Accordingly, the sensor unit 140 may include at least one sensing means.
In one embodiment, the at least one sensing means may include a gravity sensor, geomagnetic sensor, motion sensor, gyro sensor, accelerometer, infrared sensor, inclination sensor, brightness sensor, height sensor, olfactory sensor, temperature sensor, depth sensor, pressure sensor, bending sensor, audio sensor, video sensor, Global Positioning System (GPS) sensor, grip sensor, touch sensor, etc.
In addition, the sensor unit 140 may include at least one of a sound sensor, a vibration sensor, and a muscle movement sensor, which serve to sense a user motion of ingesting food.
Here, the muscle movement sensor may include at least one of an Electromyography (EMG) sensor and a Force Sensing Resistor (FSR) sensor, without being limited thereto.
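For illustration only: many FSR devices exhibit a conductance (1/R) that grows roughly linearly with applied force, so a crude force estimate, and hence a jaw-muscle movement flag, could be derived as below. The calibration constant and threshold are assumptions, not taken from the disclosure:

```python
def fsr_force_estimate(resistance_ohms, k=1.0e5):
    """Rough force estimate from FSR resistance: assuming conductance (1/R)
    grows roughly linearly with applied force, F ~ k / R, where k is a
    device calibration constant (hypothetical value)."""
    if resistance_ohms <= 0:
        raise ValueError("resistance must be positive")
    return k / resistance_ohms

def muscle_active(resistance_ohms, force_threshold=2.0):
    """Flag jaw-muscle movement when the estimated force exceeds a threshold."""
    return fsr_force_estimate(resistance_ohms) >= force_threshold
```

In practice the force-resistance curve of a real FSR is nonlinear and part-specific, so k and the threshold would come from per-device calibration.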
In some cases, if the sensor unit 140 senses the position or direction of the head mounted display 300, the processor 150 may process information regarding the position or direction of the head mounted display 300.
The sensor unit 140 is a generic term for the above enumerated various sensing means. The sensor unit 140 may sense various user inputs and the environment of the head mounted display 300, and transmit a sensed result to the processor 150 to allow the processor 150 to implement a corresponding operation.
The processor 150 may process internal data of the head mounted display 300, and control the aforementioned respective units of the head mounted display 300 as well as transmission and/or reception of data between the units.
Transmission and/or reception of data or various signals including control signals between the units may be implemented via a bus.
For instance, the processor 150 may control the camera unit 110, the display unit 120, the communication unit 130, and the sensor unit 140.
Here, the processor 150 may query an external server for information regarding food of a captured image, and receive a response including the queried food information from the external server. As such, the processor 150 may acquire information regarding food of a captured image.
In some cases, the processor 150 may acquire information regarding food of a captured image from an internal storage device.
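The query-with-fallback flow described in the two preceding paragraphs could be sketched as follows; the local database contents, the food key, and the server callable are purely illustrative:

```python
# Hypothetical fallback store standing in for the internal storage device.
LOCAL_FOOD_DB = {"hamburger": {"calories": 550, "weight_g": 220}}

def acquire_food_info(food_name, query_server=None):
    """Try the external server first; on failure or an empty response,
    fall back to locally stored food information."""
    if query_server is not None:
        try:
            info = query_server(food_name)
            if info:
                return info
        except OSError:
            pass  # network failure: fall through to local storage
    return LOCAL_FOOD_DB.get(food_name)
```

Here `query_server` is any callable that takes a food identifier and returns the queried food information, standing in for the communication unit's request to the external server.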
In addition, the processor 150 may acquire and display augmented reality information based on food information, and display at least one of information regarding the intake of food and information regarding the food ingestion motion based on the sensed food ingestion motion.
Although not shown in FIG. 1, the head mounted display 300 may further include a storage unit, an audio input/output unit, or a power unit.
Here, the storage unit (not shown) may store various digital data, such as audio, pictures, moving images, applications, etc. The storage unit includes various digital data storage spaces, such as a flash memory, a Random Access Memory (RAM), a Solid State Drive (SSD), etc.
The storage unit may temporarily store data received from the external server via the communication unit 130.
In this case, the storage unit may serve as a buffer to output data received from the external server to the head mounted display 300.
In some cases, the storage unit may be included in the head mounted display 300, or may be separately connected to the head mounted display 300.
The audio output unit (not shown) includes audio output means, such as a speaker, earphone, etc.
In addition, the audio output unit may output voice based on a control instruction of the processor 150 or content executed by the processor 150.
In this case, the audio output unit may be included in the head mounted display 300, or may be separately connected to the head mounted display 300.
The power unit (not shown) is a power source connected to a battery inside the device or an external power source, and may supply power to the head mounted display 300.
FIG. 1 is a block diagram of the head mounted display 300 according to an embodiment, and the separately shown blocks logically distinguish the elements of the head mounted display 300.
Accordingly, the elements of the above-described head mounted display 300 may be mounted as a single chip or as a plurality of chips, based on the device design.
FIG. 2 is a view showing a view angle range of the head mounted display according to the present disclosure.
As exemplarily shown in FIG. 2, the camera unit of the head mounted display 300 may capture an image of food within a view angle range 30.
Here, the direction R of the head mounted display 300 refers to a forward direction of the head mounted display 300 in which an image of food is captured.
For instance, the direction of the head mounted display 300 may be a forward direction of the user who wears the head mounted display 300.
As such, the head mounted display 300 may generate direction information θ regarding food based on information regarding the direction of the head mounted display 300.
That is, the direction information θ regarding food indicates how far the food deviates from the direction R of the head mounted display 300, and is used to judge whether or not the food is located within the view angle range 30 of the head mounted display 300.
In addition, the head mounted display 300 may acquire information regarding the position of the head mounted display 300.
Accordingly, the head mounted display 300 may acquire the direction information θ, which indicates how far the food deviates from the direction R of the head mounted display 300, using information regarding the position of the head mounted display 300, information regarding the direction R of the head mounted display 300, and information regarding the position of the food.
In this way, based on the acquired information regarding the position of food, the processor of the head mounted display 300 may control the camera unit to capture an image of food if the food is detected within the view angle range 30 of the head mounted display 300.
In one example, the processor of the head mounted display 300 may control the camera unit to capture an image of food if the entire food is detected within the view angle range 30 of the head mounted display 300.
In another example, based on the acquired information regarding the position of food, the processor of the head mounted display 300 may control the camera unit to capture an image of food if the food is detected within the view angle range 30 of the head mounted display 300 and is located within a reference distance from the head mounted display 300.
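Combining the direction information θ with the view-angle and reference-distance tests, a capture decision might look like the following sketch. The view angle, reference distance, 2D geometry, and unit-vector assumption are all illustrative:

```python
import math

VIEW_ANGLE_DEG = 60.0      # assumed view angle of the camera unit
REFERENCE_DISTANCE = 0.3   # assumed reference distance in metres

def should_capture(hmd_pos, hmd_dir, food_pos):
    """Decide whether to capture the food: the angle theta between the
    display direction R and the vector toward the food must lie within half
    the view angle, and the food must be within the reference distance.
    Positions are 2D (x, y) tuples; hmd_dir is assumed to be a unit vector."""
    vx, vy = food_pos[0] - hmd_pos[0], food_pos[1] - hmd_pos[1]
    distance = math.hypot(vx, vy)
    if distance == 0:
        return True
    cos_theta = (vx * hmd_dir[0] + vy * hmd_dir[1]) / distance
    theta = math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))
    return theta <= VIEW_ANGLE_DEG / 2 and distance <= REFERENCE_DISTANCE
```

With the display at the origin facing along the x-axis, food straight ahead at 0.2 m passes both tests, while food at a 45° offset or beyond the reference distance does not.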
FIGs. 3 to 5 are views showing image capture of food located within the view angle range of the head mounted display.
As exemplarily shown in FIGs. 3 to 5, the head mounted display 300 may detect food located within the view angle range 30.
Here, the processor of the head mounted display 300 may control the sensor unit to acquire information regarding the position of food that the user will ingest.
The processor of the head mounted display 300 may extract information regarding at least one of the distance, direction, and height of food on the basis of the position of the head mounted display 300 from the acquired information regarding the position of food.
Then, the processor of the head mounted display 300, as exemplarily shown in FIG. 3, may control the camera unit to capture an image of food based on the acquired information regarding the position of food if the food is detected within the view angle range 30 of the head mounted display 300.
In some cases, the processor of the head mounted display 300 may control the camera unit to capture an image of the corresponding food only when the entire food is detected within the view angle range 30 of the head mounted display 300.
For instance, as exemplarily shown in FIG. 3, in the case in which foods such as a hamburger 2, a pizza 4, and a piece of cake 6 are located in front of the head mounted display 300, assuming that the hamburger 2 and the pizza 4 are wholly located within the view angle range 30 of the head mounted display 300 while the piece of cake 6 is only partially located within the view angle range 30, the processor of the head mounted display 300 may control the camera unit to capture images of the hamburger 2 and the pizza 4 without capturing an image of the piece of cake 6.
In another case, the processor of the head mounted display 300 may control the camera unit to capture an image of food based on the acquired information regarding the position of food if the food is detected within the view angle range 30 of the head mounted display 300 and is located within a reference distance from the head mounted display 300.
For instance, as exemplarily shown in FIGs. 4 and 5, in the case in which food, such as the hamburger 2, the pizza 4, and the piece of cake 6, are located in front of the head mounted display 300, the hamburger 2 may be wholly located within the view angle range 30 of the head mounted display 300 and may be located within a reference distance d.
That is, the hamburger 2 may be located within a reference distance d from the camera unit 110 of the head mounted display 300.
For instance, a distance d11 between the hamburger 2 and the camera unit 110 of the head mounted display 300 may be less than the reference distance d.
The pizza 4 may be wholly located within the view angle range 30 of the head mounted display 300 and be located outside the reference distance d.
That is, the pizza 4 may be located outside the reference distance d from the camera unit 110 of the head mounted display 300.
For instance, a distance d12 between the pizza 4 and the camera unit 110 of the head mounted display 300 may be greater than the reference distance d.
The piece of cake 6 may be only partially located within the view angle range 30 of the head mounted display 300.
In this case, the processor of the head mounted display 300 may control the camera unit to capture only an image of the hamburger 2 and to not capture an image of the pizza 4 and the piece of cake 6.
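For illustration, the capture-gating rule described above (photograph a food item only when it is wholly within the view angle range 30 and within the reference distance d) can be sketched as follows; the names and distance values are assumptions for the sketch, not details given in the text.

```python
from dataclasses import dataclass

@dataclass
class FoodItem:
    name: str
    distance: float        # metres from the camera unit 110
    fully_in_view: bool    # wholly inside the view angle range 30?

def should_capture(item: FoodItem, reference_distance: float) -> bool:
    """Return True when the camera unit should capture this item:
    it must be wholly in view AND within the reference distance."""
    return item.fully_in_view and item.distance <= reference_distance

# The FIG. 4/5 scenario: only the hamburger satisfies both conditions.
hamburger = FoodItem("hamburger", distance=0.3, fully_in_view=True)
pizza     = FoodItem("pizza",     distance=1.2, fully_in_view=True)
cake      = FoodItem("cake",      distance=0.4, fully_in_view=False)

d = 0.5  # reference distance (assumed value)
captured = [f.name for f in (hamburger, pizza, cake) if should_capture(f, d)]
# captured == ["hamburger"]
```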
This serves to capture only an image of food that the user brings to the mouth to eat, thereby providing clear information regarding food that the user will ingest.
Capturing images of all food located within the view angle range 30 of the head mounted display 300 may, upon provision of information regarding food that the user has ingested, provide the user with unwanted information regarding food that the user did not ingest, which may reduce the reliability of the provided information.
In the present disclosure, upon provision of information regarding food that the user has ingested, in order to enhance reliability of sensing of a real user motion of ingesting food, the processor of the head mounted display 300 may sense a distance variation between the head mounted display 300 and food.
For instance, in order to accurately perceive a user motion of bringing food to the mouth, the sensed food ingestion motion may include a distance variation between the head mounted display 300 and the food.
In one case, the sensed food ingestion motion may include positioning of food within the reference distance from the head mounted display 300.
In another case, the sensed food ingestion motion may include both a distance variation between the head mounted display 300 and the food and positioning of the food within the reference distance from the head mounted display 300.
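A minimal sketch of how the two proximity cues above (a decreasing HMD-to-food distance, and arrival within the reference distance) might be combined; the heuristic and the sample distances are assumptions, not the patent's method.

```python
def is_ingestion_motion(distances, reference_distance):
    """Treat a motion as 'bringing food to the mouth' when the sensed
    HMD-to-food distance decreases over time (distance variation) and
    the food ends up within the reference distance (illustrative)."""
    decreasing = all(b < a for a, b in zip(distances, distances[1:]))
    return decreasing and distances[-1] <= reference_distance

is_ingestion_motion([0.6, 0.4, 0.2], 0.3)   # -> True: approaching and close
is_ingestion_motion([0.2, 0.4, 0.6], 0.7)   # -> False: food moving away
```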
Hereinafter, FIGs. 6 to 13 are views showing images perceived by the user’s eye via a screen of the head mounted display according to the present disclosure.
FIG. 6 is a view showing information regarding food that the user will ingest.
As exemplarily shown in FIG. 6, the user may receive information regarding food that the user will ingest as augmented reality information via a screen of the head mounted display 300.
In one example, although food items, such as the hamburger 2, the pizza 4, and the piece of cake 6, are located in front of the head mounted display 300 and all of them are located within the view angle range 30 of the head mounted display 300, only food information 400 regarding the hamburger 2 located within the reference distance may be displayed.
Here, the displayed food information 400 may include at least one of the kind of food, the caloric content of the food, and components of the food, without being limited thereto.
In another case, the displayed food information 400 may include information regarding objects included in food, and may be displayed on a per object basis.
FIGs. 7 to 9 are views showing food information regarding an object included in food.
As exemplarily shown in FIGs. 7 to 9, the user may receive information regarding an object included in food that the user will ingest as augmented reality information via a screen of the head mounted display 300.
Here, the processor of the head mounted display 300 may detect at least one object included in the captured food image.
The processor of the head mounted display 300 may query the external server for information regarding the detected object, or extract the information from data stored therein.
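The query-or-fallback step just described (ask the external server, otherwise extract from locally stored data) can be sketched as follows; the `server` and `local_store` interfaces are hypothetical, not the patent's API.

```python
def get_object_info(object_image, server, local_store):
    """Query the external server for information regarding a detected
    object; if the query fails, fall back to data stored on the device."""
    try:
        return server.query(object_image)
    except ConnectionError:
        return local_store.get(object_image)
```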
For instance, as exemplarily shown in FIG. 7, the processor of the head mounted display 300 may detect a bun 2a of the hamburger 2 as an object included in the captured image of the hamburger 2.
Then, the processor of the head mounted display 300 may display information regarding the detected bun 2a of the hamburger 2.
Here, information regarding an object may include at least one of the kind of the object, the caloric content of the object, the weight of the object, and components of the object, without being limited thereto.
As exemplarily shown in FIG. 8, the processor of the head mounted display 300 may detect a patty 2b of the hamburger 2 as an object included in the captured image of the hamburger 2.
Then, the processor of the head mounted display 300 may display information 420 regarding the detected patty 2b of the hamburger 2.
In addition, as exemplarily shown in FIG. 9, the processor of the head mounted display 300 may detect a vegetable 2c of the hamburger 2 as an object included in the captured image of the hamburger 2.
Then, the processor of the head mounted display 300 may display information 430 regarding the detected vegetable 2c of the hamburger 2.
In this way, the processor of the head mounted display 300 may display not only information regarding food, but also information regarding an object included in the food, thereby providing the user with clear detailed information regarding food, which may result in enhanced reliability of information.
FIG. 10 is a view showing information regarding a food ingestion motion.
As exemplarily shown in FIG. 10, the sensor unit of the head mounted display 300 may sense a user motion of ingesting food.
Here, the sensor unit may include at least one of a sound sensor, a vibration sensor, and a muscle movement sensor, which serve to sense a user motion of ingesting food, without being limited thereto.
In one example, the sound sensor may sense sound generated while the user chews food, the vibration sensor may sense the food chewing intensity by the user, and the muscle movement sensor may sense the food chewing direction by the user.
In this case, the muscle movement sensor may include at least one of an Electromyography (EMG) sensor and a Force Sensing Resistor (FSR) sensor, without being limited thereto.
In one example, among the aforementioned muscle movement sensors, the EMG sensor is configured to detect an electromyography signal generated differently according to a muscular contraction degree.
Next, the FSR sensor may measure resistance that is reduced as force applied to a surface of the sensor increases, thereby detecting a signal indicating movement of muscles.
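As a hardware-level illustration of the FSR behavior just described, an FSR is commonly read through a voltage divider, where the output voltage rises as applied force lowers the sensor's resistance. The circuit values and wiring below are assumptions, not taken from the text.

```python
def fsr_resistance(v_out, v_supply=3.3, r_fixed=10_000):
    """Estimate FSR resistance (ohms) from a voltage-divider reading.
    Assumed wiring: v_supply -- FSR -- v_out node -- r_fixed -- GND,
    so v_out rises (and the computed resistance falls) as force grows."""
    if v_out <= 0:
        return float("inf")   # no pressure: effectively an open circuit
    return r_fixed * (v_supply - v_out) / v_out

# A half-supply reading means the FSR matches the 10 kOhm fixed resistor:
fsr_resistance(1.65)   # ~ 10000 ohms
```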
Accordingly, the processor of the head mounted display 300 may extract and display information related to a user motion of ingesting food upon receiving a signal sensed by the sensor unit.
For instance, the processor of the head mounted display 300 may extract and display information regarding the food chewing direction contained in the user motion of ingesting food upon receiving a signal sensed by the muscle movement sensor of the sensor unit.
Here, the muscle movement sensor may include a first sensor to sense movement of the right jaw muscles of the user, and a second sensor to sense movement of the left jaw muscles of the user.
In some cases, the muscle movement sensor may be a single sensor to sense movement of the right jaw muscles and movement of the left jaw muscles of the user at the same time.
Then, the processor of the head mounted display 300 may extract and display information regarding the number of chews contained in the user motion of ingesting food upon receiving a signal sensed by the muscle movement sensor and the vibration sensor of the sensor unit.
Then, the processor of the head mounted display 300 may extract and display information regarding the food chewing intensity contained in the user motion of ingesting food upon receiving a signal sensed by the muscle movement sensor, the vibration sensor, and the sound sensor of the sensor unit.
In addition, the processor of the head mounted display 300 may extract and display information regarding the number of swallows contained in the user motion of ingesting food upon receiving signals sensed by the muscle movement sensor and the sound sensor of the sensor unit.
In this way, the processor of the head mounted display 300 may display information regarding the food ingestion motion based on the sensed food ingestion motion from the sensor unit.
Here, the displayed information regarding the food ingestion motion may include at least one of the number of chews, the chewing direction, the chewing intensity, and the number of swallows.
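The sensor-to-metric pairings enumerated above can be summarized as a lookup; this is a configuration sketch with assumed key names, not a concrete driver API.

```python
# Which sensed signals feed each displayed ingestion metric, following the
# pairings stated in the text.
METRIC_SOURCES = {
    "chewing_direction": {"muscle"},
    "chew_count":        {"muscle", "vibration"},
    "chewing_intensity": {"muscle", "vibration", "sound"},
    "swallow_count":     {"muscle", "sound"},
}

def available_metrics(active_sensors):
    """Metrics the processor can extract from the sensors currently
    delivering signals."""
    active = set(active_sensors)
    return {m for m, needed in METRIC_SOURCES.items() if needed <= active}

# With only muscle + vibration signals, intensity and swallows are unavailable:
available_metrics({"muscle", "vibration"})
# -> {"chewing_direction", "chew_count"}
```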
FIGs. 11 and 12 are views showing information regarding the intake of food.
As exemplarily shown in FIGs. 11 and 12, after displaying information regarding the food ingestion motion based on the sensed food ingestion motion, the processor of the head mounted display 300 may check whether or not food currently remaining after ingestion is detected within the view angle range of the head mounted display 300.
If the processor of the head mounted display 300 detects the presence of food currently remaining after ingestion within the view angle range of the head mounted display 300, the processor may control the camera unit to capture an image of the remaining food.
Then, the processor of the head mounted display 300 may compare the captured image of the remaining food with an initially captured image of food to analyze the current intake of food.
For instance, upon analyzing the current intake of food, the processor of the head mounted display 300 may first compare the captured image of the remaining food with the initially captured image of food in terms of the size, and calculate a difference between the sizes of the images.
Next, the processor of the head mounted display 300 may calculate the intake of food based on the calculated difference.
Next, the processor of the head mounted display 300, as exemplarily shown in FIG. 11, may extract and display information 600 regarding the intake of food based on the calculated intake of food.
Here, the information 600 regarding the intake of food may include at least one of the kind of food that the user has taken, the caloric content of the food, the weight of the food, and components of the food.
In one example, upon extracting information regarding the intake of food, the processor of the head mounted display 300 may calculate caloric content, weight, and component values with regard to the intake of food based on caloric content, weight, and component values with regard to the initially captured image of food.
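The intake analysis above (compare image sizes, then scale the stored per-item values by the consumed fraction) can be sketched as follows; the area figures and value schema are assumptions for illustration.

```python
def estimate_intake(initial_area, remaining_area, initial_info):
    """Compare the remaining-food image with the initially captured image
    in terms of size, and scale the whole-item values (calories, weight,
    components, ...) by the consumed fraction. `initial_info` maps value
    names to whole-item amounts (assumed schema)."""
    eaten_fraction = max(0.0, (initial_area - remaining_area) / initial_area)
    return {k: v * eaten_fraction for k, v in initial_info.items()}

# Half of a 550 kcal, 250 g hamburger eaten:
estimate_intake(100.0, 50.0, {"kcal": 550.0, "grams": 250.0})
# -> {"kcal": 275.0, "grams": 125.0}
```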
In some cases, the processor of the head mounted display 300, as exemplarily shown in FIG. 12, may extract and display information regarding each object included in food based on the calculated intake of food.
That is, the processor of the head mounted display 300 may extract and display information 610 regarding the intake of each object included in the food.
Here, the information 610 regarding the intake of each object may include at least one of the kind of an ingested object, the caloric content of the ingested object, the weight of the ingested object, and components of the ingested object.
For instance, as exemplarily shown in FIG. 12, if food is the hamburger 2, objects of the hamburger 2 may be a bun, a patty, and a vegetable.
Accordingly, the head mounted display 300 according to the present disclosure may provide the user with the current intake of food that the user ingests in real time, which may assist the user in easily and conveniently regulating portion size.
FIG. 13 is a view showing total information regarding food that the user has ingested.
As exemplarily shown in FIG. 13, the processor of the head mounted display 300 may store information regarding the intake of food and information regarding a food ingestion motion in real time based on the sensed food ingestion motion.
Then, the processor of the head mounted display 300 may calculate a total intake of food and total result values with regard to the food ingestion motion based on the stored information regarding the intake of food as well as information regarding the food ingestion motion.
Here, the processor of the head mounted display 300 may calculate the total intake of food and the total result values with regard to the food ingestion motion if a sensing signal indicating the food ingestion motion is not received from the sensor unit for a given time.
Then, the processor of the head mounted display 300 may extract recommended information regarding the food ingestion motion based on the calculated total intake of food and the total result values with regard to the food ingestion motion.
Here, the recommended information regarding the food ingestion motion may be previously stored in the storage unit, or may be transmitted from the external server, according to the total intake of food and the total result values with regard to the food ingestion motion.
Next, the processor of the head mounted display 300 may display total result information 700 including at least one of information 710 regarding the total intake of food, total result information 720 regarding the food ingestion motion, and recommended information 730 regarding the food ingestion motion.
Here, the recommended information 730 regarding the food ingestion motion may include at least one of the recommended number of chews, recommended chewing direction, recommended chewing intensity, deficient components, excessive components, and recommended food.
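One way the recommended information might be derived from the calculated totals; the thresholds and target schema below are invented for illustration, since the text only states that recommendations come from the storage unit or the external server.

```python
def recommend(totals, targets):
    """Compare total result values against target values and emit
    recommendation strings (illustrative rules only)."""
    tips = []
    if totals["chew_count"] < targets["chew_count"]:
        tips.append("recommended number of chews: %d" % targets["chew_count"])
    deficient = sorted(c for c, amt in targets["components"].items()
                       if totals["components"].get(c, 0) < amt)
    if deficient:
        tips.append("deficient components: " + ", ".join(deficient))
    return tips
```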
Accordingly, the head mounted display 300 of the present disclosure may provide the user with the total intake of food, total results of the food ingestion motion, and detailed recommended information based thereon, thereby assisting the user in easily and conveniently improving eating habits.
FIGs. 14 and 15 are flowcharts explaining a method of providing food ingestion information of a head mounted display according to the present disclosure.
As exemplarily shown in FIGs. 14 and 15, first, the processor of the head mounted display may control the sensor unit to acquire information regarding the position of food that the user will ingest.
Then, the processor of the head mounted display may extract information regarding at least one of the distance, direction, and height of the food on the basis of the position of the head mounted display based on the acquired information regarding the position of the food.
Next, the processor of the head mounted display may check whether or not the food is detected within a view angle range of the head mounted display according to the acquired information regarding the position of the food.
Next, the processor of the head mounted display may control the camera unit to capture an image of the food if the entire food is detected within the view angle range of the head mounted display (S10).
In some cases, after checking that the entire food is detected within the view angle range of the head mounted display, the processor of the head mounted display may check whether or not food is present within a reference distance from the head mounted display, and thereafter control the camera unit to capture an image of the food (S10).
Next, the processor of the head mounted display may control the communication unit to query the external server for information regarding the captured image of food (S20).
In some cases, the processor of the head mounted display may extract information regarding the captured image of food from data previously stored in the storage unit.
Next, the processor of the head mounted display may receive a response including food information corresponding to the query from the external server via the communication unit (S30).
Next, the processor of the head mounted display may control the display unit to display augmented reality information related to the image of food (S40).
Here, the augmented reality information may be received from the external server, or may be previously stored in the storage unit of the head mounted display.
In this case, the displayed food information may include at least one of the kind of the food, the caloric content of the food, the weight of the food, and components of the food.
In another case, the processor of the head mounted display may additionally detect at least one object included in the captured image of food from the camera unit.
Then, the processor of the head mounted display may query the external server for information regarding the detected object via the communication unit.
In some cases, the processor of the head mounted display may extract information regarding the object included in the captured image of food from data previously stored in the storage unit.
Next, the processor of the head mounted display may receive a response including information regarding the object corresponding to the query from the external server via the communication unit.
Next, the processor of the head mounted display may control the display unit to display augmented reality information related to an image of the object included in the food.
Here, the displayed information regarding the object may include at least one of the kind of the object, the caloric content of the object, the weight of the object, and components of the object.
Next, the processor of the head mounted display may control the sensor unit to sense a user motion of ingesting food (S50).
Here, the sensor unit may include at least one of the sound sensor, the vibration sensor, and the muscle movement sensor, which serve to sense the user motion of ingesting food, without being limited thereto.
In one example, the sound sensor may sense sound generated while the user chews food, the vibration sensor may sense the food chewing intensity by the user, and the muscle movement sensor may sense the food chewing direction by the user.
In this case, the muscle movement sensor may include at least one of an Electromyography (EMG) sensor and a Force Sensing Resistor (FSR) sensor, without being limited thereto.
Next, the processor of the head mounted display may check whether or not a sensing signal is received from the sensor unit (S60).
Then, if the sensing signal from the sensor unit is received within a first time, the processor of the head mounted display may store and display information regarding the food ingestion motion in real time based on the sensed food ingestion motion (S70).
Here, the displayed information regarding the food ingestion motion may be at least one of the number of chews, chewing direction, and chewing intensity, or may include the number of swallows.
In some cases, as exemplarily shown in FIG. 15, the processor of the head mounted display may analyze food currently remaining after ingestion, and analyze the current intake of food.
For instance, after displaying information regarding the food ingestion motion based on the sensed food ingestion motion (S61), the processor of the head mounted display may check whether or not food currently remaining after ingestion is detected within the view angle range of the head mounted display.
Then, if the currently remaining food after ingestion is detected within the view angle range of the head mounted display, the processor of the head mounted display may control the camera unit to capture an image of the remaining food (S62).
Next, the processor of the head mounted display may compare the captured image of the remaining food with an initially captured image of food to thereby analyze the current intake of food (S63).
Here, upon analyzing the current intake of food, the processor of the head mounted display may compare the captured image of the remaining food with an initially captured image of food in terms of the size to calculate a difference between the sizes of the images, and calculate the intake of food corresponding to the calculated difference.
Next, the processor of the head mounted display may extract information regarding the intake of food based on the current intake of food (S64).
Here, upon extraction of the information regarding the intake of food, the processor of the head mounted display may calculate caloric content, weight, and component values with regard to the intake of food based on caloric content, weight, and component values with regard to the initially captured image of food.
Then, the processor of the head mounted display may display information regarding the intake of food on the display unit.
Here, the displayed information regarding the intake of food may include at least one of the kind of ingested food, the caloric content of the ingested food, the weight of the ingested food, and components of the food.
However, in operation S60, if a sensing signal indicating the food ingestion motion is not received from the sensor unit, the processor of the head mounted display may check whether or not a first time has passed since the last sensing signal was received (S80).
Next, if the sensing signal indicating the food ingestion motion is not received from the sensor unit even after the first time has passed, the processor of the head mounted display may calculate a total intake of food, and total result values with regard to the food ingestion motion (S90).
Next, the processor of the head mounted display may extract recommended information regarding the food ingestion motion based on the calculated total intake of food and the calculated total result values of the food ingestion motion (S100).
Next, the processor of the head mounted display may display information regarding the total intake of food, total result information regarding the food ingestion motion, and recommended information regarding the food ingestion motion on the display unit (S110).
Here, the displayed recommended information regarding the food ingestion motion may include at least one of the recommended number of chews, recommended chewing direction, recommended chewing intensity, deficient components, excessive components, and recommended food.
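Pulling operations S10 to S110 together, the overall control flow can be sketched as a loop; the `hmd` interface and its method names are assumptions, not the patent's API.

```python
import time

def ingestion_session(hmd, first_time=10.0):
    """Illustrative control flow for one meal session."""
    image = hmd.capture_food_image()                      # S10: capture food image
    info = hmd.query_food_info(image)                     # S20-S30: query server, receive food info
    hmd.display_ar(info)                                  # S40: display AR information
    last_signal = time.monotonic()
    while True:
        signal = hmd.poll_ingestion_sensor()              # S50-S60: sense ingestion motion
        if signal is not None:
            hmd.display_motion_info(signal)               # S70: show motion info in real time
            last_signal = time.monotonic()
        elif time.monotonic() - last_signal > first_time: # S80: first time elapsed with no signal
            totals = hmd.compute_totals()                 # S90: total intake and result values
            recs = hmd.recommendations(totals)            # S100: recommended information
            hmd.display_results(totals, recs)             # S110: display totals and recommendations
            return totals
```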
As described above, the head mounted display according to the present disclosure may provide the user with information regarding food that the user ingests, information regarding the intake of food, and information regarding the food ingestion motion in real time, thereby allowing the user to easily and conveniently acquire information regarding food that the user ingests as well as information regarding eating habits in real time even while the user ingests food.
In addition, the head mounted display according to the present disclosure may provide the user with recommended information regarding the food ingestion motion, thereby assisting the user in easily and conveniently controlling portion size and improving eating habits.
FIGs. 16 to 28 are views showing one embodiment in which the user uses the head mounted display according to the present disclosure. Here, FIGs. 19 to 28 are views showing images perceived by the user’s eye via the screen of the head mounted display according to the present disclosure.
As exemplarily shown in FIG. 16, if the user attempts to ingest various food, such as, for example, the hamburger 2, the pizza 4, and the piece of cake 6, the user may wish to acquire various information including, e.g., components and caloric content of food that the user will ingest.
Accordingly, as exemplarily shown in FIG. 17, the user may wear the head mounted display 300 according to the present disclosure, and view the various food, such as the hamburger 2, the pizza 4, and the piece of cake 6 that the user will ingest via the head mounted display 300.
Next, as exemplarily shown in FIG. 18, if the user operates the worn head mounted display 300, the head mounted display 300 may acquire information regarding the position of food that the user will ingest.
Next, as exemplarily shown in FIG. 19, if the user picks up the hamburger 2 to ingest the hamburger 2 among the various food, such as the hamburger 2, the pizza 4, and the piece of cake 6, the head mounted display 300 may check whether or not the hamburger 2 is detected within a view angle range of the head mounted display 300 based on acquired information regarding the position of the hamburger 2.
Here, upon checking that the entire hamburger 2 is detected within the view angle range of the head mounted display 300, the head mounted display 300 may control the camera unit 110 to capture an image of the hamburger 2 only when the hamburger 2 is present within a reference distance from the head mounted display 300.
Next, as exemplarily shown in FIG. 20, as the head mounted display 300 displays augmented reality information related to the captured image of the hamburger 2, the user may view various information regarding the hamburger 2.
Here, displayed information 400 regarding the hamburger 2 may include the caloric content and weight of the hamburger 2, and components of the hamburger 2, such as, for example, fats and carbohydrates.
Next, as exemplarily shown in FIG. 21, while the user eats the hamburger 2, the head mounted display 300 may sense a user motion of ingesting the hamburger 2.
Next, as exemplarily shown in FIG. 22, if the head mounted display 300 displays information regarding the user motion of ingesting the hamburger 2 based on the sensed user motion of ingesting the hamburger 2 in real time, the user may view various information regarding his/her motion of ingesting the hamburger 2.
Here, displayed information 500 regarding the user motion of ingesting the hamburger 2 may include, for example, the number of chews and the chewing direction.
Then, as exemplarily shown in FIG. 23, the head mounted display 300 may continuously update and display the information 500 regarding the sensed user motion of ingesting the hamburger 2.
Accordingly, the user may easily and conveniently perceive his/her motion of ingesting the hamburger 2 and improve his/her eating habits while eating the hamburger 2.
Next, as exemplarily shown in FIG. 24, if the user selects the pizza 4 among the remaining food after eating the hamburger 2, the head mounted display 300 may check whether or not the pizza 4 is detected within the view angle range of the head mounted display 300 based on acquired information regarding the position of the pizza 4.
Here, if it is checked that the entire pizza 4 is detected within the view angle range of the head mounted display 300, the head mounted display 300 may control the camera unit to capture an image of the pizza 4.
Then, as the head mounted display 300 displays augmented reality information related to the captured image of the pizza 4, the user may view various information regarding the pizza 4.
Here, displayed information 400 regarding the pizza 4 may include the caloric content and weight of the pizza 4, and components of the pizza 4, such as, for example, fats and carbohydrates.
Then, as exemplarily shown in FIG. 25, if the user picks up a slice of pizza 4a among the pizza 4 to eat the slice of pizza 4a, the head mounted display 300 may control the camera unit to capture an image of the slice of pizza 4a under the assumption that the entire slice of pizza 4a is detected within the view angle range of the head mounted display 300 and the slice of pizza 4a is located within a reference distance from the head mounted display 300.
Then, if the head mounted display 300 displays augmented reality information related to the captured image of the slice of pizza 4a, the user may view various information regarding the slice of pizza 4a.
Here, displayed information 410 regarding the slice of pizza 4a may include the caloric content and weight of the slice of pizza 4a, and components of the slice of pizza 4a, such as, for example, fats and carbohydrates.
Next, as exemplarily shown in FIG. 26, while the user eats the slice of pizza 4a, the head mounted display 300 may sense the user motion of ingesting the slice of pizza 4a.
Then, if the head mounted display 300 displays information regarding the sensed user motion of ingesting the slice of pizza 4a in real time based on the sensed user motion of ingesting the slice of pizza 4a, the user may view various information (not shown) regarding his/her motion of ingesting the slice of pizza 4a.
Then, as exemplarily shown in FIG. 27, if the user no longer eats food, that is, if a sensing signal indicating the user motion of ingesting food is not received from the sensor unit for a given time, the head mounted display 300 may calculate a total intake of food that the user has ingested up to now and total result values with regard to the food ingestion motion.
Then, the head mounted display 300 may extract recommended information regarding the food ingestion motion based on the calculated total intake of food and the calculated total result values with regard to the food ingestion motion.
Next, as exemplarily shown in FIG. 28, if the head mounted display 300 displays information regarding the total intake of food, total result information regarding the food ingestion motion, and recommended information regarding the food ingestion motion, the user may acquire the total intake of food that the user has ingested up to now and results of the food ingestion motion, and may additionally acquire various recommended information regarding the food ingestion motion.
Here, displayed recommended information 700 regarding the food ingestion motion may include at least one of the recommended number of chews, recommended chewing direction, recommended chewing intensity, deficient components, excessive components, and recommended food.
In this way, the user may acquire various information regarding the kind, quantity, and components of food that the user has ingested as well as various recommended information regarding eating habits, which may assist the user in easily and conveniently controlling portion size and improving eating habits.
The head mounted display and the method of controlling the same according to one embodiment are not limited to the configurations and methods of the above described embodiments, and all or some of the embodiments may be selectively combined to achieve various modifications.
Meanwhile, the head mounted display and the method of controlling the same according to the present disclosure may be implemented as code that may be written on a processor readable recording medium and thus read by a processor provided in a network device. The processor readable recording medium may be any type of recording device in which data is stored in a processor readable manner. Examples of the processor readable recording medium may include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disc, and an optical data storage device. In addition, the processor readable recording medium includes a carrier wave (e.g., data transmission over the Internet). Also, the processor readable recording medium may be distributed over a plurality of computer systems connected to a network so that processor readable code is written thereto and executed therefrom in a decentralized manner.
As is apparent from the above description, according to one embodiment, as a result of sensing a user motion of ingesting food, and displaying information regarding food that the user ingests, information regarding the intake of food, and information regarding the user motion of ingesting food, the user may easily and conveniently acquire information regarding food that the user ingests and information regarding eating habits in real time while the user eats food.
In addition, displaying recommended information related to the user motion of ingesting food may assist the user in easily and conveniently controlling portion size and improving eating habits.
As described above, the foregoing "Best Mode" section has sufficiently described the implementation of the disclosure.
As described above, the disclosure may be wholly or partially applied to a head mounted display, which may provide information regarding food that a user ingests and information regarding eating habits, and a method of controlling the same.

Claims (20)

  1. A head mounted display (HMD) comprising:
    a camera unit configured to capture an image of food that a user will ingest;
    a display unit configured to display augmented reality information related to the food;
    a communication unit configured to transmit and receive data;
    a sensor unit configured to sense a user ingestion motion of the food; and
    a processor configured to control the camera unit, the display unit, the communication unit, and the sensor unit,
    wherein the processor is further configured to:
    acquire information regarding the captured image of the food,
    acquire and display augmented reality information based on the food information, and
    display at least one of information regarding the intake of the food and information regarding the food ingestion motion, based on the sensed food ingestion motion.
  2. The HMD according to claim 1, wherein the sensed food ingestion motion includes a distance variation between the HMD and the food.
  3. The HMD according to claim 1, wherein the sensed food ingestion motion includes positioning of the food within a reference distance from the HMD.
  4. The HMD according to claim 1, wherein the sensed food ingestion motion includes a distance variation between the HMD and the food, and positioning of the food within a reference distance from the HMD.
  5. The HMD according to claim 1, wherein the processor is further configured to capture an image of the food if the entire food is detected within a view angle range of the HMD and the food is detected within a reference distance from the HMD.
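As an illustrative reading of the capture condition in claim 5, the trigger reduces to a simple geometric check: the food must lie entirely inside the camera's view and within the reference distance. The bounding-box representation, the distance value, and the 50 cm threshold below are hypothetical choices for the sketch, not values taken from the disclosure:

```python
REFERENCE_DISTANCE_CM = 50.0  # assumed threshold; the claim leaves it unspecified

def should_capture(food_bbox, view_bbox, distance_cm,
                   reference_distance_cm=REFERENCE_DISTANCE_CM):
    """Return True when the whole food bounding box lies inside the
    camera's view rectangle and the food is close enough to the HMD.

    Boxes are (x0, y0, x1, y1) tuples in image coordinates.
    """
    fx0, fy0, fx1, fy1 = food_bbox
    vx0, vy0, vx1, vy1 = view_bbox
    # "entire food ... within a view angle range": full containment test
    fully_visible = vx0 <= fx0 and vy0 <= fy0 and fx1 <= vx1 and fy1 <= vy1
    # "within a reference distance from the HMD"
    return fully_visible and distance_cm <= reference_distance_cm
```

A real device would obtain the bounding boxes from an object detector and the distance from a depth sensor; this sketch only shows the two-part condition itself.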
  6. The HMD according to claim 1, wherein the processor is further configured to detect at least one object included in the captured image of food, and configured to query for information regarding the detected object.
  7. The HMD according to claim 6, wherein the information regarding the object includes at least one of a kind of the object, a caloric content of the object, a weight of the object, and components of the object.
  8. The HMD according to claim 1, wherein the food information includes at least one of a kind of the food, a caloric content of the food, a weight of the food, and components of the food.
  9. The HMD according to claim 1, wherein the displayed information regarding the food ingestion motion includes at least one of a number of times of chewing the food, a chewing direction, a chewing intensity, and a number of times of swallowing the food.
  10. The HMD according to claim 1, wherein, after displaying the information regarding the food ingestion motion based on the sensed food ingestion motion, the processor is further configured to:
    capture an image of food currently remaining after ingestion if the remaining food is detected within a view angle range of the HMD,
    compare the captured image of the remaining food with an initially captured image of food to analyze a current intake of food, and
    extract and display information regarding the intake of the food based on the current intake of food.
  11. The HMD according to claim 10, wherein the information regarding the intake of food includes at least one of a kind of ingested food, a caloric content of the ingested food, a weight of the ingested food, and components of the ingested food.
  12. The HMD according to claim 10, wherein, upon analyzing the current intake of ingested food, the processor is further configured to:
    compare the captured image of the remaining food with the initially captured image of food in terms of size to calculate a difference, and
    calculate the intake of food corresponding to the calculated difference.
  13. The HMD according to claim 10, wherein, upon extracting the information regarding the intake of food, the processor is further configured to calculate caloric content, weight, and component values with regard to the intake of food based on caloric content, weight, and component values with regard to the initially captured image of food.
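Claims 12 and 13 together describe estimating intake from the size difference between the initial and remaining-food images and scaling the stored nutrition values by that difference. A minimal sketch of the arithmetic, in which the pixel areas and the nutrition keys are illustrative assumptions rather than anything specified by the claims:

```python
def estimate_intake(initial_area_px, remaining_area_px, initial_info):
    """Estimate intake per claims 12-13: the eaten fraction is the relative
    decrease in apparent food size, and each stored value (calories, weight,
    components) for the initially captured food scales by that fraction."""
    eaten_fraction = max(0.0, (initial_area_px - remaining_area_px) / initial_area_px)
    return {key: value * eaten_fraction for key, value in initial_info.items()}

# Example: 60% of the food's apparent area is gone after ingestion.
intake = estimate_intake(
    initial_area_px=10000,
    remaining_area_px=4000,
    initial_info={"calories_kcal": 500.0, "weight_g": 300.0},
)
# intake is {"calories_kcal": 300.0, "weight_g": 180.0}
```

Comparing 2-D apparent areas is only a proxy for consumed volume, so a practical system would need calibration; the sketch shows the proportional-scaling idea only.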
  14. The HMD according to claim 1, wherein the processor is further configured to:
    store information regarding the intake of food and information regarding the food ingestion motion in real time based on the sensed food ingestion motion,
    calculate a total intake of food and total result values with regard to the food ingestion motion based on the stored information regarding the intake of food and the stored information regarding the food ingestion motion,
    extract recommended information regarding the food ingestion motion based on the calculated total intake of food and the calculated total result values with regard to the food ingestion motion, and
    display information regarding the total intake of food, total result information regarding the food ingestion motion, and the recommended information regarding the food ingestion motion.
  15. The HMD according to claim 14, wherein the processor is configured to calculate the total intake of food and the total result values with regard to the food ingestion motion when a sensing signal indicating the food ingestion motion is not received from the sensor unit for a given time.
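The accumulate-then-finalize behaviour of claims 14 and 15 can be sketched as a small session object: per-motion records are stored in real time, and totals are computed only once no ingestion motion has been sensed for a given idle period. The record keys, the timeout value, and the timestamp handling below are assumptions made for illustration:

```python
import time

class IngestionSession:
    """Sketch of claims 14-15: store per-bite records as they are sensed and
    aggregate them once ingestion motions stop for idle_timeout_s seconds."""

    def __init__(self, idle_timeout_s=600.0):  # assumed "given time"
        self.idle_timeout_s = idle_timeout_s
        self.records = []          # one dict per sensed ingestion motion
        self.last_motion_ts = None

    def on_ingestion_motion(self, record, ts=None):
        """Store one sensed motion (e.g. {"chews": 12, "calories_kcal": 40.0})."""
        self.last_motion_ts = time.monotonic() if ts is None else ts
        self.records.append(record)

    def totals_if_finished(self, now=None):
        """Return aggregated totals when the meal appears over, else None."""
        now = time.monotonic() if now is None else now
        if self.last_motion_ts is None or now - self.last_motion_ts < self.idle_timeout_s:
            return None  # still eating: keep accumulating
        totals = {}
        for record in self.records:
            for key, value in record.items():
                totals[key] = totals.get(key, 0) + value
        return totals
```

Extracting the recommended information of claim 14 would then compare these totals against target values (e.g. a recommended chew count); that comparison step is omitted here.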
  16. The HMD according to claim 14, wherein the recommended information regarding the food ingestion motion includes at least one of recommended number of times of chewing the food, recommended chewing direction, recommended chewing intensity, deficient components, excessive components, and recommended food.
  17. The HMD according to claim 1, wherein the sensor unit includes at least one of a sound sensor, a vibration sensor, and a muscle movement sensor.
  18. The HMD according to claim 17, wherein the muscle movement sensor includes at least one of an Electromyography (EMG) sensor and a Force Sensing Resistor (FSR) sensor.
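For the sensors of claims 17 and 18, one plausible way (a hypothetical sketch, not something the claims specify) to turn a rectified and smoothed EMG or FSR muscle-activity signal into the chew count of claim 9 is to count rising crossings of an activation threshold; the threshold value here is an assumption:

```python
def count_chews(activity_samples, threshold=0.5):
    """Count chewing events as rising threshold crossings in a normalized,
    rectified and smoothed muscle-activity signal (EMG or FSR output)."""
    chews = 0
    above = False
    for sample in activity_samples:
        if sample >= threshold and not above:
            chews += 1       # signal just rose above threshold: one chew onset
            above = True
        elif sample < threshold:
            above = False    # re-arm for the next rising crossing
    return chews
```

Raw EMG is noisy and bipolar, so real firmware would band-pass filter, rectify, and envelope the signal (and likely add hysteresis) before a threshold test like this one.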
  19. A method of controlling a head mounted display (HMD) including a sensor unit configured to sense a food ingestion motion, the method comprising:
    capturing an image of food that a user will ingest;
    acquiring information regarding the captured image of food;
    acquiring and displaying augmented reality information based on the food information;
    sensing the food ingestion motion; and
    displaying at least one of information regarding the intake of food and information regarding the food ingestion motion, based on the sensed food ingestion motion.
  20. The method according to claim 19, further comprising:
    calculating a total intake of food and total result values with regard to the food ingestion motion if a sensing signal indicating the food ingestion motion is not received from the sensor unit for a predetermined time;
    extracting recommended information regarding the food ingestion motion based on the calculated total intake of food and the calculated total result values with regard to the food ingestion motion; and
    displaying information regarding the total intake of food, total result information regarding the food ingestion motion, and the recommended information regarding the food ingestion motion.
PCT/KR2014/000392 2013-09-30 2014-01-14 Head mounted display and method of controlling the same WO2015046673A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20130116376A KR20150037108A (en) 2013-09-30 2013-09-30 Head mounted display and method for controlling the same
KR10-2013-0116376 2013-09-30
US201314133221A 2013-12-18 2013-12-18
US14/133,221 2013-12-18

Publications (1)

Publication Number Publication Date
WO2015046673A1 true WO2015046673A1 (en) 2015-04-02

Family

ID=52743783

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2014/000392 WO2015046673A1 (en) 2013-09-30 2014-01-14 Head mounted display and method of controlling the same

Country Status (1)

Country Link
WO (1) WO2015046673A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023187573A1 (en) 2022-03-28 2023-10-05 Centre For Research And Technology Hellas Eating rate estimation through a mobile device camera

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060122111A (en) * 2005-05-25 2006-11-30 주식회사 팬택 Method of calculating calorie using the image of food
JP2007122311A (en) * 2005-10-27 2007-05-17 Matsushita Electric Ind Co Ltd Nutrition analysis unit
US20080267444A1 (en) * 2005-12-15 2008-10-30 Koninklijke Philips Electronics, N.V. Modifying a Person's Eating and Activity Habits
US20120235827A1 (en) * 2011-03-14 2012-09-20 Google Inc. Methods and Devices for Augmenting a Field of View
KR20130034125A (en) * 2011-09-28 2013-04-05 송영일 Augmented reality function glass type monitor



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 14848531; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 14848531; Country of ref document: EP; Kind code of ref document: A1