US20120281888A1 - Electronic apparatus and image display method - Google Patents
- Publication number
- US20120281888A1
- Authority
- US
- United States
- Prior art keywords
- image
- viewer
- information
- module
- still images
- Prior art date
- Legal status
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/30—Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/40—Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/945—User interactive design; Environments; Toolboxes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- Embodiments described herein relate generally to an electronic apparatus which displays an image, and an image display method applied to the electronic apparatus.
- the digital photo frame has, for example, a function of successively displaying, at regular intervals, still images in a storage medium connected to the digital photo frame.
- personal computers, digital cameras, etc., as well as the digital photo frames have the function of successively displaying still images at regular intervals.
- Jpn. Pat. Appln. KOKAI Publication No. 2009-171176 discloses a reproduction apparatus which recognizes a face image of a person captured by a camera, and displays favorite image files or audio files which are registered in association with the face image if the recognized face image is a registered face image.
- In this reproduction apparatus, the user's face image, and the image files and audio files selected by the user, are registered in advance.
- pre-registered image files or audio files are reproduced in accordance with the recognized user's face image.
- If the files stored in the reproduction apparatus have been updated or if the files to be reproduced are to be changed, the user is required to change once again the files that are registered as favorites.
- If the number of files stored in the reproduction apparatus is very large, it may be time-consuming for the user to select, from the many files, some files which are to be registered.
- FIG. 1 shows an exemplary external appearance of an electronic apparatus according to an embodiment.
- FIG. 2 shows an exemplary system configuration of the electronic apparatus according to the embodiment.
- FIG. 3 is an exemplary block diagram showing the functional structure of a content reproduction application program which runs on the electronic apparatus according to the embodiment.
- FIG. 4 shows an example of the structure of index information used by the content reproduction application program of FIG. 3 .
- FIG. 5 is an exemplary conceptual view for explaining an example of the operation of group extraction which is executed by the content reproduction application program of FIG. 3 .
- FIG. 6 is an exemplary conceptual view for explaining an example of the operation of image selection and image display, which is executed by the content reproduction application program of FIG. 3 .
- FIG. 7 is an exemplary flowchart illustrating the procedure of an image display process executed by the content reproduction application program of FIG. 3 .
- an electronic apparatus comprises a viewer image generating module, a viewer recognition module, a group extraction module, and an image display module.
- the viewer image generating module generates an image of a viewer by capturing (imaging) the viewer.
- the viewer recognition module detects a face image in the generated image and recognizes the viewer corresponding to the detected face image.
- the group extraction module extracts, from a plurality of groups each comprising still images, groups comprising at least one of a still image comprising the face image of the viewer and a still image imported by the viewer.
- the image display module displays still images in the extracted groups on a screen.
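- The four modules above form a capture, recognize, extract, display pipeline. As a rough illustration only (the patent specifies no code; the Python function and parameter names below are hypothetical stand-ins for the modules), the flow can be sketched with the modules passed in as callables:

```python
def display_for_viewer(capture, recognize, extract, display):
    """Sketch of the display flow: capture an image of the viewer,
    recognize the viewer's face, extract the related image groups,
    and display each still image in those groups."""
    user_image = capture()           # viewer image generating module
    viewer = recognize(user_image)   # viewer recognition module
    if viewer is None:
        return []                    # no face recognized; caller may retry
    shown = []
    for still_image in extract(viewer):     # group extraction module
        shown.append(display(still_image))  # image display module
    return shown

# Stand-in callables for illustration
shown = display_for_viewer(
    capture=lambda: "camera frame",
    recognize=lambda img: "viewer-1",
    extract=lambda viewer: ["img-a", "img-b"],
    display=lambda s: s,
)
print(shown)  # ['img-a', 'img-b']
```

In the actual apparatus each callable corresponds to one of the modules described below; the sketch only shows the order in which they act.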
- FIG. 1 is a view showing an external appearance of an electronic apparatus according to an embodiment.
- the electronic apparatus is realized, for example, as a notebook-type personal computer 10 .
- the computer 10 comprises a computer main body 11 and a display unit 12 .
- a display device comprising a liquid crystal display (LCD) 17 is built in the display unit 12 .
- the display unit 12 is attached to the computer main body 11 such that the display unit 12 is rotatable between an open position where the top surface of the computer main body 11 is exposed, and a closed position where the top surface of the computer main body 11 is covered.
- the display unit 12 further comprises a camera module 115 at an upper part of the LCD 17 .
- the camera module 115 is used in order to capture, for instance, an image of the user of the computer 10 , when the display unit 12 is in the open position.
- the computer main body 11 has a thin box-shaped housing.
- a keyboard 13 , a power button 14 for powering on/off the computer 10 , an input operation panel 15 , a touch pad 16 , and speakers 18 A and 18 B are disposed on the top surface of the housing of the computer main body 11 .
- Various operation buttons are provided on the input operation panel 15 .
- the right side surface of the computer main body 11 is provided with a USB connector 19 for connection to a USB cable or a USB device of, e.g. the universal serial bus (USB) 2.0 standard.
- the rear surface of the computer main body 11 is provided with an external display connection terminal (not shown) which supports, e.g. the high-definition multimedia interface (HDMI) standard. This external display connection terminal is used in order to output a digital video signal to an external display.
- FIG. 2 shows the system configuration of the computer 10 .
- the computer 10 comprises a central processing unit (CPU) 101 , a north bridge 102 , a main memory 103 , a south bridge 104 , a graphics processing unit (GPU) 105 , a video random access memory (VRAM) 105 A, a sound controller 106 , a basic input/output system-read only memory (BIOS-ROM) 107 , a local area network (LAN) controller 108 , a hard disk drive (HDD) 109 , an optical disc drive (ODD) 110 , a USB controller 111 , a wireless LAN controller 112 , an embedded controller/keyboard controller (EC/KBC) 113 , an electrically erasable programmable ROM (EEPROM) 114 , a camera module 115 , and a card controller 116 .
- the CPU 101 is a processor for controlling the operation of various components in the computer 10 .
- the CPU 101 executes an operating system (OS) 201 and various application programs, such as a content reproduction application program 202 , which are loaded from the HDD 109 into the main memory 103 .
- the content reproduction application program 202 is software for reproducing various digital contents, such as digital photos and home video, which are stored in, e.g. a digital versatile disc (DVD) that is set in, e.g. the ODD 110 .
- the content reproduction application program 202 also has a function of displaying a digital image, which is stored in the HDD 109 , like a so-called digital photo frame.
- the CPU 101 also executes a BIOS stored in the BIOS-ROM 107 .
- the BIOS is a program for hardware control.
- the north bridge 102 is a bridge device which connects a local bus of the CPU 101 and the south bridge 104 .
- the north bridge 102 comprises a memory controller which access-controls the main memory 103 .
- the north bridge 102 also has a function of executing communication with the GPU 105 via, e.g. a PCI EXPRESS serial bus.
- the GPU 105 is a display controller which controls the LCD 17 used as a display monitor of the computer 10 .
- a display signal which is generated by the GPU 105 , is sent to the LCD 17 .
- the GPU 105 can send a digital video signal to an external display device 1 via an HDMI control circuit 3 and an HDMI terminal 2 .
- the HDMI terminal 2 is the above-described external display connection terminal.
- the HDMI terminal 2 is capable of sending a non-compressed digital video signal and a digital audio signal to the external display device 1 , such as a TV, via a single cable.
- the HDMI control circuit 3 is an interface for sending a digital video signal to the external display device 1 , which is called “HDMI monitor”, via the HDMI terminal 2 .
- the south bridge 104 controls devices on a peripheral component interconnect (PCI) bus and devices on a low pin count (LPC) bus.
- the south bridge 104 comprises an integrated drive electronics (IDE) controller for controlling the HDD 109 and ODD 110 .
- the south bridge 104 also has a function of executing communication with the sound controller 106 .
- the sound controller 106 is a sound source device and outputs audio data, which is to be reproduced, to the speakers 18 A and 18 B or the HDMI control circuit 3 .
- the LAN controller 108 is a wired communication device which executes wired communication of, e.g. the IEEE 802.3 standard.
- the wireless LAN controller 112 is a wireless communication device which executes wireless communication of, e.g. the IEEE 802.11g standard.
- the USB controller 111 executes communication with an external device which supports, e.g. the USB 2.0 standard (the external device is connected via the USB connector 19 ). For example, the USB controller 111 executes communication for taking in digital images, which are managed by a digital camera that is an external device, and storing the digital images in the HDD 109 .
- the camera module 115 executes a capturing (imaging) process using a built-in camera.
- the camera module 115 generates image data by using, e.g. an image captured by the built-in camera and executes, e.g. communication for storing the image data in the main memory 103 or HDD 109 .
- the camera module 115 supplies the image data to various application programs such as the content reproduction application program 202 .
- the card controller 116 executes communication with a recording medium 20 A inserted in a card slot 20 .
- the card controller 116 executes, e.g. communication for reading an image file in an SD card (the recording medium 20 A), and storing the read image file in the HDD 109 .
- the EC/KBC 113 is a one-chip microcomputer in which an embedded controller for power management and a keyboard controller for controlling the keyboard 13 and touch pad 16 are integrated.
- the EC/KBC 113 has a function of powering on/off the computer 10 in accordance with the user's operation of the power button 14 .
- the content reproduction application program 202 comprises an indexing module 301 and a display controller 311 .
- the content reproduction application program 202 has an import mode for importing still images and a presentation mode for selectively presenting still images which are imported.
- the indexing module 301 is used in the import mode. Specifically, the indexing module 301 executes various processes relating to indexing for importing the still images 401 and creating index information 402 for searching for a target still image from among the still images 401 .
- the “import” of the still images 401 means taking the still images (still image data) 401 into the computer 10 , to be more specific, into the content reproduction application program 202 .
- the still images 401 may be images of frames constituting moving picture data.
- the indexing module 301 comprises a recording media detector 302 , an operator image extraction module 303 , an operator recognition module 304 , an image import module 305 , a grouping module 306 , an appearing person image extraction module 307 , an appearing person recognition module 308 , and an index information storing module 309 .
- the recording media detector 302 detects that the recording medium 20 A, which is the source of import of the still images 401 , has been connected. For example, the recording media detector 302 detects that the recording medium 20 A has been inserted in the card slot 20 .
- the source of import of the still images 401 is not limited to the recording medium 20 A, and may be a storage device in the computer 10 , an external storage device connected to the computer 10 , or some other computer connected to the computer 10 via a network.
- the recording media detector 302 detects, for example, that the storage device or the like has been connected (recognized), that files (new still image data) have been stored in a designated directory, or that an instruction has been issued by the user.
- the recording media detector 302 notifies the operator image extraction module 303 that the recording medium 20 A, or the like, has been detected.
- the operator image extraction module 303 analyzes user image (user image data) 403 generated by the camera module 115 , and extracts a face image of the operator of the computer 10 .
- the operator image extraction module 303 detects a face region from the user image 403 , and extracts the detected face region from the user image 403 .
- the detection of the face region can be executed, for example, by analyzing the features of the user image data 403 , and searching for a region having features similar to face image feature samples prepared in advance.
- the face image feature samples are feature data calculated by statistically processing face image features of many persons.
- the operator image extraction module 303 outputs the extracted face image to the operator recognition module 304 .
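- The feature-similarity search described above can be sketched as follows. This is a minimal Python illustration (a language choice not drawn from the patent); the 3-dimensional feature vectors are invented stand-ins for the statistically computed face image feature samples:

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def find_face_regions(region_features, face_samples, threshold=1.0):
    """Return the names of regions whose feature vector is close to any
    of the pre-computed face image feature samples, i.e. regions having
    features similar to the samples prepared in advance."""
    hits = []
    for name, feat in region_features.items():
        if any(euclidean(feat, s) <= threshold for s in face_samples):
            hits.append(name)
    return hits

# Hypothetical feature vectors for illustration only
samples = [[0.9, 0.1, 0.5]]
regions = {"r1": [0.88, 0.12, 0.49], "r2": [0.1, 0.9, 0.2]}
print(find_face_regions(regions, samples, threshold=0.1))  # ['r1']
```

A real implementation would scan candidate regions over the user image 403 and use far richer features; the sketch only shows the similarity test.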
- the operator recognition module 304 analyzes the face image extracted by the operator image extraction module 303 , and recognizes the person corresponding to the face image as the operator.
- the operator recognition module 304 notifies the image import module 305 of the completion of recognition of the operator.
- the operator recognition module 304 outputs the information of the recognized operator to the index information storing module 309 . If the face image of the operator cannot be detected or if the face image of the operator cannot be recognized, it may be possible to newly capture a user image 403 using the camera module 115 and to execute extraction and recognition of the operator's face image once again.
- the image import module 305 starts import of the still images 401 stored in the recording medium 20 A.
- the image import module 305 imports the still images 401 into the content reproduction application program 202 .
- the image import module 305 reads the still images 401 from the recording medium 20 A and stores the read still images 401 in the HDD 109 .
- the grouping module 306 classifies the still images 401 based on a predetermined classification rule, and creates groups.
- the grouping module 306 classifies the still images 401 , for example, based on time, location, event, etc. For example, if the interval between the date/time of capturing of one of two still images 401 which are successive on a time-series axis and the date/time of capturing of the other is greater than a predetermined time period, the grouping module 306 performs grouping by using the boundary between the two still images 401 as a break-point.
- the grouping module 306 detects a so-called scene change point, before and after which the features of images greatly change, and performs grouping by setting each of scenes to be one section.
- a logic-unit group created by classification is also called an event group.
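- The time-based break-point rule can be sketched as follows (a hedged Python illustration; the 2-hour threshold is an assumed value for the predetermined time period):

```python
from datetime import datetime, timedelta

def group_by_time_gap(capture_times, max_gap=timedelta(hours=2)):
    """Split a list of capture date/times into event groups.

    A new group starts whenever the interval between two successive
    images exceeds max_gap, i.e. the boundary between those two
    images is used as a break-point."""
    groups = []
    current = []
    previous = None
    for t in sorted(capture_times):
        if previous is not None and t - previous > max_gap:
            groups.append(current)
            current = []
        current.append(t)
        previous = t
    if current:
        groups.append(current)
    return groups

# Example: three photos in the morning, two in the evening -> two groups
times = [
    datetime(2012, 5, 1, 9, 0),
    datetime(2012, 5, 1, 9, 5),
    datetime(2012, 5, 1, 9, 30),
    datetime(2012, 5, 1, 18, 0),
    datetime(2012, 5, 1, 18, 10),
]
print([len(g) for g in group_by_time_gap(times)])  # [3, 2]
```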
- the appearing person image extraction module 307 analyzes the still image 401 , and extracts a face region in the still image 401 . For example, the appearing person image extraction module 307 detects the face region from the still image 401 , and extracts the detected face region from the still image data 401 . The appearing person image extraction module 307 outputs the extracted face image to the appearing person recognition module 308 .
- the appearing person recognition module 308 analyzes the face image extracted by the appearing person image extraction module 307 , and recognizes a person corresponding to the face image as the appearing person. In addition, the appearing person recognition module 308 generates classification information for classifying face images into those face images which are assumed to be associated with the same person. The appearing person recognition module 308 outputs the information of the recognized appearing person to the index information storing module 309 .
- the index information storing module 309 stores in a database 109 A, as index information 402 , the data which associates the still images 401 with the information of the operator recognized by the operator recognition module 304 , and the information of the appearing person recognized by the appearing person recognition module 308 .
- the above-mentioned operator is the owner of the still images 401 that are imported (e.g. the photographer of photos).
- the owner of the still images 401 can be associated with the still images 401 and registered.
- the index information storing module 309 also registers the information of the groups classified by the grouping module 306 by associating this information with the still images 401 .
- the information of the operator and the appearing person, which is associated with each of the still images is also used as the information of the person associated with the group (event group) comprising the still images.
- the database 109 A is a storage area prepared in the HDD 109 for storing the index information 402 .
- FIG. 4 shows a structure example of the index information 402 in the database 109 A.
- the index information 402 comprises image information 402 A and photographer information 402 B.
- the image information 402 A is stored in association with each of images imported by the image import module 305 .
- the photographer information 402 B is information of the photographer of images (import operator).
- the image information 402 A comprises an image ID, date/time of capturing, face image information, text information, group information, and a photographer ID, in association with each of images.
- the image ID is indicative of identification information which is uniquely allocated to each of still images (still image data) 401 .
- the date/time of capturing is indicative of time information indicating the date/time of capturing of each still image 401 . If a still image is one of frames constituting moving picture data, a value (time stamp information), which is calculated by adding an elapsed time from the first frame, which is based on the frame number, to the date/time of capturing of the moving picture data, is set as the date/time of capturing of this still image. In the meantime, the date/time of capturing may be a date/time of storage or a date/time of update of the still image 401 .
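- For a frame of moving picture data, the date/time of capturing described above amounts to the video's capture date/time plus the elapsed time derived from the frame number. A Python sketch (the 30 fps frame rate is an assumed value, not taken from the patent):

```python
from datetime import datetime, timedelta

def frame_capture_time(video_start, frame_number, fps=30.0):
    """Date/time of capturing for one frame: the capture date/time of
    the moving picture data plus the elapsed time from the first frame
    (frame 0), derived from the frame number."""
    return video_start + timedelta(seconds=frame_number / fps)

start = datetime(2012, 5, 1, 12, 0, 0)
print(frame_capture_time(start, 900))  # 2012-05-01 12:00:30
```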
- the face image information is indicative of information of the face image in each still image 401 . If a still image 401 comprises a plurality of face images, as many face image information items as the number of face images are stored.
- the face image information comprises a face image, frontality, size, and classification information.
- the face image is indicative of the face image recognized by the appearing person recognition module 308 .
- the frontality is indicative of the degree to which the face image is captured from the frontal direction.
- the size is indicative of the size of the face image (e.g. pixel-unit image size).
- the classification information is indicative of a result of classification of face images, which are recognized by the appearing person recognition module 308 and classified into face images which are assumed to be associated with the same person. Accordingly, the classification information is indicative of identification information (personal ID) which is uniquely allocated to a person.
- the text information is indicative of information of characters in each still image 401 .
- the indexing module 301 may be provided with a character recognition function for detecting a character region (e.g. characters on an advertising display) in each still image 401 and recognizing the characters in the detected region.
- the characters, which have been recognized by using the character recognition function, are stored as the text information.
- the detection (recognition) of characters is executed, for example, by searching for a region having a feature amount similar to a feature amount of each character which is prepared in advance.
- the group information is indicative of information (group ID) for identifying groups created by the grouping module 306 .
- the photographer ID is indicative of identification information which is uniquely allocated to the person recognized by the operator recognition module 304 .
- the identification information which is allocated to the operator who has executed the operation of importing the still images 401 , is stored as the photographer ID.
- when the same person executes the import operation, the same photographer ID corresponding to this person is set.
- the information of the photographer corresponding to this photographer ID is stored as the photographer information 402 B.
- the photographer information 402 B is indicative of the information of the operator (photographer) who executes the operation of importing the still image data 401 .
- the photographer information 402 B comprises a photographer ID, and face image information.
- the photographer ID is indicative of the identification information which is allocated to the operator who has executed the operation of importing the still image data 401 .
- the photographer ID in the image information 402 A corresponds to the photographer ID in the photographer information 402 B.
- the face image information is indicative of information relating to the face image recognized by the operator recognition module 304 , that is, the face image of the operator (photographer).
- the face image information comprises a face image, frontality, size, and classification information.
- the face image is indicative of the face image recognized by the operator recognition module 304 .
- the frontality is indicative of the degree to which the face image is captured from the frontal direction.
- the size is indicative of the size of the face image (e.g. pixel-unit image size).
- the classification information is indicative of a result of classification of face images, which are recognized by the operator recognition module 304 and classified into face images which are assumed to be associated with the same person. Accordingly, the classification information is indicative of identification information (personal ID) which is uniquely allocated to a person.
- From the index information 402 , it can be understood, with respect to each still image 401 , who appears in the image, whether text is comprised in the image, to which group the image belongs, and who has captured the image.
- Using the index information 402 , it is possible to quickly search, from among the still images 401 stored in the HDD 109 , for still images 401 in which a target person appears, still images 401 in which the target person does not appear, still images 401 in which the target person appears and text appears, and still images 401 captured by the target person.
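- The searches enabled by the index information 402 can be sketched with a simplified in-memory index. This Python illustration invents its field names and sample records, and flattens the structure of FIG. 4 for brevity:

```python
from dataclasses import dataclass

@dataclass
class ImageInfo:
    """Simplified image information record (cf. image information 402A)."""
    image_id: str
    capture_time: str
    person_ids: list      # classification info of appearing persons
    has_text: bool        # whether text information is present
    group_id: int         # event group the image belongs to
    photographer_id: str  # operator who imported the image

index = [
    ImageInfo("img1", "2012-05-01 09:00", ["p1"], False, 0, "p2"),
    ImageInfo("img2", "2012-05-01 09:05", ["p2", "p3"], True, 0, "p1"),
    ImageInfo("img3", "2012-05-01 18:00", [], False, 1, "p3"),
]

def images_with_person(index, pid):
    """Still images in which the target person appears."""
    return [i.image_id for i in index if pid in i.person_ids]

def images_by_photographer(index, pid):
    """Still images captured (imported) by the target person."""
    return [i.image_id for i in index if i.photographer_id == pid]

print(images_with_person(index, "p2"))      # ['img2']
print(images_by_photographer(index, "p3"))  # ['img3']
```

The real index information is stored in the database 109 A rather than in memory, and each face image entry also carries frontality, size, and classification information.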
- the display controller 311 is used in the presentation mode. Specifically, using the index information 402 , the display controller 311 selects, from the still images 401 in the HDD 109 , still images which meet a predetermined selection condition, and successively displays the selected still images.
- the display controller 311 may not only perform simple successive display of selected still images, but also may display the selected still images by applying thereto a transition effect at a time of a change of display.
- the display controller 311 comprises a viewer image extraction module 312 , a viewer recognition module 313 , a group extraction module 314 , an image selection module 315 , and an image display module 316 .
- the viewer image extraction module 312 analyzes, for example, the user image 403 generated by the camera module 115 during the period of the presentation mode, and extracts the face image of the viewer in the user image 403 .
- the viewer image extraction module 312 detects, for example, a face region from the user image 403 , and extracts the detected face region from the user image 403 .
- the viewer image extraction module 312 outputs the extracted face image to the viewer recognition module 313 .
- the viewer recognition module 313 analyzes the face image extracted by the viewer image extraction module 312 , and recognizes the person, who corresponds to this face image, as the viewer.
- the viewer recognition module 313 outputs the information of the recognized viewer to the group extraction module 314 . If the face image of the viewer cannot be detected or cannot be recognized, it may be possible to newly capture a user image 403 by using the camera module 115 and to execute extraction and recognition of the viewer's face image once again.
- the content reproduction application 202 may transition to a power-saving mode in which operation with reduced power consumption is enabled.
- the content reproduction application 202 , which operates in the power-saving mode, powers off the LCD 17 .
- when a plurality of persons view the screen, a plurality of face images may be extracted from the user image 403 by the viewer image extraction module 312 , and a plurality of viewers may be recognized by the viewer recognition module 313 . In this case, the viewer recognition module 313 outputs the information of each of the recognized viewers to the group extraction module 314 .
- the group extraction module 314 extracts, from the event groups in the HDD 109 , groups (event groups) comprising a still image associated with the recognized viewer.
- the group extraction module 314 extracts, from the event groups, event groups comprising at least one of a still image comprising the face image of the viewer and a still image imported by the viewer, as the event groups relating to the present viewer.
- With each group, the face image of an appearing person in at least one of the still images in the group, and the face image of the operator who imported at least one of the still images in the group, may be associated.
- the group extraction module 314 may extract the groups with which the appearing person's face image or the operator's face image that corresponds to the viewer's face image is associated.
- When there are a plurality of viewers, the group extraction module 314 extracts, from the groups, those groups in which the face images of all viewers are associated as the appearing person's face image or the photographer's face image (the face image of the operator who executed the import operation), as the groups associated with those viewers.
- the group extraction module 314 outputs the information of the extracted groups to the image selection module 315 .
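- The extraction rule above (a group is associated with the viewers when every viewer's face matches an appearing person or a photographer of that group) can be sketched as a subset test. This Python illustration restates the FIG. 5 example in simplified form, with person IDs as strings:

```python
def extract_groups(groups, viewer_ids):
    """Extract the event groups associated with ALL recognized viewers.

    groups: dict mapping group ID -> set of person IDs associated with
    the group (appearing persons plus import operators/photographers)."""
    viewers = set(viewer_ids)
    return [gid for gid, persons in groups.items() if viewers <= persons]

# Person sets restating the FIG. 5 example (groups 501, 502, 503)
groups = {
    501: {"601", "602"},
    502: {"601", "603", "604"},
    503: {"602", "605", "606", "607"},
}
print(extract_groups(groups, ["601", "602"]))  # [501]
print(extract_groups(groups, ["601"]))         # [501, 502]
```

With viewers 601 and 602 only group 501 is associated with both, matching the preferential extraction described below for FIG. 5.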
- Referring to FIG. 5 , an example of the groups extracted by the group extraction module 314 is described.
- A case is assumed in which the still images 401 are classified by the grouping module 306 into three sets of still images 401 A, 401 B and 401 C.
- the still images 401 A, 401 B and 401 C belong to event groups 501 , 502 and 503 , respectively.
- Persons 501 A, who are associated with the event group 501 , comprise an appearing person 601 corresponding to a face image in the still images 401 A, and a photographer 602 of the still images 401 A.
- This indicates that the event group 501 comprises at least one still image 401 A which comprises the face image of the appearing person 601 , and that all still images 401 A in the event group 501 have been imported by the photographer 602 .
- It may also be the case that each of the appearing person 601 and the photographer 602 is both an appearing person and a photographer.
- In that case, the event group 501 comprises at least one still image 401 A comprising the face image of the appearing person 601 and at least one still image 401 A comprising the face image of the photographer (appearing person) 602 , and some still images 401 A in the event group 501 are imported by the photographer 602 while some other still images 401 A are imported by the appearing person (photographer) 601 .
- Persons 502 A, who are associated with the event group 502 , comprise appearing persons 601 and 603 corresponding to face images in the still images 401 B, and a photographer 604 of the still images 401 B.
- This indicates that the event group 502 comprises at least one still image 401 B which comprises the face image of the appearing person 601 , and at least one still image 401 B which comprises the face image of the appearing person 603 (one still image 401 B comprising both the appearing persons 601 and 603 may be comprised in the event group 502 ), and that all still images 401 B in the event group 502 have been imported by the photographer 604 .
- As with the event group 501 , it may be the case that each of the appearing persons 601 , 603 and the photographer 604 is both an appearing person and a photographer.
- Persons 503 A, who are associated with the event group 503 , comprise appearing persons 602 and 605 corresponding to face images in the still images 401 C, and photographers 606 and 607 of the still images 401 C.
- This indicates that the event group 503 comprises at least one still image 401 C which comprises the face image of the appearing person 602 , and at least one still image 401 C which comprises the face image of the appearing person 605 (one still image 401 C comprising both the appearing persons 602 and 605 may be comprised in the event group 503 ), and that some still images 401 C in the event group 503 are imported by the photographer 606 while some other still images 401 C are imported by the photographer 607 .
- As with the event group 501 , it may be the case that each of the appearing persons 602 , 605 and the photographers 606 , 607 is both an appearing person and a photographer. This content is stored in the database 109 A as the index information 402 .
- The group extraction module 314 preferentially extracts the event group 501, which is associated with both the viewers 601 and 602, from the event groups 501, 502 and 503 in the HDD 109.
- The group extraction module 314 may also extract, from the HDD 109, the event groups 502 and 503, which are associated with either the viewer 601 or the viewer 602.
- The image selection module 315 selects still images, which meet a predetermined condition, from the still images in the extracted group. Needless to say, the image selection module 315 may select all still images in the extracted group.
- The image display module 316 displays the still images, which are selected by the image selection module 315, on the screen in a predetermined order. Referring to FIG. 6, a description is given of the extraction of groups by the group extraction module 314, the selection of still images by the image selection module 315, and the display of images by the image display module 316.
- In FIG. 6, part (A) is a first conceptual view illustrating the state in which the still images 401 stored in the HDD 109 are arranged on a time-series axis in the order of date/time of capturing, based on the index information 402 stored in the database 109A, and are classified into groups.
- The still images 401 stored in the HDD 109 are classified into three groups, namely, a group n, a group n+1 and a group n+2.
- Each of the boxes with numerals, such as 1, 2, 3, . . . , in each of the groups is representative of one still image 401, and these numerals are indicative of the order of dates/times of capturing in each group.
- A circle in the box of the still image 401, which is indicated by symbol c1, indicates that the still image 401 comprises the face image of the viewer.
- A triangle in the box of the still image 401, which is indicated by symbol c2, indicates that the still image 401 is captured (imported) by the viewer.
- A diamond in the box of the still image 401, which is indicated by symbol c3, indicates that the still image 401 is neither an image comprising the face image of the viewer nor an image captured (imported) by the viewer.
- Part (A) of FIG. 6 indicates that, of the three groups, i.e. group n, group n+1 and group n+2, the groups which comprise either a still image 401 comprising the face image of the viewer or a still image 401 captured by the viewer are two groups, i.e. group n and group n+2.
- The group extraction module 314 therefore extracts from the HDD 109 the group n and the group n+2 as groups comprising still images that are targets of display.
- Still images 401, which belong to the same group as a still image 401 comprising the face image of the viewer or a still image 401 captured by the viewer, are determined to be images having a relation to the viewer.
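The extraction rule above (a group becomes a display target when it contains at least one still image that either comprises the viewer's face image or was imported by the viewer) might be sketched as follows; the image records are hypothetical stand-ins for the index information 402:

```python
def extract_groups(groups, viewer_id):
    """Return the groups containing at least one still image related to the viewer.

    `groups` maps a group name to a list of (appearing_person_ids, importer_id)
    records, a hypothetical stand-in for the index information 402.
    """
    def related(record):
        appearing, importer = record
        # circle (c1): the viewer's face appears; triangle (c2): the viewer imported it
        return viewer_id in appearing or importer == viewer_id

    return {name: records for name, records in groups.items()
            if any(related(r) for r in records)}

# Mirroring part (A) of FIG. 6: group n and group n+2 contain images related
# to viewer 601, group n+1 does not.
groups = {
    "n":   [({601}, 999), (set(), 999)],   # comprises the viewer's face image
    "n+1": [(set(), 999)],                 # diamond images only (c3)
    "n+2": [(set(), 601), ({602}, 999)],   # comprises an image imported by the viewer
}
print(sorted(extract_groups(groups, 601)))  # ['n', 'n+2']
```

Note that the whole group is returned, so diamond images (unrelated in themselves) are carried along with the related images of their group.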
- The image selection module 315 selects, from the still images 401 belonging to the extracted groups, still images 401 whose upper limit number is set, for example, at a predetermined number. At this time, in principle, the image selection module 315 preferentially selects, for example, the still image 401 comprising the face image of the viewer and the still image 401 captured by the viewer, but the image selection module 315 also selects images in which no person appears, such as images of scenes.
- Part (B) is a second conceptual view showing a selection result of the still images 401 by the image selection module 315.
- An image indicated by symbol d1, which has been imported by the viewer, is selected, and an image indicated by symbol d2, which is neither an image comprising the face image of the viewer nor an image imported by the viewer, is also selected.
- Likewise, an image indicated by symbol d3, which has been imported by the viewer, is selected, and an image indicated by symbol d4, which is neither an image comprising the face image of the viewer nor an image imported by the viewer, is also selected.
- The images indicated by symbols d1 to d4 are images which cannot be selected when a person is designated as search key information in conventional image search methods.
- The content reproduction application program 202 thus executes an effective image search with high-level intelligence, thereby displaying a slide show from which “to where and with whom” can be understood.
- Part (C) is a third conceptual view showing the order of display of the still images 401 by the image display module 316.
- The image display module 316 arranges the groups in a time sequence from the present to the past (group n+2 → group n), thereby meeting the desire to view the latest images first, and arranges the images in each group such that the images can be enjoyed more naturally.
- Different algorithms may be adopted for the arrangement of groups and for the arrangement of images in each group.
- The image display module 316 places, for example, the image with no person, which is indicated by symbol d4 in the group n+2, that is, an image that is assumed to be a landscape image, at the first position, so that “to where” can first be understood. Following this image, the other images are arranged in the order of date/time of capturing. Similarly, as regards the group n, the image indicated by symbol d2, which is assumed to be a landscape image, is placed at the first position, and the other images are subsequently arranged in the order of date/time of capturing.
- The content reproduction application program 202 thus executes effective image display with high-level intelligence, for example by placing an image which is suitable for grasping a scene such as “travel” at the first position as a representative image. If a theme such as “travel” is not clearly set, the images in each group may be arranged in the order of date/time of capturing.
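The selection and ordering rules just described might be sketched as follows. The `(timestamp, viewer_related, has_person)` records are hypothetical stand-ins for the index information 402, and "any image with no person is a landscape candidate" is the simplifying assumption the text itself makes:

```python
def select_images(records, limit):
    """Select up to `limit` images, preferring those related to the viewer while
    also keeping person-free images (assumed landscapes). Each record is a
    hypothetical (timestamp, viewer_related, has_person) tuple."""
    preferred = [r for r in records if r[1]]        # viewer's face, or imported by viewer
    scenes = [r for r in records if not r[2]]       # no person: landscape candidates
    picked = []
    for r in preferred + scenes + records:          # preference order, then the rest
        if r not in picked:
            picked.append(r)
        if len(picked) == limit:
            break
    return picked

def display_order(groups):
    """Arrange groups from the present to the past, and within each group place a
    person-free image (assumed landscape) first, then date/time of capturing."""
    ordered = []
    newest_first = sorted(groups, key=lambda g: max(r[0] for r in groups[g]), reverse=True)
    for name in newest_first:
        records = sorted(groups[name])              # order of date/time of capturing
        landscapes = [r for r in records if not r[2]]
        if landscapes:
            first = landscapes[0]
            records = [first] + [r for r in records if r is not first]
        ordered.extend(records)
    return ordered
```

With the groups of FIG. 6, `display_order` emits the landscape d4 of group n+2 first, then that group in date order, and then group n headed by its landscape d2.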
- The content reproduction application 202 first determines whether the recording medium 20A has been detected (block B101). If the recording medium 20A has been detected (YES in block B101), the content reproduction application 202 receives the user image (user image data) 403 which is generated by the camera module 115 (block B102).
- Then, the content reproduction application 202 recognizes the face image of the user (operator), which is comprised in the received user image 403 (block B103). Specifically, the content reproduction application 202 detects the face region comprised in the received user image 403, and extracts the face region. The content reproduction application 202 analyzes the extracted face region (face image) and recognizes the corresponding person. Then, the content reproduction application 202 determines whether the recognition of the user's face image has successfully been executed (block B104).
- If the recognition has successfully been executed (YES in block B104), the content reproduction application 202 imports the still images 401 that are stored in the recording medium 20A (block B105). Responding to the completion of the import, the content reproduction application 202 classifies and groups the imported still images 401 (block B106). Then, the content reproduction application 202 recognizes the person (appearing person) corresponding to each face image in the imported images (block B107).
- The content reproduction application 202 then stores in the database 109A, as the index information 402, the information relating to the user (operator) recognized in block B103 (photographer information 402B), and the information relating to the appearing person recognized in block B107 (image information 402A) (block B108).
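The grouping of block B106 relies on the time-interval rule used by the grouping module 306: a new group starts whenever two successive capture times are further apart than a threshold. A minimal sketch follows; the one-hour threshold is an assumed example, not a value given in the embodiment:

```python
def group_by_time_gap(capture_times, gap_seconds=3600):
    """Block B106 sketch: sort images by date/time of capturing and start a new
    group whenever two successive images are more than `gap_seconds` apart.
    The one-hour default is an assumed example threshold."""
    groups = []
    for t in sorted(capture_times):
        if groups and t - groups[-1][-1] <= gap_seconds:
            groups[-1].append(t)   # close enough in time: same event group
        else:
            groups.append([t])     # large gap: use it as a break-point
    return groups

print(group_by_time_gap([0, 100, 200, 50000, 50060]))  # [[0, 100, 200], [50000, 50060]]
```

In the real module the records would carry the image identifiers alongside the timestamps, and location or scene-change information could refine the break-points, as the description of the grouping module 306 notes.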
- The content reproduction application 202 may then automatically transition to the presentation mode.
- The content reproduction application 202 extracts the groups which are associated with the user (viewer) who is recognized in block B103 or in block B113 (to be described later) (block B109).
- The content reproduction application 202 then selects still images, which meet a predetermined condition, from the still images in the extracted groups (block B110).
- The content reproduction application 202 successively displays the selected still images on the screen in a predetermined order (block B111).
- The content reproduction application 202 receives the user image 403 from the camera module 115 (block B112).
- The content reproduction application 202 recognizes the face image of the user (viewer) in the received user image 403 (block B113). Then, the content reproduction application 202 determines whether the recognition of the user's face image has successfully been executed (block B114).
- If the recognition has successfully been executed (YES in block B114), the content reproduction application 202 determines whether the user, who has been recognized in block B113, is different from the previously recognized user (block B115). If the user recognized in block B113 is different from the previously recognized user (YES in block B115), the content reproduction application 202 returns to block B109. Specifically, the content reproduction application 202 displays on the screen proper still images for the newly recognized user by the process beginning with block B109.
- If the user recognized in block B113 is the same as the previously recognized user (NO in block B115), the content reproduction application 202 returns to block B112. Specifically, the content reproduction application 202 continues to display on the screen the still images which are currently being displayed.
- If the recognition of the user's face image has failed (NO in block B114), the content reproduction application 202 determines whether a user, who views the still image displayed on the screen, is absent (block B116). For example, if a region which is assumed to comprise a face image is not detected from the user image 403, or if the face image recognition in block B113 has failed a predetermined number of times or more, the content reproduction application 202 determines that a user, who views the still image displayed on the screen, is absent. If it is determined that such a user is absent (YES in block B116), the content reproduction application 202 transitions to a power-saving mode which enables operation with low power consumption (block B117). The content reproduction application 202, which operates in the power-saving mode, powers off the LCD 17, for example.
- If it is determined that such a user is present (NO in block B116), the content reproduction application 202 returns to block B112 in order to recognize the user (viewer).
- When the recognition of the viewer's face image fails, the content reproduction application 202 may store, for a predetermined time period, the information of the viewer at the time when the previous recognition was successfully executed, and may display still images based on this information. Thereby, proper images can be displayed on the screen even if the face image of the viewer could not clearly be captured and the recognition failed, for example in a case where the viewer has moved the face or the viewer's face has been hidden by some object.
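The presentation-mode loop of blocks B109 to B117, including the tolerance for momentary recognition failures described above, might be sketched as follows. All function names here are hypothetical stand-ins, not APIs defined in the embodiment:

```python
def presentation_loop(recognize_viewer, show_slideshow, power_save,
                      frames, max_failures=3):
    """Sketch of blocks B109-B117.

    recognize_viewer(frame) -> viewer id, or None when face recognition fails
    show_slideshow(viewer)  -> blocks B109-B111: extract groups, select, display
    power_save()            -> block B117: e.g. power off the LCD
    """
    current = None   # last successfully recognized viewer (kept while recognition fails)
    failures = 0
    for frame in frames:                  # block B112: receive the user image 403
        viewer = recognize_viewer(frame)  # block B113: recognize the face image
        if viewer is None:                # NO in block B114
            failures += 1
            if failures >= max_failures:  # block B116: viewer assumed absent
                power_save()
                failures = 0
            continue                      # meanwhile keep the cached viewer's slide show
        failures = 0
        if viewer != current:             # YES in block B115: the viewer has changed
            current = viewer
            show_slideshow(current)       # restart from block B109 for the new viewer
    return current
```

A single failed frame therefore neither blanks the screen nor switches the slide show; only a run of failures (the "predetermined number of times or more") triggers the power-saving mode.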
- As described above, the content reproduction application 202 stores in the database 109A, as the index information 402, the information of the user (operator) who imports the still images (still image data) 401 and the information of the person (appearing person) corresponding to the face image in each still image 401, and the content reproduction application 202 stores the still images 401 in the HDD 109.
- The content reproduction application 202 extracts, in accordance with the user (viewer) who views the still images displayed on the screen, the groups comprising the still images associated with this viewer from the HDD 109, and displays the still images comprised in these groups on the screen.
- Thereby, the content reproduction application 202 can extract the still images relating to the viewer (e.g. a still image comprising the face image of the viewer, or a still image imported by the viewer) from the still images 401 in the HDD 109.
- While generating the index information 402, the content reproduction application 202 can execute, in parallel, the process of displaying the still images corresponding to the viewer on the screen. Moreover, the above-described generation of the index information 402 and the display of the still images corresponding to the viewer can be realized without operations by the user. In short, it is possible to realize a photo frame function which can manage photos (still image data) of users and present photos corresponding to a viewer (or viewers), while maintaining the convenience for the user and the simplicity of the apparatus.
- The content reproduction application 202 may be provided with a function of manually setting or correcting the information relating to the operator. In addition, the content reproduction application 202 may be provided with a function of manually designating the viewer (e.g. designating a “who” mode or an “anyone” mode).
- The camera module 115, which generates the user image data 403 for recognizing the operator and viewer, is not limited to the camera module built in the computer 10, and may be, for example, an external Web camera connected to the computer 10 via the USB connector 19.
- The still images relating to the user who is the viewer can thus be presented without executing a time-consuming user operation of, e.g. selecting favorite image files.
- The content reproduction application 202 can display, in accordance with the change of the viewer, not only photos in which the viewer appears, but also photos taken by the viewer. Furthermore, the content reproduction application 202 can display, in addition to the photos in which the viewer appears and the photos taken by the viewer, photos relating to these photos, that is, photos belonging to the same event group as these photos. Therefore, the viewer can view the photos relating to the viewer in units of an event group, which is a logical unit. Besides, since the content reproduction application 202 extracts photos relating to the viewer, it is possible to exclude photos, etc. which the viewer does not want a third person to view.
- All the procedures of the image display process according to the present embodiment may be executed by software.
- The same advantageous effects as with the present embodiment can easily be obtained simply by installing a program, which executes the procedures of the image display process, into an ordinary computer through a computer-readable storage medium.
Abstract
According to one embodiment, an electronic apparatus includes a viewer image generating module, a viewer recognition module, a group extraction module, and an image display module. The viewer image generating module generates an image of a viewer by capturing the image of the viewer. The viewer recognition module detects a face image in the generated image and recognizes the viewer corresponding to the detected face image. The group extraction module extracts, from a plurality of groups each including still images, groups including at least one of a still image including the face image of the viewer and a still image imported by the viewer. The image display module displays still images in the extracted groups on a screen.
Description
- This application is a continuation of U.S. patent application Ser. No. 12/905,015, filed on Oct. 14, 2010, which is based upon and claims the benefit of priority from Japanese Patent Application No. 2009-255313, filed Nov. 6, 2009; the entire contents of these applications are incorporated herein by reference.
- Embodiments described herein relate generally to an electronic apparatus which displays an image, and an image display method applied to the electronic apparatus.
- In recent years, video reproduction apparatuses, which are called digital photo frames, have been gaining in popularity. The digital photo frame has, for example, a function of successively displaying, at regular intervals, still images in a storage medium connected to the digital photo frame. In general, personal computers, digital cameras, etc., as well as the digital photo frames, have the function of successively displaying still images at regular intervals.
- Jpn. Pat. Appln. KOKAI Publication No. 2009-171176 discloses a reproduction apparatus which recognizes a face image of a person captured by a camera, and displays favorite image files or audio files which are registered in association with the face image if the recognized face image is a registered face image. In this reproduction apparatus, the user's face image and image files and audio files selected by the user are registered in advance.
- In the reproduction apparatus of KOKAI Publication No. 2009-171176, pre-registered image files or audio files are reproduced in accordance with the recognized user's face image. Thus, if the files stored in the reproduction apparatus have been updated or if the files to be reproduced are to be changed, the user is required to change once again the files that are registered as favorites. In addition, if the number of files stored in the reproduction apparatus is very large, it may be time-consuming for the user to select, from the many files, some files which are to be registered.
- A general architecture that implements the various features of the embodiments will now be described with reference to the drawings. The drawings and the associated descriptions are provided to illustrate the embodiments and not to limit the scope of the invention.
- FIG. 1 shows an exemplary external appearance of an electronic apparatus according to an embodiment.
- FIG. 2 shows an exemplary system configuration of the electronic apparatus according to the embodiment.
- FIG. 3 is an exemplary block diagram showing the functional structure of a content reproduction application program which runs on the electronic apparatus according to the embodiment.
- FIG. 4 shows an example of the structure of index information used by the content reproduction application program of FIG. 3.
- FIG. 5 is an exemplary conceptual view for explaining an example of the operation of group extraction which is executed by the content reproduction application program of FIG. 3.
- FIG. 6 is an exemplary conceptual view for explaining an example of the operation of image selection and image display, which is executed by the content reproduction application program of FIG. 3.
- FIG. 7 is an exemplary flowchart illustrating the procedure of an image display process executed by the content reproduction application program of FIG. 3.
- Various embodiments will be described hereinafter with reference to the accompanying drawings.
- In general, according to one embodiment, an electronic apparatus comprises a viewer image generating module, a viewer recognition module, a group extraction module, and an image display module. The viewer image generating module generates an image of a viewer by capturing the image of the viewer. The viewer recognition module detects a face image in the generated image and recognizes the viewer corresponding to the detected face image. The group extraction module extracts, from a plurality of groups each comprising still images, groups comprising at least one of a still image comprising the face image of the viewer and a still image imported by the viewer. The image display module displays still images in the extracted groups on a screen.
- FIG. 1 is a view showing an external appearance of an electronic apparatus according to an embodiment. The electronic apparatus is realized, for example, as a notebook-type personal computer 10.
- As shown in FIG. 1, the computer 10 comprises a computer main body 11 and a display unit 12. A display device comprising a liquid crystal display (LCD) 17 is built in the display unit 12. The display unit 12 is attached to the computer main body 11 such that the display unit 12 is rotatable between an open position where the top surface of the computer main body 11 is exposed, and a closed position where the top surface of the computer main body 11 is covered. The display unit 12 further comprises a camera module 115 at an upper part of the LCD 17. The camera module 115 is used in order to capture, for instance, an image of the user of the computer 10, when the display unit 12 is in the open position.
- The computer main body 11 has a thin box-shaped housing. A keyboard 13, a power button 14 for powering on/off the computer 10, an input operation panel 15, a touch pad 16, and speakers are arranged on the top surface of the computer main body 11. Various operation buttons are provided on the input operation panel 15.
- The right side surface of the computer main body 11 is provided with a USB connector 19 for connection to a USB cable or a USB device of, e.g. the universal serial bus (USB) 2.0 standard. Further, the rear surface of the computer main body 11 is provided with an external display connection terminal (not shown) which supports, e.g. the high-definition multimedia interface (HDMI) standard. This external display connection terminal is used in order to output a digital video signal to an external display.
- FIG. 2 shows the system configuration of the computer 10.
computer 10, as shown inFIG. 2 , comprises a central processing unit (CPU) 101, anorth bridge 102, amain memory 103, asouth bridge 104, a graphics processing unit (GPU) 105, a video random access memory (VRAM) 105A, asound controller 106, a basic input/output system-read only memory (BIOS-ROM) 107, a local area network (LAN)controller 108, a hard disk drive (HDD) 109, an optical disc drive (ODD) 110, aUSB controller 111, awireless LAN controller 112, an embedded controller/keyboard controller (EC/KBC) 113, an electrically erasable programmable ROM (EEPROM) 114, acamera module 115, and acard controller 116. - The
CPU 101 is a processor for controlling the operation of various components in thecomputer 10. TheCPU 101 executes an operating system (OS) 201 and various application programs, such as a contentreproduction application program 202, which are loaded from theHDD 109 into themain memory 103. The contentreproduction application program 202 is software for reproducing various digital contents, such as digital photos and home video, which are stored in, e.g. a digital versatile disc (DVD) that is set in, e.g. the ODD 110. The contentreproduction application program 202 also has a function of displaying a digital image, which is stored in the HDD 109, like a so-called digital photo frame. TheCPU 101 also executes a BIOS stored in the BIOS-ROM 107. The BIOS is a program for hardware control. - The
north bridge 102 is a bridge device which connects a local bus of theCPU 101 and thesouth bridge 104. Thenorth bridge 102 comprises a memory controller which access-controls themain memory 103. Thenorth bridge 102 also has a function of executing communication with theGPU 105 via, e.g. a PCI EXPRESS serial bus. - The GPU 105 is a display controller which controls the
LCD 17 used as a display monitor of thecomputer 10. A display signal, which is generated by theGPU 105, is sent to theLCD 17. In addition, theGPU 105 can send a digital video signal to anexternal display device 1 via anHDMI control circuit 3 and anHDMI terminal 2. - The
HDMI terminal 2 is the above-described external display connection terminal. TheHDMI terminal 2 is capable of sending a non-compressed digital video signal and a digital audio signal to theexternal display device 1, such as a TV, via a single cable. TheHDMI control circuit 3 is an interface for sending a digital video signal to theexternal display device 1, which is called “HDMI monitor”, via theHDMI terminal 2. - The
south bridge 104 controls devices on a peripheral component interconnect (PCI) bus and devices on a low pin count (LPC) bus. Thesouth bridge 104 comprises an integrated drive electronics (IDE) controller for controlling theHDD 109 andODD 110. Thesouth bridge 104 also has a function of executing communication with thesound controller 106. - The
sound controller 106 is a sound source device and outputs audio data, which is to be reproduced, to thespeakers HDMI control circuit 3. - The
LAN controller 108 is a wired communication device which executes wired communication of, e.g. the IEEE 802.3 standard. On the other hand, thewireless LAN controller 112 is a wireless communication device which executes wireless communication of, e.g. the IEEE 802.11g standard. TheUSB controller 113 executes communication with an external device which supports, e.g. the USB 2.0 standard (the external device is connected via the USB connector 19). For example, theUSB controller 113 executes communication when taking in digital images, which are managed by a digital camera that is an external device, and to store the digital images in theHDD 109. - The
camera module 115 executes a capturing (imaging) process using a built-in camera. Thecamera module 115 generates image data by using, e.g. an image captured by the built-in camera and executes, e.g. communication for storing the image data in themain memory 103 orHDD 109. In addition, thecamera module 115 supplies the image data to various application programs such as the contentreproduction application program 202. - The
card controller 116 executes communication with arecording medium 20A inserted in acard slot 20. For example, thecard controller 116 executes, e.g. communication for reading an image file in an SD card (therecording medium 20A), and storing the read image file in theHDD 109. - The EC/
KBC 113 is a one-chip microcomputer in which an embedded controller for power management and a keyboard controller for controlling thekeyboard 13 andtouch pad 16 are integrated. The EC/KBC 113 has a function of powering on/off thecomputer 10 in accordance with the user's operation of thepower button 14. - Next, referring to
FIG. 3, a description of the functional structure of the content reproduction application 202 which runs on the computer 10 is given. Of the functions of the content reproduction application 202, a description is given of an example of the structure for realizing the function of displaying a digital image (still image data) 401 stored in the HDD 109 like a so-called digital photo frame. The content reproduction application program 202 comprises an indexing module 301 and a display controller 311. The content reproduction application program 202 has an import mode for importing still images and a presentation mode for selectively presenting still images which are imported.
- The indexing module 301 is used in the import mode. Specifically, the indexing module 301 executes various processes relating to indexing for importing the still images 401 and creating index information 402 for searching for a target still image from among the still images 401. The “import” of the still images 401 means taking the still images (still image data) 401 into the computer 10, more specifically, taking the still images 401 into the content reproduction application program 202. In addition, the still images 401 may be images of frames constituting moving picture data.
- The indexing module 301 comprises a recording media detector 302, an operator image extraction module 303, an operator recognition module 304, an image import module 305, a grouping module 306, an appearing person image extraction module 307, an appearing person recognition module 308, and an index information storing module 309. The recording media detector 302 detects that the recording medium 20A, which is the source of import of the still images 401, has been connected. For example, the recording media detector 302 detects that the recording medium 20A has been inserted in the card slot 20. The source of import of the still images 401 is not limited to the recording medium 20A, and may be a storage device in the computer 10, an external storage device connected to the computer 10, or some other computer connected to the computer 10 via a network. In this case, the recording media detector 302 detects that the storage device, or the like, has been connected (recognized), that files (new still image data) have been stored in a designated directory, or the like, or that an instruction has been issued by the user. The recording media detector 302 notifies the operator image extraction module 303 that the recording medium 20A, or the like, has been detected.
- The operator
image extraction module 303 analyzes the user image (user image data) 403 generated by the camera module 115, and extracts a face image of the operator of the computer 10. For example, the operator image extraction module 303 detects a face region from the user image 403, and extracts the detected face region from the user image 403. The detection of the face region can be executed, for example, by analyzing the features of the user image data 403, and searching for a region having features similar to face image feature samples prepared in advance. The face image feature samples are feature data calculated by statistically processing the face image features of many persons. The operator image extraction module 303 outputs the extracted face image to the operator recognition module 304.
- The operator recognition module 304 analyzes the face image extracted by the operator image extraction module 303, and recognizes the person corresponding to the face image as the operator. The operator recognition module 304 notifies the image import module 305 of the completion of recognition of the operator. The operator recognition module 304 outputs the information of the recognized operator to the index information storing module 309. If the face image of the operator cannot be detected or if the face image of the operator cannot be recognized, it may be possible to newly capture a user image 403 using the camera module 115 and to execute extraction and recognition of the operator's face image once again.
- Responding to the information from the operator recognition module 304, the image import module 305 starts import of the still images 401 stored in the recording medium 20A. The image import module 305 imports the still images 401 into the content reproduction application program 202. The image import module 305 reads the still images 401 from the recording medium 20A and stores the read still images 401 in the HDD 109.
- The
grouping module 306 classifies the still images 401 based on a predetermined classification rule, and creates groups. The grouping module 306 classifies the still images 401, for example, based on the time, location, event, etc. For example, if there are two still images 401 which are successive on a time-series axis, and the time interval between the date/time of capturing of one of these two still images 401 and the date/time of capturing of the other is greater than a predetermined time period, the grouping module 306 performs grouping by using the boundary between the two still images 401 as a break-point. In addition, if the still images 401 are images of frames constituting moving picture data, the grouping module 306 detects a so-called scene change point, before and after which the features of images greatly change, and performs grouping by setting each of the scenes to be one section. A logic-unit group created by classification is also called an event group.
- The appearing person
image extraction module 307 analyzes thestill image 401, and extracts a face region in thestill image 401. For example, the appearing personimage extraction module 307 detects the face region from thestill image 401, and extracts the detected face region from thestill image data 401. The appearing personimage extraction module 307 outputs the extracted face image to the appearingperson recognition module 308. - The appearing
person recognition module 308 analyzes the face image extracted by the appearing person image extraction module 307, and recognizes the person corresponding to the face image as an appearing person. In addition, the appearing person recognition module 308 generates classification information for classifying face images into those which are assumed to be associated with the same person. The appearing person recognition module 308 outputs the information of the recognized appearing person to the index information storing module 309. - The index
information storing module 309 stores, in a database 109A as index information 402, data which associates the still images 401 with the information of the operator recognized by the operator recognition module 304 and the information of the appearing person recognized by the appearing person recognition module 308. In some cases, it is assumed that the above-mentioned operator is the owner of the still images 401 that are imported (e.g. the photographer of the photos). Thus, not only the appearing person in the still images 401, but also the owner of the still images 401 can be associated with the still images 401 and registered. - In addition, the index
information storing module 309 also registers the information of the groups created by the grouping module 306 by associating this information with the still images 401. Thus, the information of the operator and the appearing person, which is associated with each of the still images, is also used as the information of the persons associated with the group (event group) comprising those still images. - The
database 109A is a storage area prepared in the HDD 109 for storing the index information 402. FIG. 4 shows a structure example of the index information 402 in the database 109A. The index information 402 comprises image information 402A and photographer information 402B. The image information 402A is stored in association with each of the images imported by the image import module 305. The photographer information 402B is information of the photographer of the images (the import operator). - The
image information 402A comprises an image ID, a date/time of capturing, face image information, text information, group information, and a photographer ID, in association with each of the images. The image ID is identification information which is uniquely allocated to each still image (still image data) 401. The date/time of capturing is time information indicating when each still image 401 was captured. If a still image is one of the frames constituting moving picture data, the date/time of capturing of this still image is set to a value (time stamp information) calculated by adding, to the date/time of capturing of the moving picture data, the elapsed time from the first frame, which is derived from the frame number. In the meantime, the date/time of capturing may instead be the date/time of storage or the date/time of update of the still image 401. - The face image information is indicative of information of the face image in each
still image 401. If a still image 401 comprises face images, as many face image information items as there are face images are stored. Each face image information item comprises a face image, a frontality, a size, and classification information. The face image is the face image recognized by the appearing person recognition module 308. The frontality indicates the degree to which the face image is captured from the frontal direction. The size indicates the size of the face image (e.g. the pixel-unit image size). The classification information indicates a result of classifying the face images recognized by the appearing person recognition module 308 into face images which are assumed to be associated with the same person. Accordingly, the classification information is identification information (a personal ID) which is uniquely allocated to a person. - The text information is indicative of information of characters in each
still image 401. The indexing module 301 may be provided with a character recognition function for detecting a character region (e.g. characters on an advertising display) in each still image 401 and recognizing the detected character region. The characters which have been recognized by using the character recognition function are stored as the text information. The detection (recognition) of characters is executed, for example, by searching for a region having a feature amount similar to a feature amount of each character, which is prepared in advance. - The group information is indicative of information (group ID) for identifying groups created by the
grouping module 306. Thus, the information indicating the group to which the associated still image belongs is stored as the group information. - The photographer ID is indicative of identification information which is uniquely allocated to the person recognized by the
operator recognition module 304. Specifically, the identification information allocated to the operator who has executed the operation of importing the still images 401 is stored as the photographer ID. Thus, the same photographer ID, corresponding to this person, is set for each of the still images imported by the same person. The information of the photographer corresponding to this photographer ID is stored as the photographer information 402B. - The
photographer information 402B is indicative of the information of the operator (photographer) who executes the operation of importing the still image data 401. The photographer information 402B comprises a photographer ID and face image information. - The photographer ID, as described above, is indicative of the identification information which is allocated to the operator who has executed the operation of importing the
still image data 401. The photographer ID in the image information 402A corresponds to the photographer ID in the photographer information 402B. - The face image information is indicative of information relating to the face image recognized by the
operator recognition module 304, that is, the face image of the operator (photographer). The face image information comprises a face image, a frontality, a size, and classification information. The face image is the face image recognized by the operator recognition module 304. The frontality indicates the degree to which the face image is captured from the frontal direction. The size indicates the size of the face image (e.g. the pixel-unit image size). The classification information indicates a result of classifying the face images recognized by the operator recognition module 304 into face images which are assumed to be associated with the same person. Accordingly, the classification information is identification information (a personal ID) which is uniquely allocated to a person. - Specifically, according to the
index information 402, it can be understood, with respect to each still image 401, who appears in the image, whether text is comprised in the image, to which group the image belongs, and who has captured the image. In other words, by using the index information 402, it is possible to quickly search, from among the still images 401 stored in the HDD 109, for still images 401 in which a target person appears, still images 401 in which the target person does not appear, still images 401 in which both the target person and text appear, and still images 401 captured by the target person. - The
display controller 311 is used in the presentation mode. Specifically, using the index information 402, the display controller 311 selects, from the still images 401 in the HDD 109, still images which meet a predetermined selection condition, and successively displays the selected still images. The display controller 311 may not only perform simple successive display of the selected still images, but may also display the selected still images by applying a transition effect at the time of a change of display. - The
display controller 311 comprises a viewer image extraction module 312, a viewer recognition module 313, a group extraction module 314, an image selection module 315, and an image display module 316. The viewer image extraction module 312 analyzes, for example, the user image 403 generated by the camera module 115 during the presentation mode, and extracts the face image of the viewer in the user image 403. The viewer image extraction module 312 detects, for example, a face region in the user image 403, and extracts the detected face region from the user image 403. The viewer image extraction module 312 outputs the extracted face image to the viewer recognition module 313. - The
viewer recognition module 313 analyzes the face image extracted by the viewer image extraction module 312, and recognizes the person corresponding to this face image as the viewer. The viewer recognition module 313 outputs the information of the recognized viewer to the group extraction module 314. If the face image of the viewer cannot be detected or cannot be recognized, a user image 403 may be newly captured by using the camera module 115, and extraction and recognition of the viewer's face image may be executed once again. In addition, if the viewer cannot be recognized even after recognition of the viewer (extraction of the face image of the viewer) has been executed a predetermined number of times, it may be determined that there is no viewer of the screen (LCD 17), and the content reproduction application 202 may transition to a power-saving mode which enables operation with reduced power consumption. For example, the content reproduction application 202, operating in the power-saving mode, powers off the LCD 17. - If the viewer has been recognized, the
viewer recognition module 313 outputs the information of the recognized viewer to the group extraction module 314. In the meantime, a plurality of face images may be extracted from the user image data 403 by the viewer image extraction module 312; in other words, a plurality of viewers may be recognized by the viewer recognition module 313. In this case, the viewer recognition module 313 outputs the information of each of the recognized viewers to the group extraction module 314. - Based on the information of the viewer who has been recognized by the
viewer recognition module 313, the group extraction module 314 extracts, from the event groups in the HDD 109, groups (event groups) comprising the associated still images. Specifically, the group extraction module 314 extracts, from the event groups, event groups comprising at least one of a still image comprising the face image of the viewer and a still image imported by the viewer, as the event groups relating to the present viewer. With each of the event groups, the face image of the appearing person in at least one of the still images in the group, and the face image of the operator who imported at least one of the still images in the group, may be associated. In this case, the group extraction module 314 may extract the groups with which the appearing person's face image or the operator's face image corresponding to the viewer's face image is associated. - If there are a plurality of viewers, the
group extraction module 314 extracts, from the groups, groups in which the face images of all the viewers are associated with the appearing person's face image or the photographer's face image (the face image of the operator who has executed the import operation), as the groups associated with those viewers. The group extraction module 314 outputs the information of the extracted groups to the image selection module 315. - Referring to
FIG. 5, an example of the groups extracted by the group extraction module 314 is described. In the example shown in FIG. 5, the case is assumed in which the still images 401 are grouped into three sets of still images 401A, 401B and 401C by the grouping module 306, and the still images 401A, 401B and 401C belong to the event groups 501, 502 and 503, respectively. Persons 501A, who are associated with the event group 501, comprise an appearing person 601 corresponding to a face image in the still images 401A, and a photographer 602 of the still images 401A. This means that the event group 501 comprises at least one still image 401A which comprises the face image of the appearing person 601, and that all the still images 401A in the event group 501 have been imported by the photographer 602. In the meantime, it is possibly assumed that each of the appearing person 601 and the photographer 602 is both an appearing person and a photographer. In this case, for example, it is understood that the event group 501 comprises at least one still image 401A comprising the face image of the appearing person 601 and at least one still image 401A comprising the face image of the photographer (appearing person) 602, and that some still images 401A in the event group 501 are imported by the photographer 602 and some other still images 401A are imported by the appearing person (photographer) 601. Persons 502A, who are associated with the event group 502, comprise appearing persons 601 and 603 corresponding to face images in the still images 401B, and a photographer 604 of the still images 401B. This means that the event group 502 comprises at least one still image 401B which comprises the face image of the appearing person 601, and at least one still image 401B which comprises the face image of the appearing person 603 (or one still image 401B comprising both the appearing persons 601 and 603), and that all the still image data 401B in the event group 502 are still images which have been imported by the photographer 604. Like the case of the event group 501, it is possibly assumed that each of the appearing persons 601 and 603 and the photographer 604 is both an appearing person and a photographer.
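The association between event groups and persons illustrated in FIG. 5 can be sketched in code. The following Python fragment is a minimal sketch, not the patent's implementation: the names StillImage, EventGroup and extract_groups_for_viewers are hypothetical, and the IDs mirror the reference numerals of FIG. 5. A group is treated as related to a viewer when the viewer appears in at least one still image of the group or has imported at least one of them; when there are a plurality of viewers, groups associated with all of them are extracted preferentially.

```python
from dataclasses import dataclass, field

@dataclass
class StillImage:
    image_id: str
    appearing_person_ids: set   # personal IDs of recognized appearing persons
    photographer_id: str        # personal ID of the operator who imported it

@dataclass
class EventGroup:
    group_id: str
    images: list = field(default_factory=list)

    def associated_person_ids(self):
        # A person is associated with the group if he or she appears in
        # at least one still image, or imported at least one still image.
        ids = set()
        for image in self.images:
            ids |= image.appearing_person_ids
            ids.add(image.photographer_id)
        return ids

def extract_groups_for_viewers(groups, viewer_ids):
    # Keep only the groups associated with every recognized viewer.
    return [g for g in groups if set(viewer_ids) <= g.associated_person_ids()]

# Event groups 501-503 of FIG. 5, reduced to one representative image each.
groups = [
    EventGroup("501", [StillImage("a1", {"601"}, "602")]),
    EventGroup("502", [StillImage("b1", {"601", "603"}, "604")]),
    EventGroup("503", [StillImage("c1", {"602", "605"}, "606")]),
]
extracted = extract_groups_for_viewers(groups, {"601", "602"})
# extracted contains only event group 501, the group associated with both viewers
```

If only one viewer were present, the subset test would instead admit every group associated with that viewer (groups 501 and 502 for viewer 601 alone).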
Besides, persons 503A, who are associated with the event group 503, comprise appearing persons 602 and 605 corresponding to face images in the still images 401C, and photographers 606 and 607 of the still images 401C. This means that the event group 503 comprises at least one still image 401C which comprises the face image of the appearing person 602, and at least one still image 401C which comprises the face image of the appearing person 605 (or one still image 401C comprising both the appearing persons 602 and 605), and that some still images 401C in the event group 503 are imported by the photographer 606 and some other still images 401C are imported by the photographer 607. Like the case of the event group 501, it is possibly assumed that each of the appearing persons 602 and 605 and the photographers 606 and 607 is both an appearing person and a photographer. These relations between the event groups and the persons are registered in the database 109A as the index information 402. - If it is assumed that the
viewer recognition module 313 has recognized the present viewers 601 and 602, the group extraction module 314 preferentially extracts the event group 501, which is associated with both the viewers 601 and 602, from the event groups 501, 502 and 503 in the HDD 109. In the meantime, the group extraction module 314 may extract, from the HDD 109, the event groups 501, 502 and 503, which are associated with either the viewer 601 or the viewer 602. - The
image selection module 315, for example, selects still images which meet a predetermined condition from the still images in the extracted groups. Needless to say, the image selection module 315 may select all the still images in the extracted groups. In addition, the image display module 316 displays the still images selected by the image selection module 315 on the screen in a predetermined order. Referring to FIG. 6, a description is given of the extraction of groups by the group extraction module 314, the selection of still images by the image selection module 315, and the display of images by the image display module 316. - In
FIG. 6, part (A) is a first conceptual view illustrating the state in which the still images 401 stored in the HDD 109 are arranged on a time-series axis in the order of date/time of capturing, based on the index information 402 stored in the database 109A, and the still images 401 are classified into groups. As shown in part (A) of FIG. 6, it is assumed that the still images 401 stored in the HDD 109 are classified into three groups, namely, a group n, a group n+1 and a group n+2. Each of the boxes with numerals, such as 1, 2, 3, . . . , in each of the groups represents one still image 401, and these numerals indicate the order of dates/times of capturing in each group. - A circle in the box of the
still image 401, which is indicated by symbol c1, indicates that the still image 401 comprises the face image of the viewer. A triangle in the box of the still image 401, which is indicated by symbol c2, indicates that the still image 401 is captured (imported) by the viewer. A diamond in the box of the still image 401, which is indicated by symbol c3, indicates that the still image 401 is neither an image comprising the face image of the viewer nor an image captured (imported) by the viewer. - In short, part (A) of
FIG. 6 indicates that, of the three groups, i.e. group n, group n+1 and group n+2, the groups which comprise either a still image 401 comprising the face image of the viewer or a still image 401 captured by the viewer are two groups, i.e. group n and group n+2. Thus, the group extraction module 314 extracts from the HDD 109 the group n and the group n+2 as groups comprising still images that are targets of display. In the present embodiment, still images 401 which belong to the same group as a still image 401 comprising the face image of the viewer or a still image 401 captured by the viewer are determined to be images having a relation to the viewer. - Based on the
index information 402 stored in the database 109A, the image selection module 315 selects, from the still images 401 belonging to the extracted groups, still images 401 up to, for example, a predetermined upper-limit number. At this time, in principle, the image selection module 315 preferentially selects, for example, still images 401 comprising the face image of the viewer and still images 401 captured by the viewer, but the image selection module 315 also selects images in which no person appears, such as images of scenes. - In
FIG. 6, part (B) is a second conceptual view showing a selection result of the still images 401 by the image selection module 315. As shown in part (B) of FIG. 6, in the group n, an image indicated by symbol d1, which has been imported by the viewer, is selected, and an image indicated by symbol d2, which is neither an image comprising the face image of the viewer nor an image imported by the viewer, is also selected. Similarly, in the group n+2, an image indicated by symbol d3, which has been imported by the viewer, is selected, and an image indicated by symbol d4, which is neither an image comprising the face image of the viewer nor an image imported by the viewer, is also selected. The images indicated by symbols d1 to d4 are images which cannot be selected when a person is designated as search key information in conventional image search methods. In this respect, the content
reproduction application program 202 executes an effective image search with high-level intelligence, thereby displaying a slide show from which “to where and with whom” can be understood. - In
FIG. 6, part (C) is a third conceptual view showing the order of display of the still images 401 by the image display module 316. As shown in part (C) of FIG. 6, the image display module 316 arranges the groups in a time sequence from the present to the past (group n+2 → group n), thereby meeting the desire to view the latest images first, and arranges the images in each group such that the images can be enjoyed more naturally. In short, different algorithms may be adopted for the arrangement of the groups and for the arrangement of the images in each group. - If it is assumed that the still
images 401 with the theme of “travel” have been selected, the image display module 316 places, for example, the image with no person indicated by symbol d4 in the group n+2, that is, an image that is assumed to be a landscape image, at the first position, so that “to where” can be understood first. Following this image, the other images are arranged in the order of date/time of capturing. Similarly, as regards the group n, the image indicated by symbol d2, which is assumed to be a landscape image, is placed at the first position, and the other images are subsequently arranged in the order of date/time of capturing. - In other words, the content
reproduction application program 202 executes effective image display with high-level intelligence, for example, by placing an image which is suitable for grasping a scene such as “travel” at the first position as a representative image. If a theme such as “travel” is not clearly set, the images in each group may be arranged in the order of date/time of capturing. - Next, referring to a flowchart of
FIG. 7, a description is given of the procedure of the image display process executed by the content reproduction application program 202. - To start with, the case is assumed in which the
content reproduction application 202 has been set in the import mode by the user. In the import mode, the content reproduction application 202 determines whether the recording medium 20A has been detected (block B101). If the recording medium 20A has been detected (YES in block B101), the content reproduction application 202 receives the user image (user image data) 403 which is generated by the camera module 115 (block B102). - Subsequently, the
content reproduction application 202 recognizes the face image of the user (operator), which is comprised in the received user image 403 (block B103). Specifically, the content reproduction application 202 detects the face region comprised in the received user image 403, and extracts the face region. The content reproduction application 202 analyzes the extracted face region (face image) and recognizes the corresponding person. Then, the content reproduction application 202 determines whether the recognition of the user's face image has been executed successfully (block B104). - If the recognition of the user's face image has successfully been executed (YES in block B104), the
content reproduction application 202 imports the still images 401 that are stored in the recording medium 20A (block B105). Responding to the completion of the import, the content reproduction application 202 classifies and groups the imported still images 401 (block B106). Then, the content reproduction application 202 recognizes the person (appearing person) corresponding to the face image in the imported images (block B107). Subsequently, the content reproduction application 202 stores in the database 109A, as the index information 402, the information relating to the user (operator) recognized in block B103 (photographer information 402B), and the information relating to the appearing person recognized in block B107 (image information 402A) (block B108). - When the import has been completed, the
content reproduction application 202 may automatically transition to the presentation mode. The content reproduction application 202 extracts the groups which are associated with the user (viewer) recognized in block B103 or in block B113 (to be described later) (block B109). The content reproduction application 202 selects still images which meet a predetermined condition from the still images in the extracted groups (block B110). The content reproduction application 202 successively displays the selected still images on the screen in a predetermined order (block B111). - In parallel with the display of the still images in block B111, the
content reproduction application 202 receives the user image 403 from the camera module 115 (block B112). The content reproduction application 202 recognizes the face image of the user (viewer) in the received user image 403 (block B113). Then, the content reproduction application 202 determines whether the recognition of the user's face image has been executed successfully (block B114). - If the recognition of the user's face image has successfully been executed (YES in block B114), the
content reproduction application 202 determines whether the user who has been recognized in block B113 is different from the previously recognized user (block B115). If the user recognized in block B113 is different from the previously recognized user (YES in block B115), the content reproduction application 202 returns to block B109. Specifically, the content reproduction application 202 displays on the screen still images proper for the newly recognized user by the process beginning with block B109. - If the user recognized in block B113 is identical to the previously recognized user (NO in block B115), the
content reproduction application 202 returns to block B112. Specifically, the content reproduction application 202 continues to display on the screen the still images which are currently being displayed. - If the recognition of the user's face image has failed (NO in block B114), the
content reproduction application 202 determines whether a user who views the photo displayed on the screen is absent (block B116). For example, if a region which is assumed to comprise a face image is not detected in the user image 403, or if the face image recognition in block B113 has failed a predetermined number of times or more, the content reproduction application 202 determines that a user who views the still image displayed on the screen is absent. If it is determined that such a user is absent (YES in block B116), the content reproduction application 202 transitions to a power-saving mode which enables operation with low power consumption (block B117). The content reproduction application 202, operating in the power-saving mode, powers off the LCD 17, for example. - If it is determined that a user who views the still image displayed on the screen is present (NO in block B116), the
content reproduction application 202 returns to block B112 in order to recognize the user (viewer) again. - Even if the recognition of the face image of the user (viewer) has failed, the
content reproduction application 202 may store, for a predetermined time period, the information of the viewer from the time when the previous recognition was executed successfully, and may display still images based on this information. Thereby, proper images can be displayed on the screen even if the face image of the viewer could not be captured clearly and the recognition failed, for example, in a case where the viewer has moved the face or the viewer's face has been hidden by some object. - By the above-described process, the
content reproduction application 202 stores in the database 109A, as the index information 402, the information of the user (operator) who imports the still images (still image data) 401 and the information of the person (appearing person) corresponding to the face image in each still image 401, and the content reproduction application 202 stores the still images 401 in the HDD 109. In addition, based on the index information 402, the content reproduction application 202 extracts from the HDD 109, in accordance with the user (viewer) who views the still images displayed on the screen, the groups comprising the still images associated with this viewer, and displays the still images comprised in those groups on the screen. Thus, the content reproduction application 202 can extract the still images relating to the viewer (e.g. a still image comprising the face image of the viewer, or a still image imported by the viewer) from the still images 401 in the HDD 109. - While executing in the background the process relating to the grouping of the still
images 401 and the generation of the index information 402, the content reproduction application 202 can execute, in parallel, the process of displaying the still images corresponding to the viewer on the screen. Moreover, the above-described generation of the index information 402 and display of the still images corresponding to the viewer can be realized without operations by the user. In short, it is possible to realize a photo frame function which can manage photos (still image data) of users and present photos corresponding to a viewer (or viewers), while maintaining convenience for the user and simplicity of the apparatus. - The
content reproduction application 202 may be provided with a function of manually setting or correcting the information relating to the operator. In addition, the content reproduction application 202 may be provided with a function of manually designating the viewer (e.g. designating a “who” mode or an “anyone” mode). - The
camera module 115, which generates the user image data 403 for recognizing the operator and the viewer, is not limited to the camera module built in the computer 10; it may be, for example, an external Web camera connected to the computer 10 via the USB connector 19. - As has been described above, according to the present embodiment, the still images relating to the user who is the viewer can be presented without executing a time-consuming user operation of, e.g. selecting favorite image files. The
content reproduction application 202 can display, in accordance with a change of the viewer, not only photos in which the viewer appears, but also photos taken by the viewer. Furthermore, the content reproduction application 202 can display, in addition to the photos in which the viewer appears and the photos taken by the viewer, photos relating to these photos, that is, photos belonging to the same event group as these photos. Therefore, the viewer can view the photos relating to the viewer in units of an event group, which is a logical unit. Besides, since the content reproduction application 202 extracts the photos relating to the viewer, it is possible to exclude photos, etc. which the viewer does not want a third person to view. - All the procedures of the image display process according to the present embodiment may be executed by software. Thus, the same advantageous effects as with the present embodiment can easily be obtained simply by installing a program which executes the procedures of the image display process into an ordinary computer through a computer-readable storage medium.
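The presentation behavior described above — grouping still images by gaps in capture time, then displaying the newest event group first with an assumed landscape image at the head of each group — can be sketched as follows. This is a hedged sketch under assumed data shapes (each image reduced to a (capture_time, has_person, image_id) tuple); the names group_by_time_gap and display_order are hypothetical and not from the embodiment.

```python
from datetime import datetime, timedelta

def group_by_time_gap(images, max_gap):
    """Group (capture_time, has_person, image_id) tuples: a gap larger
    than max_gap between successive capture times starts a new group."""
    groups = []
    for img in sorted(images, key=lambda i: i[0]):
        if groups and img[0] - groups[-1][-1][0] <= max_gap:
            groups[-1].append(img)
        else:
            groups.append([img])
    return groups

def display_order(groups):
    """Newest group first; within each group, one person-free image
    (assumed to be a landscape) leads, then the rest in capture order."""
    ordered = []
    for group in sorted(groups, key=lambda g: g[0][0], reverse=True):
        scenery = [i for i in group if not i[1]]
        people = [i for i in group if i[1]]
        lead = scenery[:1]                                # "to where" first
        rest = sorted(people + scenery[1:], key=lambda i: i[0])
        ordered += lead + rest
    return ordered

# Two outings three days apart, with a one-hour break-point threshold.
t0 = datetime(2010, 1, 1, 9, 0)
images = [
    (t0, False, "n1"),
    (t0 + timedelta(minutes=5), True, "n2"),
    (t0 + timedelta(days=3), False, "m1"),
    (t0 + timedelta(days=3, minutes=1), True, "m2"),
]
groups = group_by_time_gap(images, timedelta(hours=1))
order = [image_id for _, _, image_id in display_order(groups)]
# order == ["m1", "m2", "n1", "n2"]: newest group first, landscape leading
```

The two functions deliberately use different orderings, reflecting the point above that different algorithms may be adopted for the arrangement of groups and for the arrangement of images within a group.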
- While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
- The various modules of the systems described herein can be implemented as software applications, hardware and/or software modules, or components on one or more computers, such as servers. While the various modules are illustrated separately, they may share some or all of the same underlying logic or code. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
Claims (6)
1. An electronic apparatus comprising:
a capturing module configured to capture an image;
an information extraction module configured to extract information items from a plurality of information items, based on the image captured by the capturing module when each of the plurality of information items is imported; and
an information display controller configured to control display of the extracted information items on a screen.
2. The electronic apparatus of claim 1, wherein the information extraction module is configured to extract the information items associated with a first image captured by the capturing module, based on a second image captured by the capturing module when a first information item of the plurality of information items is imported.
3. The electronic apparatus of claim 1 , wherein the plurality of information items satisfy a first condition.
4. The electronic apparatus of claim 3 , further comprising:
an information selection module configured to select information items satisfying a second condition from the extracted information items, wherein
the information display controller is configured to control display, on the screen, of the selected information items in a predetermined order.
5. An information display method comprising:
capturing an image;
extracting information items from a plurality of information items, based on the image captured by the capturing when each of the plurality of information items is imported; and
controlling display of the extracted information items on a screen.
6. A computer-readable, non-transitory storage medium having stored thereon a program, the program controlling a computer to execute functions of:
capturing an image;
extracting information items from a plurality of information items, based on the image captured by the capturing when each of the plurality of information items is imported; and
controlling display of the extracted information items on a screen.
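The three steps of method claim 5 (capturing, extracting based on the image captured at import time, and controlling display) can be illustrated with a short sketch. The `capture`, `recognize`, and `render` callables and the item layout are hypothetical, and simple person-ID equality stands in for the face matching described in the specification.

```python
def display_method(capture, recognize, items, render):
    """Sketch of claim 5: capture an image, extract the information items
    whose import-time image shows the same person, then display them."""
    current = recognize(capture())            # person in front of the screen now
    extracted = [item for item in items
                 if recognize(item["import_image"]) == current]
    for item in extracted:                    # controlled display on the screen
        render(item)
    return extracted
```

The same flow covers apparatus claim 1 (capturing module, information extraction module, information display controller) when each callable is read as the corresponding module.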
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/545,495 US20120281888A1 (en) | 2009-11-06 | 2012-07-10 | Electronic apparatus and image display method |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009255313A JP4768846B2 (en) | 2009-11-06 | 2009-11-06 | Electronic apparatus and image display method |
JP2009-255313 | 2009-11-06 | ||
US12/905,015 US8244005B2 (en) | 2009-11-06 | 2010-10-14 | Electronic apparatus and image display method |
US13/545,495 US20120281888A1 (en) | 2009-11-06 | 2012-07-10 | Electronic apparatus and image display method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/905,015 Continuation US8244005B2 (en) | 2009-11-06 | 2010-10-14 | Electronic apparatus and image display method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120281888A1 true US20120281888A1 (en) | 2012-11-08 |
Family
ID=43974206
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/905,015 Expired - Fee Related US8244005B2 (en) | 2009-11-06 | 2010-10-14 | Electronic apparatus and image display method |
US13/545,495 Abandoned US20120281888A1 (en) | 2009-11-06 | 2012-07-10 | Electronic apparatus and image display method |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/905,015 Expired - Fee Related US8244005B2 (en) | 2009-11-06 | 2010-10-14 | Electronic apparatus and image display method |
Country Status (2)
Country | Link |
---|---|
US (2) | US8244005B2 (en) |
JP (1) | JP4768846B2 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010272077A (en) * | 2009-05-25 | 2010-12-02 | Toshiba Corp | Method and device for reproducing information |
JP4900739B2 (en) * | 2009-09-04 | 2012-03-21 | カシオ計算機株式会社 | ELECTROPHOTOGRAPH, ITS CONTROL METHOD AND PROGRAM |
JP6124658B2 (en) * | 2013-04-12 | 2017-05-10 | キヤノン株式会社 | Image processing apparatus and image processing apparatus control method |
JP6018029B2 (en) | 2013-09-26 | 2016-11-02 | 富士フイルム株式会社 | Apparatus for determining main face image of captured image, control method thereof and control program thereof |
GB2537296B (en) * | 2014-01-16 | 2018-12-26 | Bartco Traffic Equipment Pty Ltd | System and method for event reconstruction |
JP6465398B6 (en) * | 2015-03-13 | 2019-03-13 | Dynabook株式会社 | Electronic device, display method and program |
US11367305B2 (en) * | 2018-09-28 | 2022-06-21 | Apple Inc. | Obstruction detection during facial recognition processes |
JP6797388B1 (en) * | 2020-07-31 | 2020-12-09 | アカメディア・ジャパン株式会社 | Online learning system |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009171176A (en) * | 2008-01-16 | 2009-07-30 | Seiko Epson Corp | Reproduction apparatus, its control method, and program |
Family Cites Families (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6144755A (en) * | 1996-10-11 | 2000-11-07 | Mitsubishi Electric Information Technology Center America, Inc. (Ita) | Method and apparatus for determining poses |
US6118888A (en) * | 1997-02-28 | 2000-09-12 | Kabushiki Kaisha Toshiba | Multi-modal interface apparatus and method |
US7190475B2 (en) * | 2000-03-16 | 2007-03-13 | Nikon Corporation | Method for providing a print and apparatus |
JP4177598B2 (en) * | 2001-05-25 | 2008-11-05 | 株式会社東芝 | Face image recording apparatus, information management system, face image recording method, and information management method |
US6931147B2 (en) * | 2001-12-11 | 2005-08-16 | Koninklijke Philips Electronics N.V. | Mood based virtual photo album |
US7221809B2 (en) * | 2001-12-17 | 2007-05-22 | Genex Technologies, Inc. | Face recognition system and method |
US7362919B2 (en) * | 2002-12-12 | 2008-04-22 | Eastman Kodak Company | Method for generating customized photo album pages and prints based on people and gender profiles |
JP2005033276A (en) | 2003-07-07 | 2005-02-03 | Seiko Epson Corp | System, program, and method for image reproducing |
EP1566788A3 (en) * | 2004-01-23 | 2017-11-22 | Sony United Kingdom Limited | Display |
US20060018522A1 (en) * | 2004-06-14 | 2006-01-26 | Fujifilm Software(California), Inc. | System and method applying image-based face recognition for online profile browsing |
US7602993B2 (en) * | 2004-06-17 | 2009-10-13 | Olympus Corporation | Image processing and display using a plurality of user movable viewer areas |
JP4739062B2 (en) * | 2005-02-28 | 2011-08-03 | 富士フイルム株式会社 | Image output apparatus, image output method, and program |
US7773832B2 (en) | 2005-02-28 | 2010-08-10 | Fujifilm Corporation | Image outputting apparatus, image outputting method and program |
JP2008131081A (en) * | 2006-11-16 | 2008-06-05 | Pioneer Electronic Corp | Operation setting system for television receiver and/or video recording and reproducing device, remote controller and television receiver and/or video recording and reproducing device |
JP2008141484A (en) | 2006-12-01 | 2008-06-19 | Sanyo Electric Co Ltd | Image reproducing system and video signal supply apparatus |
JP2008165009A (en) * | 2006-12-28 | 2008-07-17 | Fujifilm Corp | Image display device |
JP4999589B2 (en) * | 2007-07-25 | 2012-08-15 | キヤノン株式会社 | Image processing apparatus and method |
JP2009038680A (en) * | 2007-08-02 | 2009-02-19 | Toshiba Corp | Electronic device and face image display method |
JP4834639B2 (en) * | 2007-09-28 | 2011-12-14 | 株式会社東芝 | Electronic device and image display control method |
JP2009141678A (en) * | 2007-12-06 | 2009-06-25 | Fujifilm Corp | Digital photo frame, and image display method thereof |
US20090178126A1 (en) * | 2008-01-03 | 2009-07-09 | Sterling Du | Systems and methods for providing user-friendly computer services |
JP2010067104A (en) * | 2008-09-12 | 2010-03-25 | Olympus Corp | Digital photo-frame, information processing system, control method, program, and information storage medium |
JP2010086221A (en) * | 2008-09-30 | 2010-04-15 | Fujifilm Corp | Image editing method and device, and computer readable recording medium storing program for implementing the method |
JP2011061341A (en) * | 2009-09-08 | 2011-03-24 | Casio Computer Co Ltd | Electrophotography display device, control method for the same, and program |
- 2009-11-06: JP application JP2009255313A, patent JP4768846B2 (not active: Expired - Fee Related)
- 2010-10-14: US application US12/905,015, patent US8244005B2 (not active: Expired - Fee Related)
- 2012-07-10: US application US13/545,495, publication US20120281888A1 (not active: Abandoned)
Also Published As
Publication number | Publication date |
---|---|
JP2011101251A (en) | 2011-05-19 |
US8244005B2 (en) | 2012-08-14 |
US20110110564A1 (en) | 2011-05-12 |
JP4768846B2 (en) | 2011-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8244005B2 (en) | Electronic apparatus and image display method | |
US8457407B2 (en) | Electronic apparatus and image display method | |
US8488914B2 (en) | Electronic apparatus and image processing method | |
US8503832B2 (en) | Electronic device and facial image display apparatus | |
US8121349B2 (en) | Electronic apparatus and video processing method | |
US7970257B2 (en) | Image display method and electronic apparatus implementing the image display method | |
US20140050422A1 (en) | Electronic Apparatus and Image Processing Method | |
US8943020B2 (en) | Techniques for intelligent media show across multiple devices | |
US11211097B2 (en) | Generating method and playing method of multimedia file, multimedia file generation apparatus and multimedia file playback apparatus | |
CN111343512B (en) | Information acquisition method, display device and server | |
US20110064319A1 (en) | Electronic apparatus, image display method, and content reproduction program | |
US8988457B2 (en) | Multi image-output display mode apparatus and method | |
CN112232260A (en) | Subtitle region identification method, device, equipment and storage medium | |
US20110304644A1 (en) | Electronic apparatus and image display method | |
US8494347B2 (en) | Electronic apparatus and movie playback method | |
US20110304779A1 (en) | Electronic Apparatus and Image Processing Method | |
US8463052B2 (en) | Electronic apparatus and image search method | |
US20110231763A1 (en) | Electronic apparatus and image processing method | |
US20140153836A1 (en) | Electronic device and image processing method | |
CN110929056B (en) | Multimedia file generating method, multimedia file playing method, multimedia file generating device and multimedia file playing device | |
WO2022194084A1 (en) | Video playing method, terminal device, apparatus, system and storage medium | |
JP2011233974A (en) | Electronic device and image processing program | |
JP5050115B2 (en) | Electronic device, image display method, and content reproduction program | |
JP5414842B2 (en) | Electronic device, image display method, and content reproduction program | |
JP2011243212A (en) | Electronic device, image display method and content reproduction program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |