KR102050594B1 - Method and apparatus for playing contents in electronic device - Google Patents

Method and apparatus for playing contents in electronic device

Info

Publication number
KR102050594B1
KR102050594B1
Authority
KR
South Korea
Prior art keywords
tag information
image
menu
tag
processor
Prior art date
Application number
KR1020130001066A
Other languages
Korean (ko)
Other versions
KR20140089170A (en)
Inventor
김현경
김대성
김소라
박항규
임승경
Original Assignee
삼성전자주식회사 (Samsung Electronics Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd. (삼성전자주식회사)
Priority to KR1020130001066A
Publication of KR20140089170A
Application granted
Publication of KR102050594B1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 - Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 - Querying
    • G06F16/438 - Presentation of query results
    • G06F16/4387 - Presentation of query results by the use of playlists
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/13 - File access structures, e.g. distributed indices
    • G06F16/134 - Distributed indices
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 - Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Television Signal Processing For Recording (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a method and an apparatus for playing content in an electronic device. The method includes selecting a setting condition value for playing content using tag information included in the content, and playing a plurality of contents satisfying the setting condition value in a predefined order. In this way, the user can enjoy the content without becoming bored, and the inconvenience of manually selecting the content to play is eliminated.

Description

METHOD AND APPARATUS FOR PLAYING CONTENTS IN ELECTRONIC DEVICE

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention generally relates to electronic devices, and more particularly, to a method and apparatus for dynamically playing content according to setting conditions in an electronic device.

Most portable terminals are equipped with a camera and provide a picture function. In particular, a slideshow function is provided for viewing, on the screen of the portable terminal, images captured by the camera or images and still images received from an external device. The slideshow function displays successive images in sequence.

1A and 1B show an example of composing a photo slideshow according to the prior art.

As shown in FIG. 1A, the pre-stored pictures are displayed as thumbnails, and the user selects the pictures that will compose the slideshow. Here, assume that Photo 1 (100), Photo 2 (101), Photo 6 (102), Photo 7 (103), Photo 8 (104), Photo 11 (105), and Photo 12 (105) are selected for the slideshow by the user.

Thereafter, as shown in FIG. 1B, the selected Photo 1 (100), Photo 2 (101), Photo 6 (102), Photo 7 (103), Photo 8 (104), Photo 11 (105), and Photo 12 (105) are repeatedly displayed on the screen 120 in the selected order.

However, the conventional photo slideshow can only display the selected photos one by one at a constant speed from beginning to end, and after the slideshow is composed, the user must manually add or delete photos to reconstruct it. In addition, because only the photos selected by the user are displayed, the slideshow can become boring from the user's point of view.

Therefore, there is a need for a method and apparatus for dynamically composing photo slideshows according to set conditions.

An object of the present invention is to provide a method and apparatus for dynamically playing content according to a setting condition in an electronic device.

Another object of the present invention is to provide a method and an apparatus for dynamically composing a photo slideshow according to setting conditions in an electronic device.

Still another object of the present invention is to provide a method and apparatus for solving the inconvenience of manually configuring a slideshow.

According to a first aspect of the present invention for achieving the above objects, a method of operating an electronic device includes selecting a setting condition value for playing content using tag information included in the content, and playing a plurality of contents satisfying the setting condition value in a predefined order.

Preferably, selecting the setting condition value for playing the content using the tag information included in the content includes displaying pre-stored contents, selecting at least one of the pre-stored contents, combining the respective tag information included in the selected at least one content, and determining the combined tag information as the setting condition value for playing the content.

Preferably, the method further includes displaying the respective tag information included in the selected at least one content.

Preferably, the combined tag information is one of a union and an intersection of the tag information included in the selected at least one content.

Preferably, the plurality of contents satisfying the setting condition value are played in the predefined order in the form of a slideshow.

Preferably, selecting the setting condition value for playing the content using the tag information included in the content includes extracting the tag information from pre-stored contents, displaying the tag contents extracted from the contents for each tag item, selecting, for each tag item, the tag content corresponding to the setting condition value for playing the content, and determining the tag contents selected for each tag item as the setting condition value for playing the content.

The method may further include displaying tag contents selected for each tag item.

The method may further include, when at least one additional content is stored, determining whether there is content satisfying the setting condition value among the at least one additional content, and playing the content satisfying the setting condition value together with the plurality of contents in the predefined order.

Preferably, the method further includes, when any one of the plurality of contents satisfying the setting condition value is deleted, searching for other contents that include at least one item of tag information of the deleted content, and reducing the play weight of each of the other contents that include at least one item of tag information of the deleted content.

Preferably, the method further includes deleting from the playlist any content whose play weight is less than or equal to a threshold.
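The play-weight adjustment described in the preceding paragraphs can be sketched as follows; the dictionary layout, threshold, and decrement values are illustrative assumptions, not the claimed implementation.

```python
# Sketch of the claimed play-weight adjustment: when a photo is deleted,
# every other photo in the playlist that shares at least one tag with it
# loses play weight, and photos whose weight falls to or below a
# threshold are removed from the playlist. Names and values are assumed.

PLAY_WEIGHT_THRESHOLD = 0.2  # assumed threshold
WEIGHT_DECREMENT = 0.5       # assumed per-deletion penalty

def delete_photo(playlist, photo_id):
    """Remove photo_id and penalize other photos that share its tags."""
    deleted = playlist.pop(photo_id)
    deleted_tags = set(deleted["tags"])
    for pid, photo in list(playlist.items()):
        if deleted_tags & set(photo["tags"]):        # shares >= 1 tag
            photo["weight"] -= WEIGHT_DECREMENT
            if photo["weight"] <= PLAY_WEIGHT_THRESHOLD:
                del playlist[pid]                    # weight too low
    return playlist

playlist = {
    "p1": {"tags": ["Picnic", "Alice"], "weight": 1.0},
    "p2": {"tags": ["Picnic", "Tom"],   "weight": 0.6},
    "p3": {"tags": ["Travel"],          "weight": 1.0},
}
delete_photo(playlist, "p1")   # p2 shares "Picnic" and drops below threshold
```

Photos unrelated to the deleted one keep their weight, so frequently co-tagged photos fade out of the playlist together.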

Preferably, the tag information includes at least one of a face tag, a location tag, a weather tag, an event tag, and a time tag.

According to a second aspect of the present invention for achieving the above objects, an electronic device includes at least one processor, a memory, and at least one program stored in the memory and configured to be executed by the at least one processor, wherein the program includes instructions for selecting a setting condition value for playing content using tag information included in the content, and playing a plurality of contents satisfying the setting condition value in a predefined order.

Preferably, the program further includes instructions for displaying pre-stored contents, selecting at least one of the pre-stored contents, combining the respective tag information included in the selected at least one content, and determining the combined tag information as the setting condition value for playing the content.

Preferably, the program further includes instructions for displaying the respective tag information included in the selected at least one content.

Preferably, the combined tag information is one of a union and an intersection of the tag information included in the selected at least one content.

Preferably, the plurality of contents satisfying the setting condition value are played in the predefined order in the form of a slideshow.

Preferably, the program further includes instructions for extracting tag information from pre-stored contents, displaying the tag contents extracted from the contents for each tag item, selecting, for each tag item, the tag content corresponding to the setting condition value for playing the content, and determining the tag contents selected for each tag item as the setting condition value for playing the content.

The program further includes instructions for displaying the tag contents selected for each tag item.

Preferably, the program further includes instructions for determining, when at least one additional content is stored, whether there is content satisfying the setting condition value among the at least one additional content, and playing the content satisfying the setting condition value together with the plurality of contents in the predefined order.

Preferably, the program further includes instructions for searching, when any one of the plurality of contents satisfying the setting condition value is deleted, for other contents that include at least one item of tag information of the deleted content, and reducing the play weight of each of those other contents.

The program further includes instructions for deleting from the playlist any content whose play weight is less than or equal to a threshold.

Preferably, the tag information includes at least one of a face tag, a location tag, a weather tag, an event tag, and a time tag.

As described above, by dynamically playing content according to the setting conditions, the user can enjoy the photo slideshow without becoming bored. In addition, the inconvenience of manually selecting the targets of the slideshow is eliminated.

In addition, because the targets of the slideshow are selected according to the setting conditions, newly added pictures may be automatically added to the slideshow.

1A to 1B are views showing an example of constructing a photo slideshow according to the prior art;
2 is tag information added to content according to an embodiment of the present invention;
3A to 3B are diagrams for configuring face tag information according to an exemplary embodiment of the present invention;
4 is a diagram for configuring event tag information according to a first embodiment of the present invention;
5 is a diagram for configuring event tag information according to a second embodiment of the present invention;
6 is a diagram for configuring event tag information according to a third embodiment of the present invention;
7 (a) to 7 (d) are diagrams for determining setting conditions of a photo slideshow according to the first embodiment of the present invention;
8 (a) to 8 (b) are diagrams for determining setting conditions of a photo slideshow according to a second embodiment of the present invention;
9 (a) to 9 (b) are diagrams showing the object addition of a slide show after the slide show configuration according to the embodiment of the present invention;
10 is a flowchart for constructing a photo slideshow of an electronic device according to a first embodiment of the present disclosure;
11 is a flowchart for constructing a photo slideshow of an electronic device according to a second embodiment of the present disclosure;
12 is a view showing an example of a sequence in which a subject of a photo slideshow is displayed according to an embodiment of the present invention;
13 is a view showing an example of deleting an item from a slide playlist according to an embodiment of the present invention;
14 is a flowchart for deleting an object of a photo slideshow according to an embodiment of the present invention; And
15 is a configuration diagram of an electronic device according to an embodiment of the present disclosure.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. In the following description of the present invention, detailed descriptions of related well-known functions or configurations will be omitted if it is determined that the detailed description of the present invention may unnecessarily obscure the subject matter of the present invention. Terms to be described later are terms defined in consideration of functions in the present invention, and may be changed according to intentions or customs of users or operators. Therefore, the definition should be made based on the contents throughout the specification.

Hereinafter, the present invention will be described with respect to a method and apparatus for playing content in an electronic device.

In particular, the present invention can conditionally set the content to be played by using the content's metadata (or tag information) and recognition technology, and can classify and play the content accurately according to its properties. In addition, newly added content can be automatically updated and played based on the setting conditions.

Although a picture or an image is used as an example in the present invention, the present invention is not limited to pictures or images, and the definition of content may be extended to pictures, videos, music, games, applications, and the like. For example, the present invention is not limited to a simple photo slideshow; it can also automatically update a music playlist according to a condition and can be used for playing and temporarily storing content.

2 illustrates tag information added to content according to an exemplary embodiment of the present invention.

Referring to FIG. 2, a face tag 201, a location tag 202, a weather tag 203, an event tag 204, a time/season tag 205, and the like may be added to content such as a photo. The present invention is not limited to this tag information, and tag information may be added or removed in various implementations.

A tag is a keyword or classification assigned as metadata to a piece of information. A single piece of information can be tagged with multiple tags to show that it is related to other information in various ways, so that the information can be easily retrieved or associated with other data.

The face tag 201 is information on a human face included in the picture, the location tag 202 is information on the place where the picture was taken, the weather tag 203 is information on the weather when the picture was taken, the event tag 204 is event information related to the picture, and the time/season tag 205 is information on the time and season when the picture was captured.

Meanwhile, tag information added to a picture may be entered by the user or received from an external device (e.g., a server or a GPS receiver). For example, the location tag 202 may be GPS coordinates received from a GPS receiver or an address converted from those coordinates (e.g., longitude 37.493445, latitude 127.022213 → Seocho 1-dong). The weather may be classified as Sunny, Cloudy, Rainy, Snowy, Windy, or Clear, and the weather tag 203 may be selected by the user from this weather list or received from a server that provides weather information. The time/season tag 205 may include spring, summer, autumn, winter, the months January through December, or morning, afternoon, night, and time of day.
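A record of this tag information might look like the following sketch; the field names and the weather-category check are illustrative assumptions, not the patent's actual data format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative record for the tag information of FIG. 2; field names are
# assumptions, not part of the disclosed implementation.
@dataclass
class PhotoTags:
    faces: List[str] = field(default_factory=list)   # face tag 201
    location: Optional[str] = None                   # location tag 202
    weather: Optional[str] = None                    # weather tag 203
    event: Optional[str] = None                      # event tag 204
    time_season: Optional[str] = None                # time/season tag 205

# The weather categories listed in the description.
WEATHER_CHOICES = {"Sunny", "Cloudy", "Rainy", "Snowy", "Windy", "Clear"}

def set_weather(tags: PhotoTags, value: str) -> PhotoTags:
    """Accept a weather tag only if it is one of the listed categories."""
    if value not in WEATHER_CHOICES:
        raise ValueError(f"unknown weather category: {value}")
    tags.weather = value
    return tags

photo = PhotoTags(faces=["Jane", "Tom"], location="Seocho 1-dong")
set_weather(photo, "Rainy")
```

Keeping each tag in its own field makes the per-item condition menus of FIG. 8 straightforward to populate.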

The face tag 201 is information about a person's face included in a picture and indicates who the picture is related to. A face image extracted from the photo through face recognition may be used as a face tag. In another implementation, a first face image recognized from the photo may be compared with a previously stored second face image, and when the faces match, the stored second face image may be used as the face tag. In this case, the face tag may further include the person's name along with the face image, and the face tag may be determined by the combination of the face image and the person's name.

According to another implementation, the face tag may be determined by user input. For example, a face image and a person's name may be set as the face tag of a picture by user input.

The event tag 204 is information corresponding to an event related to a schedule, an event derived from photo scene analysis, or an event (a comment, a description of the photo, etc.) associated with a photo uploaded to a Social Network Site (SNS).

3A to 3B illustrate an example of configuring a face tag according to an exemplary embodiment of the present invention.

Referring to FIG. 3A, when there is a picture including Katie, Tom, and a baby, Katie, Tom, and the baby can be added as the face tags 201 of the picture. The face image used as a face tag may be an image extracted from the picture or a previously stored image. Depending on the implementation, a face tag 201 may be added by user input regardless of the human faces included in the picture. For example, the face images 302 and 312 may be selected from a list of pre-stored human face images and set as the face tags of the picture in (a). Meanwhile, either all human faces included in the picture or only some of them may be used as face tags.

Referring to FIG. 3B, Jane (331), James (330), and Chris (332) may be grouped into a family group. Alternatively, A (340), B (341), and C (342) may be grouped into a peer group 360. This allows a family or a group of colleagues, rather than an individual face, to be used as a setting condition.

4 illustrates an example of configuring event tag information according to a first embodiment of the present invention.

Referring to FIG. 4, when there is a company workshop schedule 410 on July 26, 2012, the schedule manager 400 provides information indicating that a picture taken on that date is associated with the company workshop.

For example, the event tag of a picture taken on July 26, 2012 includes the company workshop information, or pictures related to the company workshop taken on July 26, 2012 are automatically displayed one by one.

5 illustrates an example of configuring event tag information according to a second embodiment of the present invention.

Referring to FIG. 5, an event tag may be added by recognizing the overall color of a picture to identify the scene, or through an object appearing in the picture. For example, if the overall color of a captured picture is green (500) or an object in the picture is recognized as a ball (510), the scene may be recognized as a soccer field and tagged with soccer.

6 illustrates an example of configuring event tag information according to a third embodiment of the present invention.

Referring to FIG. 6, in the case of a picture downloaded from an SNS (Social Network Site), an event name may be suggested by combining the description and comments entered when the picture was uploaded.

For example, based on the photo description 602 that Steve entered when uploading the photo and the comments 604 posted by others, the keywords "baseball" and "Suwon" may be extracted, and "Suwon Baseball" may be added as an event tag.
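The keyword extraction suggested in FIG. 6 could be approximated by counting recurring words across the description and comments; a crude frequency-based sketch (the stopword list and function names are assumptions, and a real system would use language-specific analysis):

```python
import re
from collections import Counter

# Words too generic to be event keywords (an assumed, minimal list).
STOPWORDS = {"the", "a", "an", "in", "at", "is", "was",
             "it", "so", "and", "to", "that", "this"}

def suggest_event_tag(description, comments, top_n=2):
    """Return an event-tag suggestion from the most frequent keywords."""
    text = " ".join([description] + comments).lower()
    words = re.findall(r"[a-z]+", text)
    counts = Counter(w for w in words if w not in STOPWORDS)
    keywords = [w for w, _ in counts.most_common(top_n)]
    return " ".join(k.capitalize() for k in keywords)

tag = suggest_event_tag(
    "Great baseball game in Suwon today",
    ["That baseball stadium in Suwon looks fun!", "Love Suwon baseball"],
)
# "baseball" and "Suwon" recur, so they become the suggested event tag
```

Words that appear in both the description and the comments dominate the count, which mirrors how the repeated keywords "baseball" and "Suwon" surface in the example.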

7 (a) to 7 (d) show examples of determining setting conditions of a photo slideshow according to the first embodiment of the present invention.

FIG. 7A shows all pictures, or specific pictures, in the memory area; when the user selects one of the displayed pictures, the tag information included in the selected picture is displayed. For example, when the picture 700 is selected, the weather tag (Sunny), location tag (Paris), event tag (Travel), and time/season tag (Night) included in the picture 700 are displayed.

FIG. 7B shows the tag information displayed when the user selects another of the displayed pictures. For example, when the picture 702 is selected, the weather tag (Rainy), location tag (Home), event tag (Daily), and time/season tag included in the picture 702 are displayed. At this point, the tag information of both the previous picture 700 and the picture 702 is displayed.

FIG. 7C shows the tag information (Night) common to the first picture of FIG. 7A and the second picture of FIG. 7B when the high-accuracy check box is selected. Here, "high accuracy" means that the tag information of the plurality of pictures is combined with an AND condition. Conversely, when the high-accuracy check box is not selected, the tag information of the plurality of pictures is combined with an OR condition (see FIG. 7B).
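The "high accuracy" option amounts to taking the intersection of the selected pictures' tag sets rather than their union; a minimal sketch, where the data layout and names are assumptions rather than the patent's implementation:

```python
# Combining the tag information of several selected pictures into one
# setting condition, as in FIGS. 7A to 7C. "High accuracy" applies an
# AND condition (set intersection); otherwise an OR condition (union).
def combine_tags(selected_photos, high_accuracy=False):
    tag_sets = [set(p["tags"]) for p in selected_photos]
    if not tag_sets:
        return set()
    if high_accuracy:                       # AND condition
        return set.intersection(*tag_sets)
    return set.union(*tag_sets)             # OR condition

# Tag values from the FIG. 7 example.
photo_700 = {"tags": {"Sunny", "Paris", "Travel", "Night"}}
photo_702 = {"tags": {"Jane", "Tom", "Rainy", "Home", "Daily", "Night"}}

and_condition = combine_tags([photo_700, photo_702], high_accuracy=True)
or_condition = combine_tags([photo_700, photo_702])
# and_condition -> {"Night"}, the only common tag, as in FIG. 7C
```

With the check box cleared, any photo sharing even one of the nine listed tags would qualify; with it set, only Night photos do.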

FIG. 7D shows other pictures that include at least one item of the tag information of the first picture of FIG. 7A or the second picture of FIG. 7B, together with their tag information.

For example, the first photo 700 has a weather tag (Sunny), a location tag (Paris), an event tag (Travel), and a time/season tag (Night), and the second photo 702 has face tags (Jane, Tom), a weather tag (Rainy), a location tag (Home), an event tag (Daily), and a time/season tag (Night). The common tag information of the first and second photos is the time/season tag (Night). A third photo, which includes the weather tag (Sunny) of the first photo 700, has face tags (Alice, Bill), a weather tag (Sunny), a location tag (Versailles), an event tag (Picnic), and a time/season tag (Day). A fourth photo, which includes the event tag (Travel) of the first photo 700, has a face tag (James), a weather tag (Snowy), a location tag (Yosemite), an event tag (Travel), and a time/season tag (Day).

Meanwhile, although not shown, after the pictures satisfying the setting conditions are filtered, the pictures that will compose the slideshow are displayed in thumbnail form.

8 (a) to 8 (b) show an example of determining setting conditions of a photo slideshow according to the second embodiment of the present invention.

Referring to FIG. 8A, the contents of each tag item are displayed classified by item, and the contents of some tag items are chosen as setting condition values by user input (e.g., a touch input). For example, the face tag item includes Alice, Tom, Jane, John, and Chris, and Alice and Tom are selected as setting conditions; the location tag item includes Church, Paris, and Home, and Paris is selected as a setting condition; the event tag item includes Picnic, Project, and Party, and Picnic is selected as a setting condition.

Here, the contents of each tag item are either contents extracted from the photos in the memory area or predefined contents.

Referring to FIG. 8B, the face tag setting condition value may be set through the face tag selection menu 800. In the face tag selection menu, Alice, James, Jane, Julia, Tay, Yumi, Grace, and Jim are displayed, and Alice and James are selected as the face tag setting conditions by a touch input. The contents of the face tag are either contents extracted from the photos in the memory area or predefined contents.

The location tag setting condition value may be set through the location tag selection menu 810. In the location tag selection menu, Times Square, Gangnam, Binben, and Home are displayed, and Gangnam and Home are selected as the location tag setting conditions. The contents of the location tag are either contents extracted from the photos in the memory area or predefined contents.

The weather tag setting condition value may be set through the weather tag selection menu 820. In the weather tag selection menu, Sunny, Cloudy, Rainy, and Snowy are displayed, and Sunny is selected as the weather tag setting condition. The contents of the weather tag are either contents extracted from the photos in the memory area or predefined contents.

The event tag setting condition value may be set through the event tag selection menu 830. In the event tag selection menu, Workshop, Picnic, Travel, Sports, Seminar, etc. are displayed, and Picnic is selected as the event tag setting condition. The contents of the event tag are either contents extracted from the photos in the memory area or predefined contents.

The time/season tag setting condition value may be set through the time/season tag selection menu 840. In the time/season tag selection menu, Spring, Summer, Fall, and Winter are displayed, and Spring is selected as the time/season tag setting condition. The contents of the time/season tag are either contents extracted from the photos in the memory area or predefined contents.

9 (a) to 9 (b) show an example of adding a target of a slide show after the slide show configuration according to an embodiment of the present invention.

FIG. 9A shows the current setting condition values for each tag item. For example, "Alice" 902 and "Tom" 902 are selected as the face tag setting condition values, and "Picnic" 904 is selected as the event tag setting condition value.

FIG. 9B shows an example in which additional pictures satisfying the current setting condition values ("Alice" 902, "Tom" 902, and "Picnic" 904) are automatically included in the group of photos composing the slideshow.

For example, a newly added picture is included in the picture group and automatically added to the slideshow when it includes "Alice" 902 and "Tom" 902 and its scene is "Picnic". In this case, the additional picture may be a picnic picture including only "Alice" 902 and "Tom" 902, or a picnic picture including "Alice" 902, "Tom" 902, and others.
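The automatic-update behavior of FIG. 9B can be sketched as a membership check on each newly added picture's tags; the data layout and names here are assumptions for illustration.

```python
# A newly captured photo joins the slideshow group if it carries every
# currently selected condition value; extra people in the photo are
# allowed, matching the FIG. 9B example.
def matches_conditions(photo_tags, condition_values):
    """True if the photo contains all selected condition values (AND)."""
    return condition_values <= set(photo_tags)

conditions = {"Alice", "Tom", "Picnic"}   # current setting condition values
slideshow = []

new_photos = [
    {"id": "new1", "tags": ["Alice", "Tom", "Picnic"]},          # matches
    {"id": "new2", "tags": ["Alice", "Tom", "Picnic", "Jane"]},  # matches
    {"id": "new3", "tags": ["Alice", "Picnic"]},                 # no Tom
]
for photo in new_photos:
    if matches_conditions(photo["tags"], conditions):
        slideshow.append(photo["id"])     # automatically added
```

Because the check is a subset test rather than equality, a picnic photo of Alice, Tom, and a third person still qualifies, as the description requires.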

10 is a flowchart illustrating a configuration of a photo slideshow of an electronic device according to a first embodiment of the present disclosure.

Referring to FIG. 10, in operation 1000, the electronic device extracts tag information from images stored in a specific memory area or an entire memory area.

Then, in step 1002, the electronic device classifies the extracted tag information by item and generates the items corresponding to condition values. Preferably, the information included in the face tag, location tag, event tag, weather tag, and time/season tag is extracted from each picture.

In step 1004, the electronic device displays the items corresponding to condition values. For example, as shown in FIGS. 8A and 8B, the extracted tag information is displayed for each item (e.g., face tag, location tag, event tag, weather tag, time/season tag).

Thereafter, in step 1006, the electronic device selects a setting condition value for each displayed tag item. For example, Alice, Tom, Jane, John, and Chris are displayed in the face tag item and Alice and Tom are selected as setting conditions; Church, Paris, and Home are displayed in the location tag item and Paris is selected as a setting condition; Picnic, Project, and Party are displayed in the event tag item and Picnic is selected as a setting condition.

In step 1008, when the electronic device filters the pictures with the first condition, it proceeds to step 1010 and determines the pictures that satisfy the first condition; when it filters the pictures with the second condition, it proceeds to step 1012 and determines the pictures that satisfy the second condition. The first condition filters the setting condition values selected for each tag item with an AND condition, and the second condition filters them with an OR condition. Depending on the implementation, the setting condition values selected for each tag item may be filtered with a combination of AND and OR.
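The branch at steps 1008 to 1012 amounts to filtering the stored pictures with either an AND or an OR over the selected condition values; a minimal sketch under an assumed data layout:

```python
# Filtering pictures by the per-item setting condition values of FIG. 10:
# the first condition requires every selected value (AND), the second
# requires at least one of them (OR).
def filter_photos(photos, condition_values, use_and=True):
    result = []
    for photo in photos:
        tags = set(photo["tags"])
        if use_and:
            ok = condition_values <= tags            # first condition: AND
        else:
            ok = bool(condition_values & tags)       # second condition: OR
        if ok:
            result.append(photo["id"])
    return result

photos = [
    {"id": 1, "tags": ["Alice", "Tom", "Paris", "Picnic"]},
    {"id": 2, "tags": ["Alice", "Paris"]},
    {"id": 3, "tags": ["Jane", "Home"]},
]
conds = {"Alice", "Tom", "Paris", "Picnic"}
first = filter_photos(photos, conds, use_and=True)    # only photo 1
second = filter_photos(photos, conds, use_and=False)  # photos 1 and 2
```

A mixed AND/OR policy, as the description allows, would replace the two branches with a per-item rule.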

In step 1014, the electronic device displays the pictures satisfying the first condition or the second condition one by one in the predefined order; in other words, it performs the slideshow using the determined pictures. Here, the pictures composing the slideshow may be displayed in order of recency or randomly.

Thereafter, when a picture is captured in step 1016, the electronic device returns to step 1008 to determine whether the new picture satisfies the first condition or the second condition, and decides whether to add it to the photo slideshow accordingly.

The procedure of the present invention then ends.

11 is a flowchart illustrating a configuration of a photo slideshow of an electronic device according to a second embodiment of the present disclosure.

Referring to FIG. 11, in operation 1100, the electronic device displays images stored in a specific memory area or an entire memory area.

In step 1102, the electronic device selects at least one image from the displayed images to determine the setting condition value.

In step 1104, the electronic device extracts and displays the tag information of the selected at least one image (see FIGS. 7A to 7C). Preferably, the information included in the face tag, location tag, event tag, weather tag, and time/season tag is extracted from each image. That is, all tag information included in the selected at least one image is displayed.

In step 1108, when the electronic device filters the images with the first condition, it proceeds to step 1110 and determines the images satisfying the first condition; when it filters the images with the second condition, it proceeds to step 1112 and determines the images satisfying the second condition. The first condition filters with an AND condition over the tag items included in the selected at least one image, and the second condition filters with an OR condition over those tag items. Depending on the implementation, the filtering may use a combination of AND and OR over the tag items included in the selected at least one image.

In operation 1114, the electronic device displays the pictures satisfying the first condition or the second condition one by one in a predefined order. In other words, a slideshow is performed using the determined photos. Here, the images constituting the slideshow may be displayed in most-recent-first order or randomly.

Then, when a picture is captured in step 1116, the electronic device returns to step 1108 to determine whether the new picture satisfies the first condition or the second condition, and decides according to that condition whether to add the picture to the photo slideshow.

The procedure of the present invention is then terminated.

FIG. 12 illustrates an example of a procedure of displaying the targets of a photo slideshow according to an exemplary embodiment of the present invention.

Referring to FIG. 12, there are pictures 1200 stored in the entire memory area or a specific memory area, and pictures 1202 newly added to the memory area by taking a picture; pictures 1 to 9 (1204) and pictures 16 to 21 (1206) satisfy the corresponding setting condition value.

First, before the additional pictures 1202 are generated, pictures 1 to 9 (1204) of the pre-stored pictures 1200 satisfy the setting condition value, so it is assumed that the added pictures 1202 are generated while the slideshow is being reproduced in most-recent-first order in the LCD ON state 1208, i.e., picture 9 → picture 8 → picture 7. At this time, on the transition to the LCD OFF state 1210, slide playback is stopped. Then, on the transition from the LCD OFF state 1210 to the LCD ON state 1212, the slideshow is played back in most-recent-first order: picture 21 → picture 20 → picture 19 → picture 18 → picture 17 → picture 16 → picture 9 → picture 8 → picture 7.

According to another implementation, when transitioning from the LCD OFF state 1210 to the LCD ON state 1212, playback may resume from the point where the slideshow was previously stopped. For example, if the slideshow was stopped at picture 7, playback may continue in the order picture 6 → picture 5 → picture 4 → picture 3 → picture 2 → picture 1 → picture 21 → picture 20 → picture 19 → picture 18 → picture 17 → picture 16.

Likewise, when the transition from the LCD ON state 1212 to the LCD OFF state 1214 stops slideshow playback at picture 7, and the device again transitions from the LCD OFF state 1214 to the LCD ON state 1216, playback may resume from the point where the slideshow was stopped: for example, picture 6 → picture 5 → picture 4 → picture 3 → picture 2 → picture 1 → picture 21 → picture 20 → picture 19 → picture 18 → picture 17 → picture 16.

The present invention is not limited to most-recent-first slideshow playback, and the slideshow playback order may be determined in various ways. For example, the pictures may be played in alphabetical order or in reverse order.

In the present invention, the slideshow is stopped in the LCD OFF state, but in another embodiment, the slideshow may continue in the LCD OFF state.
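The behavior described above, stopping on LCD OFF and then either restarting in most-recent-first order or resuming from the stop point, can be sketched as a small state machine. The class and method names are assumptions for illustration:

```python
# Minimal sketch of the two resume policies described above: restart in
# most-recent-first order, or continue from where the slideshow stopped.
class Slideshow:
    def __init__(self, photos):
        # photos are assumed newest-first, e.g. [21, 20, ..., 16, 9, ..., 1]
        self.photos = list(photos)
        self.index = 0
        self.playing = True

    def next_photo(self):
        if not self.playing:
            return None
        photo = self.photos[self.index]
        self.index = (self.index + 1) % len(self.photos)  # wrap around
        return photo

    def lcd_off(self):
        self.playing = False  # playback stops while the screen is off

    def lcd_on(self, resume=True):
        self.playing = True
        if not resume:
            self.index = 0  # restart from the newest photo

show = Slideshow([9, 8, 7, 6])
assert [show.next_photo() for _ in range(3)] == [9, 8, 7]
show.lcd_off()
assert show.next_photo() is None
show.lcd_on(resume=True)   # continues with the next photo, 6
assert show.next_photo() == 6
```

The alternative embodiment, continuing playback in the LCD OFF state, would simply omit the `lcd_off` transition.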

FIG. 13 illustrates an example of deleting an item from a slide playlist according to an embodiment of the present invention.

Referring to FIG. 13, an example is shown in which, while six pictures are reproduced one by one according to the setting condition, deleting a picture reduces the weights of the other pictures associated with the tag information included in the deleted picture. Here, when the weight of a picture that has not been deleted falls below the threshold, it is automatically removed from the slideshow playlist. Assume that the initial reproduction weight of each picture starts at 10. Depending on the implementation, each tag item of a picture may have its own weight; for example, the reproduction weight value of the face tag is 10, that of the location tag is 10, that of the event tag is 10, and that of the time tag is 10. If a picture has no corresponding tag information, the reproduction weight of that tag is zero.

For example, when the slideshow is played and the user selects the third photo 1300 and then pushes the photo in a corresponding direction (hereinafter referred to as a flick operation), the third photo 1300 is deleted from the slideshow playlist (1301). The reproduction weight of every photo that includes at least one item of the tag information of the third photo 1300 (James, Snowy, Yosemite, Travel, Day) is then reduced. That is, the reproduction weight of the first picture, which includes Travel, becomes 9 (= 10 - 1); the reproduction weight of the fourth picture, which includes Day, becomes 9 (= 10 - 1); and the reproduction weights of the fifth and sixth pictures, which include Yosemite, Travel, and Day, become 7 (= 10 - 3). Thereafter, when the user selects the fifth photo 1310 and pushes it in the corresponding direction, the fifth photo 1310, whose reproduction weight is 7, is deleted from the slideshow playlist (1302), and the reproduction weight of every picture that includes at least one item of the tag information of the fifth photo 1310 (Sunny, Yosemite, Travel, Day) is reduced. That is, the reproduction weight of the first picture, which includes Travel, becomes 8 (= 9 - 1); the reproduction weight of the fourth picture, which includes Day, becomes 8 (= 9 - 1); and the reproduction weight of the sixth picture, which includes Sunny, Yosemite, Travel, and Day, becomes 3 (= 7 - 4).

As described above, when similar pictures are deleted based on the reproduction weights of the pictures, a remaining picture that matches the setting condition values of the deleted pictures is eventually removed from the slideshow playlist automatically. That is, a picture starts with a predetermined initial weight, and each time another picture with matching tag conditions is deleted, its reproduction weight decreases; as deletions accumulate and the weight approaches zero, the picture is automatically deleted.
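The weight-propagation rule of FIG. 13 can be sketched as follows. The threshold, the tag lists, and the function names are illustrative assumptions chosen to mirror the worked example above:

```python
# Sketch of the FIG. 13 rule: deleting a photo lowers the playback
# weight of every remaining photo by the number of tag values it shares
# with the deleted one; a photo whose weight falls below the threshold
# is dropped automatically. Values are illustrative.
THRESHOLD = 1

def delete_photo(playlist, weights, victim):
    victim_tags = set(playlist.pop(victim))
    for name, tags in list(playlist.items()):
        overlap = len(victim_tags & set(tags))
        weights[name] -= overlap
        if weights[name] < THRESHOLD:
            del playlist[name]  # auto-delete when weight is exhausted
    del weights[victim]

playlist = {
    "photo1": ["Travel"],
    "photo3": ["James", "Snowy", "Yosemite", "Travel", "Day"],
    "photo5": ["Sunny", "Yosemite", "Travel", "Day"],
}
weights = {"photo1": 10, "photo3": 10, "photo5": 10}
delete_photo(playlist, weights, "photo3")
print(weights)  # photo1: 9 (shares Travel), photo5: 7 (shares 3 tags)
```

These decrements match the 10 → 9 and 10 → 7 transitions in the worked example.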

As described above, when only family pictures are to be displayed on the background screen, checking the family face tags causes only the pictures in which a family face is recognized to be played on the background screen. In addition, even when a newly taken family picture exists, it is updated on the background screen, so that the background screen may be provided in a live form that reproduces pictures from the previous ones up to the most recently taken picture.

In addition, the present invention is not limited to playing a photo slideshow; when a singer name is selected as a setting condition in the music player, a new playlist may be constructed by searching the music playlist only for songs of singer A that satisfy the setting condition. If the live update condition is set, newly stored or input content, or existing content acquired from the server, by singer A that satisfies the setting condition may be automatically added to the playlist.
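The live-update idea applied to a music playlist can be sketched as follows; the field names and class interface are assumptions, since the patent specifies no data model for the music player:

```python
# Hedged sketch: a filter keeps only tracks by the chosen artist, and
# newly added tracks that satisfy the condition join the playlist
# automatically (the "live update" condition described above).
class LivePlaylist:
    def __init__(self, library, artist):
        self.artist = artist
        self.tracks = [t for t in library if t["artist"] == artist]

    def on_new_track(self, track):
        if track["artist"] == self.artist:
            self.tracks.append(track)  # live update

library = [{"title": "Song1", "artist": "A"}, {"title": "Song2", "artist": "B"}]
pl = LivePlaylist(library, "A")
pl.on_new_track({"title": "Song3", "artist": "A"})
pl.on_new_track({"title": "Song4", "artist": "B"})
print([t["title"] for t in pl.tracks])  # ['Song1', 'Song3']
```

The same pattern applies to the photo slideshow: a newly captured picture is tested against the setting condition and added only if it matches.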

FIG. 14 is a flowchart for deleting a target of a photo slideshow according to an exemplary embodiment of the present invention.

Referring to FIG. 14, in operation 1400, the electronic device deletes an image from the slide playlist in response to a gesture of the user, that is, a gesture of selecting the image and pushing it in the corresponding direction.

In operation 1402, the electronic device searches for other images including the tag information of the image deleted from the slide playlist, and in operation 1404 decreases the reproduction weights of the other images found. For example, as shown in FIG. 13, the reproduction weight of a photo may be reduced according to the number of matching tag items.

Thereafter, if deletion is not finished, the electronic device returns to step 1400; otherwise, the procedure of the present invention is terminated.

Although not shown, regardless of the user's gesture, a picture is automatically deleted from the slide playlist when its playback weight falls below the threshold.

FIG. 15 is a block diagram illustrating an electronic device according to an embodiment of the present disclosure.

The electronic device may be a portable electronic device, such as a portable terminal, a mobile terminal, a mobile pad, a media player, a tablet computer, a handheld computer, or a personal digital assistant (PDA). It may also be any portable electronic device that combines two or more of these devices.

Referring to FIG. 15, the electronic device includes a controller 1500, a speaker/microphone 1510, a camera 1520, a GPS receiver 1530, an RF processor 1540, a sensor module 1550, a touch screen 1560, a touch screen controller 1565, and an expansion memory 1570.

The controller 1500 can include an interface 1501, one or more processors 1502, 1503, and an internal memory 1504. In some cases, the entire controller 1500 may be referred to as a processor. The interface 1501, the application processor 1502, the communication processor 1503, and the internal memory 1504 may be separate components or integrated into one or more integrated circuits.

The application processor 1502 executes various software programs to perform various functions for the electronic device, and the communication processor 1503 performs processing and control for voice communication and data communication. Further, in addition to these conventional functions, the processors 1502 and 1503 execute specific software modules (instruction sets) stored in the expansion memory 1570 or the internal memory 1504 and perform the various specific functions corresponding to those modules. That is, the processors 1502 and 1503 work with the software modules stored in the expansion memory 1570 or the internal memory 1504 to perform the content reproduction and deletion methods of the present invention.

For example, according to the first embodiment, the application processor 1502 extracts tag information from images stored in a specific memory area or the entire memory area, classifies the extracted tag information by item (e.g., face tag, location tag, event tag, weather tag, time/seasonal tag) to create the items corresponding to the condition values, displays the items corresponding to the condition values if necessary, and receives selection of the setting condition values by the displayed tag items. When the pictures are filtered by the first condition, the pictures satisfying the first condition are determined; when the pictures are filtered by the second condition, the pictures satisfying the second condition are determined. The first condition filters the setting condition values selected for each tag item with an AND condition, and the second condition filters them with an OR condition. Depending on the implementation, the setting condition values selected for each tag item may be filtered with a combination of AND and OR.

In addition, the application processor 1502 displays the pictures that satisfy the first condition or the second condition one by one in a predefined order, and when a picture is taken (i.e., when a picture is newly added or stored), determines whether the new picture satisfies the first condition or the second condition and whether to add it to the photo slideshow according to that condition.

Meanwhile, according to the second embodiment, the application processor 1502 displays the images stored in a specific memory area or the entire memory area, selects at least one image from the displayed images in order to determine the setting condition values, extracts and displays the tag information corresponding to the selected at least one image, and, when filtering the images by the first condition, determines the pictures that satisfy the first condition, while, when filtering the pictures by the second condition, determines the images that satisfy the second condition. The first condition filters the setting condition values of the tag items included in the selected at least one image with an AND condition, and the second condition filters them with an OR condition. Depending on the implementation, the setting condition values may be filtered with a combination of AND and OR for each tag item included in the selected at least one image.

In addition, the application processor 1502 displays the pictures satisfying the first condition or the second condition one by one in a predefined order, and when a picture is newly added or stored, determines whether the new picture satisfies the first condition or the second condition and decides whether to add it to the photo slideshow according to that condition.

Meanwhile, according to the method for deleting a target of the photo slideshow, the application processor 1502 deletes an image from the slide playlist in response to a gesture of the user, that is, a gesture of selecting the image and pushing it in the corresponding direction, searches for other images including the tag information of the deleted image, and reduces the playback weights of the found images according to the number of matching tag items. Regardless of the user's gesture, a photo is automatically deleted from the slide playlist when its playback weight falls below the threshold.

Meanwhile, another processor (not shown) may include one or more data processors, image processors, or codecs. The data processor, image processor, or codec may be configured separately, or may be composed of several processors that perform different functions. The interface 1501 connects to the touch screen controller 1565 and the expansion memory 1570 of the electronic device.

The sensor module 1550 may be coupled to the interface 1501 to enable various functions. For example, a motion sensor and an optical sensor may be coupled to the interface 1501 to enable motion detection and light detection from the outside of the electronic device, respectively. In addition, other sensors, such as a positioning system, a temperature sensor, or a biometric sensor, may be connected to the interface 1501 to perform related functions.

The camera 1520 may be combined with the sensor module 1550 through the interface 1501 to perform camera functions such as recording of pictures and video clips.

The RF processor 1540 performs communication functions. For example, under the control of the communication processor 1503, it converts an RF signal into a baseband signal and provides it to the communication processor 1503, or converts a baseband signal from the communication processor 1503 into an RF signal and transmits it. Here, the communication processor 1503 processes baseband signals according to various communication schemes. For example, the communication scheme may include, but is not limited to, Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE), Orthogonal Frequency Division Multiple Access (OFDMA), Wireless Fidelity (Wi-Fi), WiMAX, and/or Bluetooth.

The speaker/microphone 1510 may be responsible for the input and output of audio streams, such as voice recognition, voice replication, digital recording, and telephony functions. That is, the speaker/microphone 1510 converts a voice signal into an electric signal or converts an electric signal into a voice signal. Although not shown, detachable earphones, headphones, or a headset may be connected to the electronic device through an external port.

The touch screen controller 1565 may be coupled to the touch screen 1560. The touch screen 1560 and the touch screen controller 1565 may detect contact and movement or interruption thereof using any multi-touch sensing technology, including, but not limited to, capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrangements or other elements for determining one or more contact points with the touch screen 1560.

The touch screen 1560 provides an input/output interface between the electronic device and the user. That is, the touch screen 1560 transmits a user's touch input to the electronic device and serves as a medium for showing the output of the electronic device to the user, presenting visual output in the form of text, graphics, video, and combinations thereof.

Various displays may be used for the touch screen 1560, for example, but not limited to, a liquid crystal display (LCD), light emitting diode (LED), light emitting polymer display (LPD), organic light emitting diode (OLED), active matrix organic light emitting diode (AMOLED), or flexible LED (FLED).

The GPS receiver 1530 converts signals received from satellites into information such as position, speed, and time. For example, the distance between a satellite and the GPS receiver is calculated by multiplying the speed of light by the signal arrival time, and the location of the electronic device is measured on the known triangulation principle by obtaining the exact positions and distances of three satellites.
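The distance computation described above can be written out directly: each satellite range is the signal travel time multiplied by the speed of light, and position then follows from intersecting the resulting spheres. Only the range step is sketched here; the delay value is illustrative:

```python
# The range step of the GPS computation described above: distance from
# satellite to receiver is the signal travel time times the speed of
# light. The full position fix intersects three such spheres.
C = 299_792_458.0  # speed of light, m/s

def pseudorange(travel_time_s):
    """Distance from satellite to receiver for a given signal delay."""
    return C * travel_time_s

# A signal delay of ~67 ms corresponds roughly to GPS orbital altitude.
d = pseudorange(0.067)
print(round(d / 1000))  # distance in km
```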

The expansion memory 1570 or the internal memory 1504 may include fast random access memory and/or nonvolatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR).

The expansion memory 1570 or the internal memory 1504 stores software. Software components include an operating system software module, a communication software module, a graphics software module, a user interface software module, an MPEG module, a camera software module, one or more application software modules, and the like. In addition, since a module, which is a software component, may be represented by a set of instructions, a module may also be referred to as an instruction set or expressed as a program.

Operating system software includes several software components that control general system operations. Controlling such general system operation means, for example, memory management and control, storage hardware (device) control and management, power control and management, and the like. Such operating system software also functions to facilitate communication between various hardware (devices) and software components (modules).

The communication software module may enable communication with other electronic devices, such as computers, servers, and / or portable terminals, via the RF processor 1540. The communication software module is configured with a protocol structure corresponding to the communication method.

The graphics software module includes various software components for presenting and displaying graphics on the touchscreen 1560. The term graphics is used to mean text, web pages, icons, digital images, video, animations, and the like.

The user interface software module includes various software components related to the user interface. This includes how the state of the user interface changes or under what conditions the state of the user interface changes.

The camera software module includes camera-related software components that enable camera-related processes and functions. Application modules include a rendering engine, a web browser, e-mail, instant messaging, word processing, keyboard emulation, an address book, a contact list, widgets, digital rights management (DRM), voice recognition, voice replication, position determining functions, location-based services, and the like. The memories 1570 and 1504 may include additional modules (instructions) in addition to the modules described above, or, if necessary, some modules (instructions) may not be used.

In the context of the present invention, the application module includes instructions for playing and deleting the contents of the present invention (see FIGS. 10, 11, and 14).

For example, according to the first embodiment, the application module extracts tag information from images stored in a specific memory area or the entire memory area, classifies the extracted tag information item by item (e.g., face tag, location tag, event tag, weather tag, time/seasonal tag) to create the items corresponding to the condition values, displays the items corresponding to the condition values if necessary, and receives selection of the setting condition values by the displayed tag items. When the pictures are filtered by the first condition, the pictures satisfying the first condition are determined; when the pictures are filtered by the second condition, the pictures satisfying the second condition are determined. The first condition filters the setting condition values selected for each tag item with an AND condition, and the second condition filters them with an OR condition. Depending on the implementation, the setting condition values selected for each tag item may be filtered with a combination of AND and OR.

In addition, the application module displays the pictures satisfying the first condition or the second condition one by one in a predefined order, and when a picture is captured (i.e., when a picture is newly added or stored), determines whether the new picture satisfies the first condition or the second condition and whether to add it to the photo slideshow according to that condition.

Meanwhile, according to the second embodiment, the application module displays the images stored in a specific memory area or the entire memory area, selects at least one image from the displayed images in order to determine the setting condition values, extracts and displays the tag information corresponding to the selected at least one image, and, when filtering the images by the first condition, determines the pictures satisfying the first condition, while, when filtering the pictures by the second condition, determines the images satisfying the second condition. The first condition filters the setting condition values of the tag items included in the selected at least one image with an AND condition, and the second condition filters them with an OR condition. Depending on the implementation, the setting condition values may be filtered with a combination of AND and OR for each tag item included in the selected at least one image.

In addition, the application module displays the pictures that satisfy the first condition or the second condition one by one in a predefined order, determines whether a newly added or stored picture satisfies the first condition or the second condition, and decides whether to add it to the photo slideshow according to that condition.

Meanwhile, according to the method for deleting a target of the photo slideshow, the application module deletes an image from the slide playlist in response to a gesture of the user, that is, a gesture of selecting the image and pushing it in the corresponding direction, searches for other images including the tag information of the deleted image, and decreases the playback weights of the found images according to the number of matching tag items. Regardless of the user's gesture, a photo is automatically deleted from the slide playlist when its playback weight falls below the threshold.

Methods according to the embodiments described in the claims and / or specification of the present invention may be implemented in the form of hardware, software, or a combination of hardware and software.

When implemented in software, a computer-readable storage medium for storing one or more programs (software modules) may be provided. One or more programs stored in a computer readable storage medium are configured for execution by one or more processors in an electronic device. One or more programs include instructions that cause an electronic device to execute methods in accordance with embodiments described in the claims and / or specifications of the present invention.

Such programs (software modules, software) may be stored in random access memory, nonvolatile memory including flash memory, read only memory (ROM), electrically erasable programmable read only memory (EEPROM), a magnetic disc storage device, compact disc ROM (CD-ROM), digital versatile discs (DVDs), other forms of optical storage devices, or a magnetic cassette. They may also be stored in a memory composed of some or all of these in combination. In addition, a plurality of each constituent memory may be included.

In addition, the programs may be stored in an attachable storage device that is accessible through a communication network such as the Internet, an intranet, a local area network (LAN), a wide area network (WAN), or a storage area network (SAN), or a combination thereof. Such a storage device may access the electronic device through an external port.

In addition, a separate storage device on the communication network may connect to the portable electronic device.

Meanwhile, although specific embodiments have been described in the detailed description of the present invention, various modifications are possible without departing from the scope of the present invention. Therefore, the scope of the present invention should not be limited to the described embodiments, but should be determined by the scope of the following claims and their equivalents.

1500: controller, 1510: speaker / microphone,
1520: camera, 1530: GPS receiver,
1540: RF processor, 1550: sensor module
1560: touch screen, 1565: touch screen controller,
1570: extended memory.

Claims (22)

In a portable communication device,
A touch screen;
A processor operatively connected with the touch screen; and
A memory operatively connected with the processor,
wherein the memory stores instructions that, when executed, cause the processor to:
Identify first tag information corresponding to a location where at least one first image of a plurality of images stored in the memory is obtained, second tag information corresponding to an object recognized from at least one second image of the plurality of images, and third tag information corresponding to a specific application used to obtain at least one third image of the plurality of images,
Display a first menu associated with the first tag information, a second menu associated with the second tag information, and a third menu associated with the third tag information through the touch screen,
Receive an input for the first menu, the second menu, or the third menu, and
Display, based at least in part on the selection of the first menu, the second menu, or the third menu, a first user interface corresponding to the first menu, a second user interface corresponding to the second menu, or a third user interface corresponding to the third menu through the touch screen.
The portable communication device of claim 1, wherein the instructions cause the processor to:
Generate the first tag information based at least in part on metadata of the at least one first image.
The portable communication device of claim 1, further comprising a satellite communication receiver for obtaining location information corresponding to the location,
wherein the instructions cause the processor to:
Generate the first tag information based at least in part on the location information obtained using the satellite communication receiver.
The portable communication device of claim 1, further comprising a camera for obtaining an image,
wherein the instructions cause the processor to:
Generate the second tag information based at least in part on an image analysis result of the at least one second image acquired through the camera.
The portable communication device of claim 1, wherein the instructions cause the processor to:
Generate the second tag information based at least in part on a face analysis result for the object.
The portable communication device of claim 5, wherein the instructions cause the processor to:
Generate fourth tag information based at least in part on the received name information for the face, and
Display the fourth tag information in relation to a corresponding image of the at least one second image through the touch screen.
In the operating method of a portable communication device,
Identifying first tag information corresponding to a location where at least one first image of a plurality of images stored in a memory of the portable communication device is obtained, second tag information corresponding to an object recognized from at least one second image of the plurality of images, and third tag information corresponding to a particular application used to obtain at least one third image of the plurality of images;
Displaying a first menu associated with the first tag information, a second menu associated with the second tag information, and a third menu associated with the third tag information through a touch screen of the portable communication device;
Receiving an input for the first menu, the second menu, or the third menu; And
Displaying, based at least in part on the selection of the first menu, the second menu, or the third menu, a first user interface corresponding to the first menu, a second user interface corresponding to the second menu, or a third user interface corresponding to the third menu through the touch screen.
The method of claim 7, further comprising:
Generating the first tag information based at least in part on metadata of the at least one first image.
The method of claim 7, further comprising:
Generating the first tag information based at least in part on location information obtained using a satellite communication receiver included in the portable communication device.
The method of claim 7, further comprising:
Generating the second tag information based at least in part on an image analysis result of the at least one second image obtained through a camera included in the portable communication device.
The method of claim 7, further comprising:
Generating the second tag information based at least in part on a face analysis result for the object.
The method of claim 11, further comprising:
Generating fourth tag information based at least in part on the received name information for the face; and
Displaying the fourth tag information in relation to a corresponding image of the at least one second image through the touch screen.
In a portable communication device,
A touch screen;
A processor operatively connected with the touch screen; and
A memory operatively connected with the processor,
wherein the memory stores instructions that, when executed, cause the processor to:
Identify first tag information tagged in at least one first image among a plurality of images stored in the memory, second tag information tagged in at least one second image among the plurality of images, and third tag information tagged in at least one third image among the plurality of images,
Display, in thumbnail form, a first menu associated with the first tag information, a second menu associated with the second tag information, and a third menu associated with the third tag information through the touch screen,
Receive a selection of the first menu, the second menu, or the third menu, and
Display, based at least in part on the selection of the first menu, the second menu, or the third menu, the at least one first image tagged with the first tag information, the at least one second image tagged with the second tag information, or the at least one third image tagged with the third tag information through the touch screen, wherein
wherein the first tag information corresponds to a location at which the at least one first image was obtained,
the second tag information corresponds to an object recognized from the at least one second image, and
the third tag information corresponds to an application used to obtain the at least one third image.
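Outside the claim language, the claimed grouping of stored images under location, object, and application tag menus can be illustrated with a minimal Python sketch. The record layout and field names (`file`, `location`, `object`, `app`) are hypothetical and not taken from the patent:

```python
from collections import defaultdict

# Hypothetical image records carrying the three claimed kinds of tag information.
images = [
    {"file": "img1.jpg", "location": "Seoul", "object": None, "app": "camera"},
    {"file": "img2.jpg", "location": "Seoul", "object": "face:alice", "app": "camera"},
    {"file": "img3.jpg", "location": None, "object": None, "app": "screenshot"},
]

def build_menus(images):
    """Group image files under (tag kind, tag value) menu keys."""
    menus = defaultdict(list)
    for img in images:
        for kind in ("location", "object", "app"):
            tag = img[kind]
            if tag is not None:
                menus[(kind, tag)].append(img["file"])
    return dict(menus)

menus = build_menus(images)
# Selecting the "Seoul" location menu yields exactly the images tagged with it.
assert menus[("location", "Seoul")] == ["img1.jpg", "img2.jpg"]
```

Each menu key pairs a tag kind with a tag value, so selecting one menu retrieves only the images tagged with that information, as recited in the independent claim.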
The portable communication device of claim 13, wherein the instructions, when executed, further cause the processor to:
display, when the first menu is selected, the at least one first image in the form of a slide show through the touch screen;
display, when the second menu is selected, the at least one second image in the form of a slide show through the touch screen; and
display, when the third menu is selected, the at least one third image in the form of a slide show through the touch screen.
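The slide-show behavior recited above can be sketched, again outside the claim language, as a generator that emits the selected menu's images in order; the `slideshow` function and its `repeat` parameter are hypothetical illustrations:

```python
import itertools

def slideshow(files, repeat=False):
    """Yield image files one at a time in slide-show order; optionally loop."""
    source = itertools.cycle(files) if repeat else iter(files)
    yield from source

# One pass through the selected menu's images.
frames = list(slideshow(["a.jpg", "b.jpg", "c.jpg"]))
assert frames == ["a.jpg", "b.jpg", "c.jpg"]

# A looping slide show restarts from the first image after the last.
looped = list(itertools.islice(slideshow(["a.jpg", "b.jpg"], repeat=True), 5))
assert looped == ["a.jpg", "b.jpg", "a.jpg", "b.jpg", "a.jpg"]
```

A generator keeps the display loop decoupled from the image source, so the same routine serves any of the three tag-based menus.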
The portable communication device of claim 13, wherein the instructions, when executed, further cause the processor to generate the first tag information based at least in part on the at least one first image.
The portable communication device of claim 13, further comprising a satellite communication receiver configured to obtain location information corresponding to the location,
wherein the instructions, when executed, further cause the processor to generate the first tag information based at least in part on the location information obtained using the satellite communication receiver.
The portable communication device of claim 13, wherein the instructions, when executed, further cause the processor to generate the second tag information based at least in part on a face analysis result for the object.
The portable communication device of claim 17, wherein the instructions, when executed, further cause the processor to:
generate fourth tag information based at least in part on received name information for the face; and
display, through the touch screen, the fourth tag information in relation to a corresponding image, among the at least one second image, corresponding to the name information.
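The fourth-tag step, in which received name information for a recognized face is attached to the matching images, can be illustrated with a short hypothetical sketch; the field names and the `tag_faces` helper are assumptions, not the patent's terminology:

```python
def tag_faces(images, name_info):
    """Attach a name (fourth tag) to images whose object tag matches a known face id."""
    for img in images:
        face_id = img.get("object")
        if face_id in name_info:
            img["name"] = name_info[face_id]  # fourth tag information
    return images

imgs = [
    {"file": "img2.jpg", "object": "face:alice"},
    {"file": "img3.jpg", "object": None},
]
tagged = tag_faces(imgs, {"face:alice": "Alice"})
assert tagged[0]["name"] == "Alice"   # name tag attached to the matching image
assert "name" not in tagged[1]        # images without that face are untouched
```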
The portable communication device of claim 13, wherein:
the first tag information and one of the at least one first image are simultaneously displayed as at least part of the first menu;
the second tag information and one of the at least one second image are simultaneously displayed as at least part of the second menu; and
the third tag information and one of the at least one third image are simultaneously displayed as at least part of the third menu.
The portable communication device of claim 13, wherein the instructions, when executed, further cause the processor to:
receive a user input selecting at least one displayed image; and
display, through the touch screen, tag information corresponding to the at least one selected image.
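The reverse lookup recited in this dependent claim, from a selected image back to its tag information, amounts to filtering the image's non-empty tag fields; the record layout below is a hypothetical sketch:

```python
def tags_for(image):
    """Return all non-empty tag information recorded for a selected image."""
    return {k: v for k, v in image.items() if k != "file" and v is not None}

img = {"file": "img1.jpg", "location": "Seoul", "object": None, "app": "camera"}
assert tags_for(img) == {"location": "Seoul", "app": "camera"}
```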
delete delete
KR1020130001066A 2013-01-04 2013-01-04 Method and apparatus for playing contents in electronic device KR102050594B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020130001066A KR102050594B1 (en) 2013-01-04 2013-01-04 Method and apparatus for playing contents in electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020130001066A KR102050594B1 (en) 2013-01-04 2013-01-04 Method and apparatus for playing contents in electronic device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
KR1020190152701A Division KR102165339B1 (en) 2019-11-25 2019-11-25 Method and apparatus for playing contents in electronic device

Publications (2)

Publication Number Publication Date
KR20140089170A KR20140089170A (en) 2014-07-14
KR102050594B1 true KR102050594B1 (en) 2019-11-29

Family

ID=51737436

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130001066A KR102050594B1 (en) 2013-01-04 2013-01-04 Method and apparatus for playing contents in electronic device

Country Status (1)

Country Link
KR (1) KR102050594B1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007226536A (en) * 2006-02-23 2007-09-06 Seiko Epson Corp Image search device, image search method, and program for searching image
JP2009124206A (en) * 2007-11-12 2009-06-04 Mega Chips Corp Multimedia composing data generation device
JP2011048668A (en) * 2009-08-27 2011-03-10 Hitachi Kokusai Electric Inc Image retrieval device
JP2012191629A (en) * 2012-04-26 2012-10-04 Casio Comput Co Ltd Image display apparatus, image display method, and image display program


Also Published As

Publication number Publication date
KR20140089170A (en) 2014-07-14

Similar Documents

Publication Publication Date Title
US11249620B2 (en) Electronic device for playing-playing contents and method thereof
US8819030B1 (en) Automated tag suggestions
WO2019109245A1 (en) Method and device for displaying story album
JP6349031B2 (en) Method and apparatus for recognition and verification of objects represented in images
US20180314390A1 (en) User Interface, Method and System for Crowdsourcing Event Notification Sharing Using Mobile Devices
US9384197B2 (en) Automatic discovery of metadata
US20190026313A1 (en) Device, method, and user interface for managing and interacting with media content
JP2019149182A (en) Systems and methods for selecting media items
US8879890B2 (en) Method for media reliving playback
WO2017107672A1 (en) Information processing method and apparatus, and apparatus for information processing
JP6628115B2 (en) Multimedia file management method, electronic device, and computer program.
TW201426608A (en) Portable electronic device, content recommendation method and computer-readable medium
US20120213497A1 (en) Method for media reliving on demand
US20200081931A1 (en) Techniques for disambiguating clustered occurrence identifiers
WO2018152822A1 (en) Method and device for generating album, and mobile terminal
TW201636878A (en) Method and apparatus for voice information augmentation and displaying, picture categorization and retrieving
US20140125692A1 (en) System and method for providing image related to image displayed on device
JP2014052915A (en) Electronic apparatus, display control method, and program
JP2012004747A (en) Electronic equipment and image display method
US20150012537A1 (en) Electronic device for integrating and searching contents and method thereof
KR102289293B1 (en) Method and apparatus for playing contents in electronic device
KR102165339B1 (en) Method and apparatus for playing contents in electronic device
KR102050594B1 (en) Method and apparatus for playing contents in electronic device
TWI633784B (en) Multimedia playing method and system for moving vehicle
US20130287370A1 (en) Multimedia importing application

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
A107 Divisional application of patent
GRNT Written decision to grant