TWI637347B - Method and device for providing image - Google Patents


Info

Publication number
TWI637347B
Authority
TW
Taiwan
Prior art keywords
image
effect
interest
identification information
region
Prior art date
Application number
TW104124634A
Other languages
Chinese (zh)
Other versions
TW201618038A (en)
Inventor
鄭文植
金蕙善
裵秀晶
李聖午
車賢熙
崔成燾
崔賢秀
Original Assignee
三星電子股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三星電子股份有限公司
Publication of TW201618038A
Application granted granted Critical
Publication of TWI637347B


Abstract

An image providing method includes: displaying a first image that includes an object and a background; receiving a user input that selects the object or the background as a region of interest; obtaining first identification information associated with the region of interest based on first attribute information of the first image; obtaining, from target images, a second image that includes second identification information identical to the first identification information; and generating an effect image based on at least one of the first image and the second image.

Description

Method and device for providing images

This application claims priority from Korean Patent Application No. 10-2014-0098589, filed on July 31, 2014, Korean Patent Application No. 10-2014-0111628, filed on August 26, 2014, and Korean Patent Application No. 10-2015-0078777, filed on June 3, 2015, in the Korean Intellectual Property Office, the disclosures of which are incorporated herein by reference in their entirety.

One or more exemplary embodiments relate to a method and apparatus for providing images.

Electronic devices have diversified, and the types of electronic devices each person uses have become more varied. Users rely on the several electronic devices they own to use various images, applications, and services, and the number of images available to users keeps increasing.

Accordingly, a user may encounter many kinds of images, but which images a user prefers may vary from user to user. Moreover, a user may be interested in a specific portion of an image. There is therefore still a need to efficiently provide the portion of an image that interests the user.

According to an aspect of an exemplary embodiment, an image providing method may include: displaying a first image that includes an object and a background; receiving a user input that selects the object or the background as a region of interest; obtaining first identification information associated with the region of interest based on first attribute information of the first image; obtaining, from target images, a second image that includes second identification information identical to the first identification information; and generating an effect image based on at least one of the first image and the second image.
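The claimed flow can be summarized as a small pipeline: derive identification information for the selected region of interest from the first image's attribute information, then find a target image carrying identical identification information. The sketch below is a hypothetical illustration; the function names and dictionary fields are assumptions for readability, not terminology from the patent.

```python
# Hypothetical sketch of the claimed image-providing flow.
# All names (get_identification_info, identification_info, ...) are illustrative.

def get_identification_info(attribute_info, roi):
    # Derive the first identification information for the region of
    # interest from the first image's attribute information.
    return attribute_info.get(roi)

def find_second_image(first_id_info, target_images):
    # Obtain a second image whose identification information is
    # identical to the first identification information.
    for image in target_images:
        if first_id_info in image["identification_info"]:
            return image
    return None

first_image = {"attribute_info": {"object": "dog", "background": "beach"}}
roi = "object"  # user input selecting the object as the region of interest
first_id = get_identification_info(first_image["attribute_info"], roi)

targets = [
    {"name": "img_001", "identification_info": ["cat", "grass"]},
    {"name": "img_002", "identification_info": ["dog", "park"]},
]
second = find_second_image(first_id, targets)
print(first_id, second["name"])  # dog img_002
```

An effect image would then be generated from the first image, the second image, or both.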

The first attribute information may include at least one of context information associated with the generation of the first image and annotation information about the first image, the annotation information being added by a user.

The first identification information may be obtained by generalizing the first attribute information based on WordNet.
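WordNet organizes words into hypernym (is-a) hierarchies, so specific attribute values can be generalized to a broader common identifier. A minimal sketch with a handcrafted hypernym table follows; the table and function are assumptions for illustration, and a real system would query WordNet itself:

```python
# Tiny stand-in for a WordNet hypernym hierarchy (illustrative only).
HYPERNYMS = {
    "poodle": "dog",
    "bulldog": "dog",
    "dog": "animal",
    "cat": "animal",
}

def generalize(term, levels=1):
    # Walk up the hypernym chain 'levels' steps, stopping early
    # when no broader term is known.
    for _ in range(levels):
        if term not in HYPERNYMS:
            break
        term = HYPERNYMS[term]
    return term

# Attribute values that differ at the species level share an
# identifier one or two levels up the hierarchy.
print(generalize("poodle"))             # dog
print(generalize("bulldog", levels=2))  # animal
```

Generalizing to a shared hypernym is what lets two differently annotated images end up with identical identification information.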

The obtaining of the second image may include obtaining the second identification information of the second image by using at least one of second attribute information of the second image and image analysis information of the second image.

The first identification information of the region of interest may be obtained from the first attribute information, and the first attribute information may include a plurality of attributes of the first image.

The method may include displaying a list of the plurality of attributes of the first image.

The method may include: receiving a user input that selects at least one of the plurality of attributes of the first image; and generating the first identification information based on the selected at least one attribute, wherein the obtaining of the second image includes comparing the first identification information with third identification information of the target images.

The generating of the effect image may include displaying a partial image of the second image, the partial image corresponding to the first identification information.

The effect image may be generated by using at least one of: a halo effect that highlights the partial image, a blur effect that reduces differences between pixel values of the partial image, a size effect that changes the size of the partial image, and a depth effect that changes depth information of the partial image.
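Of the listed effects, the blur effect (reducing differences between neighboring pixel values) is the simplest to sketch. Below is a hypothetical 3×3 box blur over a grayscale region represented as nested lists; production code would use an imaging library, and the representation here is an assumption for illustration:

```python
def box_blur(pixels):
    # Average each pixel with its 3x3 neighborhood (edges use only
    # in-bounds neighbors), reducing differences between adjacent
    # pixel values -- the blur effect described above.
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += pixels[ny][nx]
                        count += 1
            out[y][x] = total // count
    return out

region = [
    [0, 0, 0],
    [0, 90, 0],
    [0, 0, 0],
]
blurred = box_blur(region)
print(blurred[1][1])  # 10: the bright pixel is spread into its neighbours
```

The halo, size, and depth effects would likewise operate only on the partial image corresponding to the identification information.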

The effect image may be obtained by combining a partial image of the second image with the region of interest of the first image, the partial image corresponding to the first identification information.
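The combining step can be sketched as pasting the matching partial image of the second image over the region of interest in the first image. In the hypothetical sketch below, images are 2D lists and the region of interest is given as a (top, left) bounding-box corner; both the representation and the names are assumptions for illustration:

```python
def combine(first, second_part, box):
    # Paste 'second_part' into a copy of 'first' at the bounding-box
    # corner (top, left), yielding the combined effect image.
    top, left = box
    out = [row[:] for row in first]  # copy, so the original is untouched
    for y, row in enumerate(second_part):
        for x, value in enumerate(row):
            out[top + y][left + x] = value
    return out

first_image = [[1] * 4 for _ in range(4)]
partial = [[9, 9], [9, 9]]  # partial image matching the region of interest
effect_image = combine(first_image, partial, (1, 1))
print(effect_image[1][1], effect_image[0][0])  # 9 1
```

A real implementation would also handle resizing and blending at the region boundary, which this sketch omits.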

The first image may be a live view image.

The second image may be a temporary image generated from the live view image before a user input for storing an image is received.

The temporary image may be generated whenever a partial image of the live view image changes sufficiently, wherein the partial image corresponds to the first identification information, and a sufficient change is a change associated with a value equal to or greater than a reference value.
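Generating a temporary image only on a "sufficient" change amounts to comparing successive live-view frames of the region of interest against a reference value. The sketch below uses mean absolute pixel difference as the change measure; the measure and the threshold value are assumptions for illustration, since the text leaves them open:

```python
def changed_enough(prev, curr, reference=10):
    # Mean absolute difference over the ROI pixels, compared
    # against the reference value.
    diffs = [abs(a - b) for a, b in zip(prev, curr)]
    return sum(diffs) / len(diffs) >= reference

def capture_temporaries(frames, reference=10):
    # Keep a temporary copy of a frame whenever its ROI differs
    # sufficiently from the most recently captured frame.
    captured = [frames[0]]
    for frame in frames[1:]:
        if changed_enough(captured[-1], frame, reference):
            captured.append(frame)
    return captured

roi_frames = [
    [10, 10, 10],  # initial ROI pixels
    [12, 11, 10],  # small change: skipped
    [40, 40, 40],  # large change: captured as a temporary image
]
temps = capture_temporaries(roi_frames)
print(len(temps))  # 2
```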

The effect image may be a moving picture, and the effect image may include the first image and the second image.

According to another aspect of an exemplary embodiment, a mobile device may include: a display configured to display a first image that includes an object and a background; a user input configured to receive a user input that selects the object or the background as a region of interest; and a controller configured to obtain first identification information of the region of interest based on first attribute information of the first image and to obtain, from target images, a second image, wherein the second image includes second identification information identical to the first identification information.

The controller may be configured to generate an effect image based on at least one of the first image and the second image.

The effect image may be obtained by combining a partial image of the second image with the region of interest of the first image, the partial image being the portion of the second image that corresponds to the first identification information.

The first attribute information may include at least one of context information associated with the generation of the first image and annotation information about the first image, the annotation information being added by a user.

The controller may be configured to obtain the first identification information by generalizing the first attribute information based on WordNet.

The controller may be configured to generate the effect image by combining a partial image of the second image with the region of interest of the first image, the partial image being associated with the first identification information.

According to still another aspect of an exemplary embodiment, a method of providing an image may include: receiving a first image that includes at least an object and a background; receiving a user input that selects the object or the background as a region of interest; determining identification information associated with the region of interest; searching a plurality of target images by using the identification information; selecting a second image associated with the identification information; and generating at least one effect image by applying an effect to at least one of the first image and the second image.
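The search-and-apply portion of this aspect can be sketched as filtering candidate target images on the identification information and then applying the same effect to the first image's region of interest and to the selected second image, yielding two effect images as in the variant below. All names and the effect representation are hypothetical:

```python
def search_targets(id_info, targets):
    # Keep only target images whose identification information
    # contains the identification information of the ROI.
    return [t for t in targets if id_info in t["identification_info"]]

def apply_effect(image, effect):
    # Hypothetical effect application: record which effect was applied.
    return dict(image, effect=effect)

targets = [
    {"name": "a.jpg", "identification_info": ["dog", "park"]},
    {"name": "b.jpg", "identification_info": ["cat"]},
    {"name": "c.jpg", "identification_info": ["dog", "beach"]},
]
matches = search_targets("dog", targets)
second_image = matches[0]  # the device or user selects one match
first_effect = apply_effect({"name": "first.jpg"}, "halo")
second_effect = apply_effect(second_image, "halo")
print([m["name"] for m in matches], second_effect["effect"])
# ['a.jpg', 'c.jpg'] halo
```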

The generating of the at least one effect image may include at least one of applying the effect to the region of interest and combining the first image with the second image.

The generating of the at least one effect image may include generating a first effect image by applying the effect to the region of interest and generating a second effect image by applying the effect to the second image, and the method may further include storing the first effect image and the second effect image.

The determining of the identification information may include generating the identification information by using one or more attributes of the first image.

The generating of the identification information by using the one or more attributes of the first image may include generating the one or more attributes of the first image by performing image analysis on the first image.

100‧‧‧device
110‧‧‧user input
120‧‧‧controller
130‧‧‧display
140‧‧‧memory
141‧‧‧user interface (UI) module
142‧‧‧notification module
143‧‧‧image processing module
150‧‧‧communicator
151‧‧‧short-range wireless communicator
152‧‧‧mobile communicator
153‧‧‧broadcast receiver
160‧‧‧camera/digital camera
170‧‧‧output unit
172‧‧‧audio output unit
173‧‧‧vibration motor
180‧‧‧sensor
181‧‧‧magnetic sensor
182‧‧‧acceleration sensor
183‧‧‧tilt sensor
184‧‧‧infrared sensor
185‧‧‧gyroscope sensor
186‧‧‧position sensor
187‧‧‧atmospheric pressure sensor
188‧‧‧proximity sensor
189‧‧‧optical sensor
190‧‧‧microphone
200‧‧‧external device; cloud server
200-1, 200-2‧‧‧parts illustrated in FIG. 2
210‧‧‧function window; cloud server; communicator
212‧‧‧"Edit" item
220‧‧‧edit window/controller
222‧‧‧"Effect edit" item
230‧‧‧storage
300-1, 300-2‧‧‧parts illustrated in FIG. 3
310, 320‧‧‧objects
400-1, 400-2‧‧‧parts illustrated in FIG. 4
410, 420‧‧‧objects
500-1, 500-2‧‧‧parts illustrated in FIG. 5
510, 520‧‧‧objects
600-1, 600-2‧‧‧parts illustrated in FIG. 6
610, 620‧‧‧objects
700-1, 700-2‧‧‧parts illustrated in FIG. 7
710, 720‧‧‧objects
800-1, 800-2‧‧‧parts illustrated in FIG. 8
810, 820‧‧‧objects
900-1, 900-2‧‧‧parts illustrated in FIG. 9
910‧‧‧object
920‧‧‧effect list
1000-1 to 1000-3‧‧‧parts illustrated in FIG. 10
1010, 1012‧‧‧first objects
1020, 1022‧‧‧second objects
1100-1, 1100-2‧‧‧parts illustrated in FIG. 11
1110‧‧‧background
1120‧‧‧blurred background
1200-1 to 1200-3‧‧‧parts illustrated in FIG. 12A
1200-4 to 1200-6‧‧‧parts illustrated in FIG. 12B
1210‧‧‧first object
1212‧‧‧object with halo effect
1220‧‧‧background
1222‧‧‧blurred background
1260‧‧‧background
1262‧‧‧flowing background
1270‧‧‧indicator
1300-1, 1300-2‧‧‧parts illustrated in FIG. 13
1310‧‧‧query window
1320‧‧‧list
1601‧‧‧first image
1602‧‧‧second image
1603‧‧‧third image
1610‧‧‧type
1611‧‧‧time
1612‧‧‧GPS
1613‧‧‧resolution
1614‧‧‧size
1615‧‧‧weather information
1616‧‧‧temperature information
1617‧‧‧collection device
1618‧‧‧user-added information
1619‧‧‧object information
1710‧‧‧image
1712‧‧‧background
1720‧‧‧attribute information
1730‧‧‧identification information
1810‧‧‧image
1812‧‧‧first object
1820‧‧‧identification information
1900-1 to 1900-3‧‧‧parts illustrated in FIG. 19
1910‧‧‧first image
1912‧‧‧first object
1920‧‧‧identification information list
1930‧‧‧second image
2000-1, 2000-2‧‧‧parts illustrated in FIG. 20
2010‧‧‧effect folder
2020‧‧‧effect image
2300-1, 2300-2‧‧‧parts illustrated in FIG. 23
2310‧‧‧effect folder
2320‧‧‧menu window
2322‧‧‧transmit item
2330‧‧‧selection window
2332‧‧‧contact
2600-1, 2600-2‧‧‧parts illustrated in FIG. 26A
2600-3, 2600-4‧‧‧parts illustrated in FIG. 26B
2600-5, 2600-6‧‧‧parts illustrated in FIG. 26C
2610‧‧‧first image
2612‧‧‧object
2620‧‧‧effect list
2630‧‧‧identification information list
2640‧‧‧target image list
2650‧‧‧second image
2660‧‧‧second image
2662‧‧‧object
2670‧‧‧effect image
2710‧‧‧first image
2712‧‧‧object
2714‧‧‧image/first partial image
2720‧‧‧second image
2722‧‧‧object/second partial image
2730‧‧‧effect image
2732, 2734‧‧‧regions
2800-1, 2800-2‧‧‧parts illustrated in FIG. 28A
2800-3, 2800-4‧‧‧parts illustrated in FIG. 28B
2800-5, 2800-6‧‧‧parts illustrated in FIG. 28C
2810‧‧‧first image
2814‧‧‧background
2820‧‧‧effect list
2830‧‧‧identification information list
2840‧‧‧target image list
2850‧‧‧second image
2860‧‧‧second image
2864‧‧‧background
2870‧‧‧effect image
2910‧‧‧first image
2912‧‧‧image/third partial image
2920‧‧‧second image
2924‧‧‧partial image/fourth partial image
2930‧‧‧background image
2932‧‧‧region of the fourth partial image 2924 that has no pixel information
2940‧‧‧effect image
3100-1 to 3100-3‧‧‧parts illustrated in FIG. 31
3110‧‧‧live view image
3112‧‧‧object
3120‧‧‧image of interest
3130‧‧‧live view image
3140‧‧‧image
3200-1 to 3200-3‧‧‧parts illustrated in FIG. 32
3210‧‧‧live view image
3212‧‧‧first object
3220‧‧‧image of interest
3230‧‧‧live view image/final temporary image
3240‧‧‧effect image
3300-1 to 3300-3‧‧‧parts illustrated in FIG. 33
3310‧‧‧live view image
3312‧‧‧first object
3320‧‧‧image of interest
3330‧‧‧live view image
3332‧‧‧first object
3340‧‧‧effect image
3500-1 to 3500-3‧‧‧parts illustrated in FIG. 35
3510‧‧‧live view image
3512‧‧‧first object
3520‧‧‧temporary image
3530‧‧‧final temporary image
3532‧‧‧first object
3540‧‧‧temporary image
3542‧‧‧first object
3550‧‧‧effect image
3800-1 to 3800-3‧‧‧parts illustrated in FIG. 38
3810‧‧‧menu image
3812‧‧‧menu item
3820‧‧‧effect list
3822‧‧‧effect item
3830‧‧‧menu item
4010‧‧‧menu image
4012‧‧‧first menu item
4014‧‧‧second menu item
S110 to S130, S1410 to S1450, S1510 to S1540, S2110 to S2190, S2210 to S2230, S2510 to S2550, S3010 to S3050, S3410 to S3450, S3610 to S3640, S3660, S3710 to S3740, S3910 to S3950‧‧‧operations

These and/or other aspects will become apparent and more readily appreciated from the following description of the exemplary embodiments, taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a flowchart of a method of providing an effect to an image, according to an exemplary embodiment.

FIG. 2 illustrates a graphical user interface (GUI) for providing an effect to an image, according to an exemplary embodiment.

FIG. 3 is a reference diagram for explaining a method of providing a halo effect to an object, according to an exemplary embodiment.

FIG. 4 is a reference diagram for explaining a method of providing a blur effect to an object, according to an exemplary embodiment.

FIGS. 5 and 6 are reference diagrams for explaining a method of providing a size effect to an object, according to an exemplary embodiment.

FIGS. 7 and 8 are reference diagrams for explaining a method of providing a depth effect to an object, according to an exemplary embodiment.

FIG. 9 is a reference diagram for explaining a method of displaying an effect list, according to an exemplary embodiment.

FIG. 10 is a reference diagram for explaining a method of providing an effect to a plurality of objects in an image, according to an exemplary embodiment.

FIG. 11 is a reference diagram for explaining a method of providing an effect to a background, according to an exemplary embodiment.

FIG. 12A is a reference diagram for explaining a method of providing an effect to both an object and a background, according to an exemplary embodiment.

FIG. 12B is a reference diagram for explaining a method of providing an effect to an image in response to a plurality of user inputs, according to an exemplary embodiment.

FIG. 13 illustrates a graphical user interface for providing an effect to a plurality of images, according to an exemplary embodiment.

FIG. 14 is a flowchart of a method in which a device provides an effect to a second image by using identification information of a first image, according to an exemplary embodiment.

FIG. 15 is a flowchart of a method in which a device generates identification information, according to an exemplary embodiment.

FIG. 16 illustrates attribute information of an image, according to an exemplary embodiment.

FIG. 17 is a reference diagram for explaining an example in which a device generates identification information of an image based on attribute information of the image.

FIG. 18 is a reference diagram for explaining an example in which a device generates identification information by using image analysis information.

FIG. 19 illustrates an example in which a device displays an identification information list, according to an exemplary embodiment.

FIG. 20 illustrates an example in which a device displays an effect folder.

FIG. 21 is a flowchart of a method in which a device provides an effect to an image stored in an external device, according to an exemplary embodiment.

FIG. 22 is a flowchart of a method in which a device shares an effect image with an external device, according to an exemplary embodiment.

FIG. 23 illustrates an example in which a device shares an effect image with an external device.

FIG. 24 is a schematic diagram of an image management system, according to an exemplary embodiment.

FIG. 25 is a flowchart of a method of providing an effect image by combining a plurality of images with one another, according to an exemplary embodiment.

FIGS. 26A to 26C illustrate examples of providing an effect to an object by using a plurality of images, according to an exemplary embodiment.

FIG. 27 is a reference diagram for explaining a method of combining a plurality of images, according to an exemplary embodiment.

FIGS. 28A to 28C illustrate examples of providing an effect to a background by using a plurality of images, according to an exemplary embodiment.

FIG. 29 is a reference diagram for explaining a method of combining a plurality of images, according to another exemplary embodiment.

FIG. 30 is a flowchart of a method of providing an effect image by using a live view image, according to an exemplary embodiment.

FIG. 31 is a reference diagram for explaining a method of generating an effect image from a live view image, according to an exemplary embodiment.

FIG. 32 is a reference diagram for explaining a method of generating an effect image from a live view image, according to another exemplary embodiment.

FIG. 33 is a reference diagram for explaining a method of generating an effect image from a live view image, according to another exemplary embodiment.

FIG. 34 is a flowchart of a method of generating an effect image from a live view image, according to another exemplary embodiment.

FIG. 35 is a reference diagram for explaining a method of generating an effect image from a live view image, according to an exemplary embodiment.

FIG. 36 is a flowchart of a method of generating a moving picture from a live view image, according to an exemplary embodiment.

FIG. 37 is a flowchart of a method of reproducing a moving picture, according to an exemplary embodiment.

FIG. 38 is a reference diagram for explaining a method of displaying an effect on a menu image, according to an exemplary embodiment.

FIG. 39 is a flowchart of a method of providing an effect to a menu item according to the number of times an application corresponding to the menu item has been executed, according to an exemplary embodiment.

FIG. 40 illustrates an example of displaying a menu image in which an effect has been provided to a menu item according to the number of times an application corresponding to the menu item has been executed, according to an exemplary embodiment.

FIGS. 41 to 45 are block diagrams of devices, according to exemplary embodiments.

FIG. 46 is a block diagram of a structure of a cloud server, according to an exemplary embodiment.

Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, the exemplary embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Accordingly, the exemplary embodiments are described below, merely by referring to the figures, to explain aspects of the present description.

Although the present invention is described using general terms that are currently in wide use, selected in consideration of the functions of the invention, these general terms may vary according to the intentions of those of ordinary skill in the art, case precedents, the advent of new technologies, and the like. Terms arbitrarily selected by the applicant may also be used in specific cases, in which case their meanings are given in the detailed description of this specification. Hence, the terms must be defined based on their meanings and the contents of the entire specification, not by simply stating the terms themselves.

在本說明書中使用的用語「包括(comprises)」及/或「包括(comprising)」抑或「包含(includes)」及/或「包含(including)」是用於指明所陳述元件的存在,但並不排除一或多個其他元件的存在或添加。在本說明書中使用的用語「...單元」及「...模組」指代在其中執行至少一個功能或操作的單元,且可被實作為硬體、軟體、或硬體與軟體的組合。 The terms "comprises" and/or "comprising", or "includes" and/or "including", as used in this specification, specify the presence of stated elements but do not preclude the presence or addition of one or more other elements. The terms "... unit" and "... module", as used herein, refer to a unit that performs at least one function or operation, and may be implemented as hardware, software, or a combination of hardware and software.

在本說明書通篇中,「影像」可包括物體及背景。物體為可藉由影像處理等以輪廓線與背景區分開的部分影像,且所述物體可為例如人、動物、建築、車輛等。背景為除物體以外的部分影像。可為物體及背景的部分影像是不固定的,而是可為相對的。舉例而言,在包括人、車輛、及天空的影像中,人及車輛可為物體,而天空可為背景。在包括人及車輛的影像中,人可為物體,而車輛可為背景。然而,物體的部分影像的尺寸可小於背景的部分影像的尺寸。每一裝置100可預先定義用於區分物體與背景的標準。 Throughout this specification, an "image" may include an object and a background. An object is a partial image that can be distinguished from the background by a contour line via image processing or the like, and the object may be, for example, a person, an animal, a building, or a vehicle. The background is the partial image other than the object. Which partial images serve as the object and the background is not fixed but relative. For example, in an image including a person, a vehicle, and the sky, the person and the vehicle may be objects while the sky is the background. In an image including a person and a vehicle, the person may be the object and the vehicle the background. However, the size of the partial image of an object may be smaller than that of the partial image of the background. Each device 100 may predefine a criterion for distinguishing an object from a background.

在本說明書通篇中,影像可為靜止影像(例如,圖片或圖畫)、移動圖片(例如,電視節目影像、隨選視訊(Video On Demand,VOD)、使用者建立的內容(user-created content,UCC)、音樂視訊、或YouTube影像)、即時取景影像、選單影像等。 Throughout this specification, an image may be a still image (e.g., a picture or a drawing), a moving picture (e.g., a TV program image, video on demand (VOD), user-created content (UCC), a music video, or a YouTube image), a live view image, a menu image, or the like.

在說明書通篇中,感興趣區域可為影像的部分影像,且可為物體或背景。向影像提供效果是一種影像編輯類型,且表示以與先前所提供的感興趣區域完全不同的方式提供感興趣區域。提供影像表示顯示影像、再現影像、儲存影像等。 Throughout the specification, a region of interest may be a partial image of an image, and may be an object or a background. Providing an effect to an image is a type of image editing, and means providing the region of interest in a manner entirely different from how the region of interest was previously provided. Providing an image means displaying, reproducing, or storing the image, or the like.

現在將闡述向影像提供效果的影像系統。所述影像系統可包括能夠再現及儲存影像的裝置100,且可更包括儲存所述影像的伺服器。稍後將詳細闡述其中影像系統包括伺服器的情形。 An image system that provides effects to an image will now be described. The image system may include a device 100 capable of reproducing and storing images, and may further include a server that stores the images. The case where the image system includes a server will be described in detail later.

根據示例性實施例的裝置100可為能夠顯示影像並向影像提供效果的裝置。可以各種類型達成根據示例性實施例的裝置100。舉例而言,裝置100可為桌上型電腦、行動電話、智慧型電話、膝上型電腦、平板個人電腦(personal computer,PC)、電子書終端、數位廣播終端、個人數位助理(personal digital assistant,PDA)、可攜式多媒體播放機(portable multimedia player,PMP)、導航機、MP3播放機、數位相機、網際網路電視(Internet Protocol television,IPTV)、數位電視(digital television,DTV)、消費性電子產品(consumer electronics,CE)設備(例如,各自包括顯示器的冰箱及空調機)等,但示例性實施例並非僅限於此。裝置100亦可為可由使用者穿戴的裝置。舉例而言,裝置100可為手錶、眼鏡、戒指、手環、項鏈等。 The device 100 according to an exemplary embodiment may be a device capable of displaying an image and providing an effect to the image. The device 100 may be implemented in various forms. For example, the device 100 may be a desktop computer, a mobile phone, a smartphone, a laptop computer, a tablet personal computer (PC), an e-book terminal, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, an MP3 player, a digital camera, an Internet Protocol television (IPTV), a digital television (DTV), or a consumer electronics (CE) device (e.g., a refrigerator or an air conditioner each including a display), but exemplary embodiments are not limited thereto. The device 100 may also be a device wearable by a user. For example, the device 100 may be a watch, glasses, a ring, a bracelet, or a necklace.

圖1是根據示例性實施例,一種向影像提供效果的方法的流程圖。 1 is a flow chart of a method of providing an effect to an image, in accordance with an exemplary embodiment.

在操作S110中,裝置100可顯示影像。影像可包括物體及背景,且可為靜止影像、移動圖片、即時取景影像、選單影像等。 In operation S110, the device 100 may display an image. The image may include an object and a background, and may be a still image, a moving picture, a live view image, a menu image, or the like.

根據示例性實施例,顯示於裝置100上的影像可為儲存於內置於裝置100中的記憶體中的靜止影像、移動圖片、或選單影像,可為由內置於裝置100中的相機160所拍攝的即時取景影像,可為儲存於外部裝置(例如,另一使用者所使用的可攜式終端、社交網路服務(social networking service,SNS)伺服器、雲端伺服器、或網頁伺服器)中的靜止影像、移動圖片、或選單影像,或可為由外部裝置所拍攝的即時取景影像。 According to an exemplary embodiment, the image displayed on the device 100 may be a still image, a moving picture, or a menu image stored in a memory built in the device 100, and may be captured by the camera 160 built in the device 100. The live view image can be stored in an external device (for example, a portable terminal used by another user, a social networking service (SNS) server, a cloud server, or a web server) Still image, moving picture, or menu image, or can be a live view image taken by an external device.

在操作S120中,裝置100可選擇感興趣區域。所述感興趣區域為所顯示影像的部分影像,且可為物體或背景。舉例而言,裝置100可自多個物體中選擇一個物體作為感興趣區域,抑或可自所述多個物體中選擇至少兩個物體作為感興趣區域。作為另一選擇,裝置100可選擇影像的背景作為感興趣區域。 In operation S120, the device 100 may select a region of interest. The region of interest is a partial image of the displayed image and may be an object or a background. For example, the device 100 may select one object from among a plurality of objects as a region of interest, or may select at least two objects from the plurality of objects as a region of interest. Alternatively, device 100 may select the background of the image as the region of interest.

根據示例性實施例,裝置100可基於使用者輸入來選擇感興趣區域。舉例而言,裝置100可接收對影像上的部分區域進行選擇的使用者輸入,並將包括所選擇部分區域的物體或背景確定為感興趣區域。 According to an exemplary embodiment, device 100 may select a region of interest based on user input. For example, device 100 can receive a user input that selects a partial region on the image and determine an object or background that includes the selected partial region as the region of interest.

根據示例性實施例,選擇感興趣區域的使用者輸入可有所變化。在本說明書中,使用者輸入可為鍵輸入、觸控輸入、運動輸入、彎曲輸入、語音輸入、多重輸入等。 According to an exemplary embodiment, the user input selecting the region of interest may vary. In the present specification, the user input may be a key input, a touch input, a motion input, a bending input, a voice input, a multiple input, or the like.

「觸控輸入(touch input)」表示使用者於觸控螢幕上作出的用以控制裝置100的手勢等。觸控輸入的實例可包括敲擊 (tap)、長按(touch & hold)、雙擊(double tap)、拖動(drag)、平移(panning)、滑動(flick)、及拖放(drag & drop)。 The "touch input" indicates a gesture or the like made by the user on the touch screen to control the device 100. Examples of touch input can include tapping (tap), touch & hold, double tap, drag, panning, flick, and drag & drop.

「敲擊」表示使用者以指尖或觸控工具(例如,電子筆)觸控螢幕然後極快地自螢幕抬起指尖或觸控工具而不進行移動的動作。 "Tap" means that the user touches the screen with a fingertip or a touch tool (for example, an electronic pen) and then quickly lifts the fingertip or the touch tool from the screen without moving.

「長按」表示使用者在以指尖或觸控工具(例如,電子筆)觸控螢幕之後使觸控輸入保持臨界時間週期(例如,兩秒)以上的動作。舉例而言,此動作指示其中觸控開始時間與觸控結束時間之間的時間差大於所述臨界時間週期(例如,兩秒)的情形。為使使用者能夠判斷觸控輸入是敲擊還是長按,當觸控輸入保持臨界時間週期以上時,可在視覺、聽覺或觸覺上提供回饋訊號。所述臨界時間週期可根據示例性實施例而有所變化。 "Long press" means that the user keeps the touch input for a critical time period (for example, two seconds) or more after touching the screen with a fingertip or a touch tool (for example, an electronic pen). For example, the action indicates a situation in which the time difference between the touch start time and the touch end time is greater than the critical time period (eg, two seconds). In order to enable the user to determine whether the touch input is a tap or a long press, the feedback signal can be provided visually, audibly or tactilely when the touch input remains above a critical time period. The critical time period can vary depending on the exemplary embodiment.
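The tap / touch-and-hold distinction above reduces to comparing the contact duration (touch end time minus touch start time) against the critical time period. A minimal Python sketch follows; the two-second threshold comes from the example in the text, while the function name and timestamp format are illustrative assumptions, not part of the patent:

```python
# Classify a completed touch by how long contact was held.
# The 2-second critical period follows the example above; a real
# device would make this threshold configurable.
CRITICAL_HOLD_S = 2.0

def classify_touch(touch_start_s, touch_end_s):
    """Return 'touch_and_hold' if contact lasted at least the
    critical time period, otherwise 'tap'."""
    duration = touch_end_s - touch_start_s
    return "touch_and_hold" if duration >= CRITICAL_HOLD_S else "tap"

print(classify_touch(10.0, 10.3))  # brief contact -> tap
print(classify_touch(10.0, 12.5))  # held 2.5 s    -> touch_and_hold
```

A real implementation would also fire the visual/auditory/tactile feedback signal the text mentions as soon as the threshold elapses, rather than waiting for the touch to end.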

「雙擊」表示使用者以指尖或觸控工具(例如,電子筆)快速觸控螢幕兩次的動作。 "Double click" means that the user can quickly touch the screen twice with a fingertip or a touch tool (for example, an electronic pen).

「拖動」表示使用者以指尖或觸控工具觸控螢幕並在觸控螢幕的同時將所述指尖或觸控工具移動至螢幕上的其他位置的動作。藉由拖動動作,物體可發生移動,或者可執行以下將闡述的平移動作。 "Drag" means an action in which the user touches the screen with a fingertip or a touch tool and, while keeping contact with the screen, moves the fingertip or touch tool to another position on the screen. With a drag action, an object may be moved, or the panning action described below may be performed.

「平移」表示使用者在未選擇任何物體的情況下執行拖動動作的動作。由於平移動作未選擇具體物體,故頁面中沒有物體發生移動。相反,整個頁面可於螢幕上移動,抑或一組物體可於頁面內移動。 "Pan" means that the user performs a drag action without selecting any object. Since no specific object is selected for the panning action, no objects in the page move. Instead, the entire page can be moved on the screen, or a group of objects can move within the page.

「滑動」表示使用者以指尖或觸控工具以臨界速度(例如,100畫素/秒)執行拖動動作的動作。可基於指尖或觸控工具的移動速度是否大於所述臨界速度(例如,100畫素/秒)而區分滑動動作與拖動(或平移)動作。 "Flick" means an action in which the user performs a drag action with a fingertip or a touch tool at a critical speed (e.g., 100 pixels/second) or faster. A flick action may be distinguished from a drag (or panning) action based on whether the moving speed of the fingertip or touch tool exceeds the critical speed (e.g., 100 pixels/second).
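The speed-based distinction between a flick and a drag can be sketched as below. The 100 pixels/second threshold follows the example above; the coordinate/duration event format is an illustrative assumption:

```python
# Distinguish a flick from a drag by the movement speed of the
# fingertip, per the critical-speed rule described in the text.
CRITICAL_SPEED_PX_S = 100.0  # example critical speed

def classify_move(start, end, duration_s):
    """Classify a touch movement as 'flick' or 'drag'.

    start, end: (x, y) pixel coordinates; duration_s: contact time."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    distance = (dx * dx + dy * dy) ** 0.5
    if duration_s <= 0:
        return "flick"  # instantaneous movement: treat as fastest case
    speed = distance / duration_s
    return "flick" if speed >= CRITICAL_SPEED_PX_S else "drag"

print(classify_move((0, 0), (240, 0), 0.4))  # 600 px/s -> flick
print(classify_move((0, 0), (40, 30), 1.0))  # 50 px/s  -> drag
```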

「拖放」表示使用者以指尖或觸控工具拖動物體並將物體放置於螢幕中的預定位置的動作。 "Drag and drop" means the action of the user dragging an object with a fingertip or a touch tool and placing the object at a predetermined position in the screen.

「縮放(pinch)」表示使用者以多個指尖或觸控工具觸控螢幕並在觸控螢幕的同時使所述多個指尖或觸控工具之間的距離變寬或變窄的動作。「放大(unpinching)」表示使用者以兩個手指(例如,拇指及食指)觸控螢幕並在觸控螢幕的同時使所述兩個手指之間的距離變寬的動作,而「捏縮(pinching)」表示使用者以兩個手指觸控螢幕並在觸控螢幕的同時使所述兩個手指之間的距離變窄的動作。可根據所述兩個手指之間的距離確定變寬值或變窄值。 "Pinch" means an action in which the user touches the screen with multiple fingertips or touch tools and widens or narrows the distance between them while keeping contact with the screen. "Unpinching" means an action in which the user touches the screen with two fingers (e.g., a thumb and an index finger) and widens the distance between the two fingers while keeping contact with the screen, and "pinching" means an action in which the user touches the screen with two fingers and narrows the distance between the two fingers while keeping contact with the screen. A widening value or a narrowing value may be determined according to the distance between the two fingers.

「橫掃(swipe)」表示使用者在以指尖或觸控工具觸控螢幕上的物體的同時移動某一距離的動作。 "Swipe" means the user moves a certain distance while touching the object on the screen with a fingertip or a touch tool.

「運動輸入(motion input)」表示使用者向裝置100施加的用以控制裝置100的運動。舉例而言,運動輸入可為使用者旋轉裝置100、傾斜裝置100、抑或水平或垂直移動裝置100的輸入。裝置100可利用加速度感測器、傾斜感測器、陀螺儀感測器、三軸式磁性感測器等感測使用者預設的動作輸入。 "motion input" means the motion applied by the user to the device 100 to control the device 100. For example, the motion input can be an input to the user rotating device 100, tilting device 100, or horizontal or vertical mobile device 100. The device 100 can sense an action input preset by a user by using an acceleration sensor, a tilt sensor, a gyro sensor, a three-axis magnetic sensor, or the like.

「彎曲輸入(bending input)」表示在裝置100為撓性顯示裝置時,使裝置100的一部分或整個裝置100彎曲以控制裝置100的使用者輸入。根據示例性實施例,裝置100可利用彎曲感測器感測例如彎曲位置(座標值)、彎曲方向、彎曲角度、彎曲速度、彎曲次數、彎曲發生的時間點、及彎曲保持的時間週期。 "Bending input" means a user input of bending a part of the device 100 or the entire device 100 to control the device 100 when the device 100 is a flexible display device. According to an exemplary embodiment, the device 100 may use a bending sensor to sense, for example, a bending position (coordinate values), a bending direction, a bending angle, a bending speed, the number of bends, the time point at which bending occurs, and the time period for which bending is maintained.

「鍵輸入(key input)」表示利用附裝至裝置100的物理鍵而控制裝置100的使用者輸入。 "Key input" means that the user input of the device 100 is controlled by the physical keys attached to the device 100.

「多重輸入(multiple inputs)」表示至少兩種輸入方法的組合。舉例而言,裝置100可自使用者接收觸控輸入及運動輸入,或自使用者接收觸控輸入及語音輸入。作為另一選擇,裝置100可自使用者接收觸控輸入及眼球輸入。眼球輸入表示調整眨眼、注視位置、眼球移動速度等以便控制裝置100的使用者輸入。 "Multiple inputs" means a combination of at least two input methods. For example, the device 100 can receive touch input and motion input from a user, or receive touch input and voice input from a user. Alternatively, device 100 can receive touch input and eye input from a user. The eyeball input indicates adjustment of blinking, gaze position, eye movement speed, and the like to control user input of the device 100.

為便於解釋,現在將闡述其中使用者輸入為鍵輸入或觸控輸入的情形。 For ease of explanation, the case where the user input is a key input or a touch input will now be explained.

根據示例性實施例,裝置100可接收選擇預設按鈕的使用者輸入。預設按鈕可為附裝至裝置100的物理按鈕或具有圖形使用者介面(GUI)形式的虛擬按鈕。舉例而言,當使用者選擇第一按鈕(例如,主頁按鈕)及第二按鈕(例如,音量控制按鈕)二者時,裝置100可選擇螢幕上的部分區。 According to an exemplary embodiment, device 100 may receive a user input selecting a preset button. The preset button can be a physical button attached to device 100 or a virtual button in the form of a graphical user interface (GUI). For example, when the user selects both a first button (eg, a home button) and a second button (eg, a volume control button), device 100 can select a partial region on the screen.

裝置100可接收對顯示於螢幕上的影像的部分區進行觸摸的使用者輸入。舉例而言,裝置100可接收對所顯示影像的部分區進行觸摸達預定時間週期(例如,兩秒)以上、抑或對所述部分區觸摸預定次數以上(例如,雙擊)的輸入。然後,裝置100可將包括所觸摸的部分區的物體或背景確定為感興趣區域。換言之,裝置100可選擇感興趣區域。 The device 100 may receive a user input of touching a partial region of an image displayed on the screen. For example, the device 100 may receive an input of touching a partial region of the displayed image for at least a predetermined time period (e.g., two seconds), or of touching the partial region at least a predetermined number of times (e.g., a double tap). Then, the device 100 may determine an object or background including the touched partial region as the region of interest. In other words, the device 100 may select the region of interest.

裝置100可自影像確定感興趣區域。裝置100可利用影像分析資訊自影像確定感興趣區域。舉例而言,裝置100可偵測顯示於所觸摸的區上的物體的輪廓線。裝置100可將影像中所包含的物體的輪廓線與預定模板進行比較,並偵測所述物體的類型、名稱等。舉例而言,當物體的輪廓線類似於車輛模板時,裝置100可將影像中所包含的物體識別為車輛並將車輛影像確定為感興趣區域。 Device 100 can determine a region of interest from an image. The device 100 can use the image analysis information to determine the region of interest from the image. For example, device 100 can detect an outline of an object displayed on the touched area. The device 100 can compare the contour of the object contained in the image with a predetermined template, and detect the type, name, and the like of the object. For example, when the outline of the object is similar to the vehicle template, the device 100 can identify the object contained in the image as the vehicle and determine the vehicle image as the region of interest.
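One way to picture the template comparison described above is to reduce each predefined template and the detected outline to a small feature vector and pick the nearest template. The features (aspect ratio, fill ratio, corner count) and template values below are purely illustrative stand-ins, not the patent's actual matching algorithm:

```python
import math

# Hypothetical shape templates, each a feature vector of
# (aspect ratio, fill ratio inside bounding box, corner count).
# These numbers are invented for illustration only.
TEMPLATES = {
    "vehicle":  (2.2, 0.75, 8.0),
    "person":   (0.4, 0.55, 12.0),
    "building": (1.0, 0.90, 4.0),
}

def match_template(features):
    """Return the template name whose feature vector is nearest
    (Euclidean distance) to the detected outline's features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda name: dist(TEMPLATES[name], features))

print(match_template((2.0, 0.7, 9.0)))   # nearest to the vehicle template
print(match_template((0.45, 0.5, 11.0))) # nearest to the person template
```

A production system would extract such features from the contour found by segmentation (e.g., OpenCV's `cv2.findContours` plus `cv2.matchShapes`), but the nearest-template decision step has the same shape as this sketch.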

根據示例性實施例,裝置100可對影像中所包含的物體執行面部識別。面部偵測方法的實例可包括基於知識的方法、基於特徵的方法、模板匹配方法、及基於外觀的方法,但示例性實施例並非僅限於此。 According to an exemplary embodiment, the device 100 may perform face recognition on an object included in an image. Examples of the face detection method may include a knowledge based method, a feature based method, a template matching method, and an appearance based method, but the exemplary embodiments are not limited thereto.

可自所偵測的面部提取面部特徵(例如,作為面部主要部分的眼睛、鼻子、及嘴巴的形狀)。為自面部提取面部特徵,可使用賈柏濾波器(gabor filter)或局部二進制模式(local binary pattern,LBP),但示例性實施例並非僅限於此。 Facial features can be extracted from the detected face (eg, the shape of the eyes, nose, and mouth that are the major portions of the face). To extract facial features from the face, a gabor filter or a local binary pattern (LBP) may be used, but the exemplary embodiments are not limited thereto.
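As an illustration of the local binary pattern (LBP) operator mentioned above: each pixel is encoded by thresholding its eight neighbors against the center value. The following sketch computes the 8-bit LBP code for a single 3×3 patch; the clockwise bit ordering is one common convention, and the patent does not specify an implementation:

```python
def lbp_code(patch):
    """Compute the 8-bit LBP code of the center pixel of a 3x3 patch:
    each neighbor whose value is >= the center contributes one bit."""
    center = patch[1][1]
    # Neighbors visited clockwise starting from the top-left corner.
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << i
    return code

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
edge = [[9, 9, 9], [0, 5, 0], [0, 0, 0]]
print(lbp_code(flat))  # all neighbors >= center -> 255
print(lbp_code(edge))  # only the top row set -> 0b00000111 = 7
```

Face feature extraction then histograms these codes over image regions; library implementations such as scikit-image's `local_binary_pattern` also support circular neighborhoods.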

作為另一選擇,裝置100可藉由將影像的某一區與色彩圖(色彩直方圖)進行比較而提取視覺特徵(例如影像的色彩排列、圖案、及氛圍)作為影像分析資訊。 Alternatively, device 100 may extract visual features (eg, color arrangement, pattern, and ambience of the image) as image analysis information by comparing a region of the image to a color map (color histogram).
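Comparing a region of the image against a stored color map (color histogram) can be sketched as a histogram intersection. This grayscale, per-pixel version is a simplified stand-in for a full RGB color-histogram comparison; the bin count and sample values are illustrative:

```python
def color_histogram(pixels, bins=4):
    """Build a normalized histogram of 8-bit intensity values."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [count / total for count in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

region_pixels = [10, 20, 200, 210, 220, 230]   # sampled from the image region
sky_map_pixels = [15, 25, 205, 215, 225, 235]  # hypothetical stored color map
score = histogram_intersection(color_histogram(region_pixels),
                               color_histogram(sky_map_pixels))
print(round(score, 2))  # identical bin distributions -> 1.0
```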

在操作S130中,裝置100向影像提供效果。裝置100可向影像的感興趣區域提供效果,以便與先前所顯示的感興趣區域完全不同地提供感興趣區域。可以各種方式提供效果。 In operation S130, the device 100 provides an effect to the image. The device 100 may provide an effect to the region of interest of the image so that the region of interest is provided entirely differently from the previously displayed region of interest. The effect may be provided in various ways.

圖2說明根據示例性實施例,一種用於提供效果的圖形使用者介面(GUI)。 2 illustrates a graphical user interface (GUI) for providing effects, in accordance with an exemplary embodiment.

如圖2的200-1所示,裝置100可顯示任意螢幕影像。所述任意螢幕影像的實例可包括:藉由執行圖片冊應用程式而顯示的靜止影像、藉由執行攝影應用程式而顯示的即時取景影像、藉由執行移動圖片冊應用程式而顯示的移動圖片中的移動圖片訊框、以及包括用於執行應用程式的選單項的選單影像。裝置100可提供關於可在所述任意螢幕影像上使用的功能的功能視窗210。 As shown in 200-1 of FIG. 2, the device 100 may display an arbitrary screen image. Examples of the arbitrary screen image may include a still image displayed by executing a photo album application, a live view image displayed by executing a photography application, a moving picture frame of a moving picture displayed by executing a video album application, and a menu image including menu items for executing applications. The device 100 may provide a function window 210 regarding functions usable on the arbitrary screen image.

功能視窗210可提供代表可在任意螢幕影像上使用的功能的各種項。使用者可自功能視窗210選擇「編輯」項。當使用者選擇功能視窗210上的「編輯」項212時,裝置100可提供包括各種編輯項的編輯視窗220,如圖2的200-2所示。功能視窗210及編輯視窗220可為圖形使用者介面。 Function window 210 provides various items that represent functions that can be used on any screen image. The user can select the "Edit" item from the function window 210. When the user selects the "Edit" item 212 on the function window 210, the device 100 can provide an edit window 220 including various edit items, as shown at 200-2 of FIG. The function window 210 and the edit window 220 can be graphical user interfaces.

參照圖2的200-2,裝置100可於螢幕影像上顯示編輯視窗220以確定編輯方法。當使用者選擇編輯視窗220上的「效果編輯」項212時,裝置100可提供與現有部分影像完全不同地顯示影像的部分影像的效果。 Referring to 200-2 of FIG. 2, device 100 can display edit window 220 on the screen image to determine an editing method. When the user selects the "Effect Edit" item 212 on the edit window 220, the device 100 can provide an effect of displaying a partial image of the image completely different from the existing partial image.

現在,將詳細闡述向影像提供效果的實例。 An example of providing an effect to an image will now be elaborated.

圖3是根據示例性實施例,用於解釋一種向物體提供光環效果的方法的參考圖。如圖3的300-1所示,裝置100可在具體應用程式(例如,圖片冊應用程式)被執行的同時顯示至少一個影像。裝置100可接收將所述至少一個影像上的物體310選擇為感興趣區域的使用者輸入。藉由以手指或觸控工具觸摸其中顯示物體310的區、然後快速地抬起手指或觸控工具而不移動所述手指的敲擊動作,使用者可選擇其中顯示物體310的區。裝置100可利用圖形切割方法、位準設定方法等將顯示於所觸摸區上的物體與影像進行區分。裝置100可將物體310確定為感興趣區域。 FIG. 3 is a reference diagram for explaining a method of providing a halo effect to an object, according to an exemplary embodiment. As shown in 300-1 of FIG. 3, the device 100 may display at least one image while a specific application (e.g., a photo album application) is being executed. The device 100 may receive a user input of selecting an object 310 on the at least one image as a region of interest. The user may select the region in which the object 310 is displayed by a tap action of touching that region with a finger or a touch tool and then quickly lifting the finger or touch tool without moving it. The device 100 may distinguish the object displayed in the touched region from the rest of the image by using a graph cut method, a level set method, or the like. The device 100 may determine the object 310 as the region of interest.
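Graph cut and level set segmentation are beyond a short sketch, but the core idea — grow the touched region over similar connected pixels — can be illustrated with a flood fill from the touched coordinate. This is a crude stand-in, not the patent's method; the tolerance and single-channel image format are illustrative assumptions:

```python
from collections import deque

def region_from_touch(image, seed, tol=10):
    """Collect the connected set of pixels whose intensity stays
    within `tol` of the touched (seed) pixel -- a simplified stand-in
    for graph-cut segmentation of the touched object."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    seen = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                    and abs(image[ny][nx] - base) <= tol):
                seen.add((ny, nx))
                queue.append((ny, nx))
    return seen

img = [[0, 0, 90],
       [0, 5, 90],
       [90, 90, 90]]
print(len(region_from_touch(img, (0, 0))))  # the 4 connected dark pixels
```

A production implementation would typically use an energy-minimizing segmenter such as OpenCV's `cv2.grabCut`, seeded around the touched region, instead of a raw tolerance flood fill.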

如圖3的300-2所示,因應於使用者的選擇,裝置100可藉由完全突出顯示物體310而顯示與圖3的300-1上所顯示的物體310完全不同的物體320。其中感興趣區域相較於先前所顯示的感興趣區域被完全突出顯示的影像處理可被稱為光環效果。可突出顯示感興趣區域的輪廓線,抑或可突出顯示整個感興趣區域。 As shown in 300-2 of FIG. 3, the device 100 can display an object 320 that is completely different from the object 310 displayed on 300-1 of FIG. 3 by fully highlighting the object 310 in response to user selection. Image processing in which the region of interest is fully highlighted compared to the previously displayed region of interest may be referred to as a halo effect. The outline of the area of interest can be highlighted, or the entire area of interest can be highlighted.

圖4是根據示例性實施例,用於解釋一種向物體提供模糊效果的方法的參考圖。如圖4的400-1所示,裝置100可在具體應用程式(例如,圖片冊應用程式)被執行的同時顯示至少一個影像。裝置100可接收選擇所述至少一個影像上的物體410的使用者輸入。使用者可藉由在觸摸上面顯示有物體410的區(其為感興趣區域)的同時進行水平移動某一距離的橫掃動作而選擇物體410。然後,如圖4的400-2所示,因應於使用者的選擇,裝置100可藉由減小物體410中的畫素值之間的差異而顯示模糊的物體420。裝置100可根據橫掃時間週期或橫掃次數而改變模糊效果的程度。舉例而言,隨著橫掃時間週期或橫掃次數增加,模糊效果的程度可增大。 FIG. 4 is a reference diagram for explaining a method of providing a blurring effect to an object, according to an exemplary embodiment. As shown in 400-1 of FIG. 4, the device 100 can display at least one image while a specific application (eg, a photo album application) is being executed. Device 100 can receive a user input selecting an object 410 on the at least one image. The user can select the object 410 by performing a sweeping motion that moves horizontally a certain distance while touching the area on which the object 410 is displayed, which is the region of interest. Then, as shown in 400-2 of FIG. 4, the device 100 can display the blurred object 420 by reducing the difference between the pixel values in the object 410 in response to the user's selection. The device 100 can vary the degree of blurring depending on the sweep time period or the number of sweeps. For example, as the sweep time period or the number of sweeps increases, the degree of blurring effect may increase.
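The blur described above — reducing the differences between pixel values, with the degree growing with the swipe time period or swipe count — can be sketched as repeated box averaging. A 1-D row of pixels keeps the sketch short (a real image blur works in 2-D); one pass per swipe is an illustrative mapping:

```python
def box_blur_row(row, passes):
    """Average each pixel with its immediate neighbors, once per
    swipe; more passes -> smaller differences between neighbors."""
    for _ in range(passes):
        row = [
            sum(row[max(0, i - 1):i + 2]) / len(row[max(0, i - 1):i + 2])
            for i in range(len(row))
        ]
    return row

def max_step(row):
    """Largest difference between adjacent pixel values."""
    return max(abs(a - b) for a, b in zip(row, row[1:]))

row = [0, 0, 100, 100, 0, 0]
once = box_blur_row(row, 1)    # e.g., one swipe
twice = box_blur_row(row, 2)   # e.g., two swipes
print(max_step(row) > max_step(once) > max_step(twice))  # True
```

Each additional pass shrinks the maximum neighbor-to-neighbor difference, matching the text's statement that the degree of blurring increases with the number of swipes.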

圖5及圖6是根據示例性實施例,用於解釋一種向物體提供尺寸效果的方法的參考圖。 FIGS. 5 and 6 are reference diagrams for explaining a method of providing a size effect to an object, according to exemplary embodiments.

如圖5的500-1所示,裝置100可在具體應用程式(例如,圖片冊應用程式)被執行的同時顯示至少一個影像。裝置100可接收選擇所述至少一個影像上的物體510的使用者輸入。舉例而言,使用者可藉由在以兩個手指觸摸上面顯示有物體510的區(其為感興趣區域)的同時進行使所述兩個手指間的距離變寬的放大動作而選擇物體510。然後,裝置100可因應於對物體510的選擇而顯示放大的物體520,如圖5的500-2所示。所選擇的物體510被放大,然而未被選擇的物體及背景的尺寸不發生改變。放大率可取決於兩個手指之間的距離的變化。 As shown in 500-1 of FIG. 5, the device 100 can display at least one image while a specific application (eg, a photo album application) is being executed. Device 100 can receive a user input selecting an object 510 on the at least one image. For example, the user can select the object 510 by performing an enlargement action of widening the distance between the two fingers while touching the area on which the object 510 is displayed (which is the region of interest) with two fingers. . The device 100 can then display the magnified object 520 in response to selection of the object 510, as shown at 500-2 of FIG. The selected object 510 is enlarged, but the size of the unselected object and background does not change. The magnification can depend on the change in distance between the two fingers.

如圖6的600-1所示,裝置100可在具體應用程式(例如,圖片冊應用程式)被執行的同時顯示至少一個影像。裝置100可接收選擇所述至少一個影像上的物體610的使用者輸入。舉例而言,使用者可藉由在以兩個手指觸摸上面顯示有物體610的區(其為感興趣區域)的同時進行使所述兩個手指間的距離變窄的捏縮動作而選擇物體610。然後,裝置100可因應於對物體610的選擇而顯示尺寸縮小的物體620,如圖6的600-2所示。所選擇的物體610的尺寸被縮小,而未被選擇的物體及背景的尺寸不發生改變。然而,因縮小所選擇的物體的尺寸而在所選擇的物體與其他區之間產生的空間可利用例如鏡像技術(mirroring technique)而由未被選擇的物體及背景來填充。尺寸縮小率可取決於兩個手指之間的距離的變化。 As shown in 600-1 of FIG. 6, the device 100 may display at least one image while a specific application (e.g., a photo album application) is being executed. The device 100 may receive a user input of selecting an object 610 on the at least one image. For example, the user may select the object 610 by performing a pinching action of narrowing the distance between two fingers while touching the region on which the object 610 is displayed (which is the region of interest) with the two fingers. Then, in response to the selection of the object 610, the device 100 may display an object 620 of reduced size, as shown in 600-2 of FIG. 6. The size of the selected object 610 is reduced, while the sizes of the unselected objects and the background remain unchanged. However, the space created between the selected object and the other regions by reducing the size of the selected object may be filled with the unselected objects and the background by using, for example, a mirroring technique. The size reduction rate may depend on the change in the distance between the two fingers.
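In both figures the enlargement or reduction ratio follows the change in the distance between the two fingers. A minimal sketch of deriving that scale factor and applying it to an object's size (the names and coordinate format are illustrative assumptions):

```python
import math

def finger_distance(p1, p2):
    """Euclidean distance between two fingertip positions."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def pinch_scale(start1, start2, end1, end2):
    """Scale factor derived from the change in finger distance:
    > 1 for unpinching (fingers spread), < 1 for pinching."""
    d0 = finger_distance(start1, start2)
    d1 = finger_distance(end1, end2)
    return d1 / d0 if d0 > 0 else 1.0

def scaled_size(width, height, scale):
    """Apply the scale factor to the selected object's size only."""
    return round(width * scale), round(height * scale)

s = pinch_scale((100, 100), (200, 100), (50, 100), (250, 100))
print(s)                       # distance grew 100 px -> 200 px, so 2.0
print(scaled_size(80, 60, s))  # the selected object grows to (160, 120)
```

As the text notes, only the selected object is rescaled; the rest of the image keeps its original size, so a reduction (scale < 1) leaves a gap to be filled, e.g., by mirroring.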

效果提供可以是調整或產生感興趣區域的深度。圖7及圖8是根據示例性實施例,用於解釋一種向物體提供深度效果的方法的參考圖。如圖7的700-1所示,裝置100可在具體應用程式(例如,圖片冊應用程式)被執行的同時顯示至少一個影像。裝置100可接收選擇所述至少一個影像上的物體710的使用者輸入。舉例而言,使用者可藉由在觸摸上面顯示有物體710的區的同時抬高裝置100而將物體710選擇為感興趣區域。然後,如圖7的700-2所示,因應於對物體710的選擇,裝置100可顯示深度減小的物體720,使得物體720顯示於物體710之前(即,使得使用者感覺物體720相較於物體710更近)。 The provision of an effect may be adjusting or generating a depth of the region of interest. FIGS. 7 and 8 are reference diagrams for explaining a method of providing a depth effect to an object, according to exemplary embodiments. As shown in 700-1 of FIG. 7, the device 100 may display at least one image while a specific application (e.g., a photo album application) is being executed. The device 100 may receive a user input of selecting an object 710 on the at least one image. For example, the user may select the object 710 as the region of interest by raising the device 100 while touching the region on which the object 710 is displayed. Then, as shown in 700-2 of FIG. 7, in response to the selection of the object 710, the device 100 may display an object 720 having a reduced depth, such that the object 720 appears in front of the object 710 (i.e., the user perceives the object 720 as being closer than the object 710).

如圖8的800-1所示,裝置100可在具體應用程式(例如,圖片冊應用程式)被執行的同時顯示至少一個影像。裝置100可接收選擇所述至少一個影像上的物體810的使用者輸入。舉例而言,使用者可藉由在觸摸上面顯示有物體810的區的同時向下移動裝置100而將物體810選擇為感興趣區域。然後,如圖8的800-2所示,因應於對物體810的選擇,裝置100可顯示深度增加的物體820,使得物體820顯示於物體810之後(即,使得使用者感覺物體820遙遠)。 As shown in 800-1 of FIG. 8, the device 100 may display at least one image while a specific application (e.g., a photo album application) is being executed. The device 100 may receive a user input of selecting an object 810 on the at least one image. For example, the user may select the object 810 as the region of interest by moving the device 100 downward while touching the region on which the object 810 is displayed. Then, as shown in 800-2 of FIG. 8, in response to the selection of the object 810, the device 100 may display an object 820 having an increased depth, such that the object 820 appears behind the object 810 (i.e., the user perceives the object 820 as being farther away).

裝置100可基於使用者的手勢來判斷待提供的效果的類型,但亦可基於使用者自被提供的效果清單所選擇的效果項來判斷待提供的效果的類型。圖9是根據示例性實施例,用於解釋一種顯示效果清單的方法的參考圖。 The device 100 may determine the type of effect to be provided based on a user's gesture, but may also determine the type of effect to be provided based on an effect item that the user selects from a provided effect list. FIG. 9 is a reference diagram for explaining a method of displaying an effect list, according to an exemplary embodiment.

如圖9的900-1所示,在裝置100的模式被設定為效果模式時,裝置100可顯示影像。使用者可選擇影像的上面顯示有物體910的部分區。然後,裝置100可將物體910確定為感興趣區域,並顯示關於適用於所述感興趣區域的效果的效果清單920,如圖9的900-2所示。 As shown in 900-1 of FIG. 9, when the mode of the device 100 is set to the effect mode, the device 100 can display an image. The user can select a partial area of the object 910 on which the image is displayed. The device 100 can then determine the object 910 as the region of interest and display an effect list 920 regarding the effects applicable to the region of interest, as shown at 900-2 of FIG.

可以彈出視窗的形式顯示效果清單920,且可以正文的形式顯示效果清單920中所包含的效果項。舉例而言,所述效果項可包括:突出顯示感興趣區域的光環效果、減小感興趣區域的畫素值之間的差異的模糊效果、調整感興趣區域的尺寸的尺寸效果、以及調整感興趣區域的深度的深度效果。使用者可自所述效果項中選擇一者,且裝置100可因應於使用者輸入而向感興趣區域提供效果。 The effect list 920 can be displayed in the form of a pop-up window, and the effect items included in the effect list 920 can be displayed in the form of a text. For example, the effect item may include: a halo effect that highlights a region of interest, a blur effect that reduces a difference between pixel values of the region of interest, a size effect that adjusts a size of the region of interest, and a sense of adjustment The depth effect of the depth of the area of interest. The user can select one of the effect items, and the device 100 can provide an effect to the region of interest in response to user input.

至此,已闡述了一種選擇物體並向所選擇的物體提供效果的方法。然而,裝置100可向多個物體提供相同的效果,抑或向所述多個物體中的至少二者提供不同的效果。 So far, a method of selecting an object and providing an effect to the selected object has been described. However, device 100 can provide the same effect to multiple objects or provide different effects to at least two of the plurality of objects.

圖10是根據示例性實施例,用於解釋一種向影像中的多個物體提供效果的方法的參考圖。如圖10的1000-1所示,裝置100可在具體應用程式(例如,圖片冊應用程式)被執行的同時顯示至少一個影像。裝置100可接收將所述至少一個影像上的第一物體1010選擇為感興趣區域的使用者輸入。舉例而言,使用者可觸摸所述至少一個影像的上面顯示有第一物體1010的部分區。然後,裝置100可判斷已接收到將第一物體1010選擇為感興趣區域的使用者輸入,並可顯示具有光環效果的第一物體1012,如圖10的1000-2所示。裝置100可接收將所述至少一個影像上的第二物體1020選擇為感興趣區域的使用者輸入。舉例而言,藉由在以兩個手指觸摸上面顯示有第二物體1020的區的同時進行使所述兩個手指之間的距離變寬的放大動作,使用者可輸入用於選擇第二物體1020的命令。然後,裝置100可因應於對第二物體1020的選擇而放大第二物體1020並顯示被放大的第二物體1022,如圖10的1000-3所示。 FIG. 10 is a reference diagram for explaining a method of providing effects to a plurality of objects in an image, according to an exemplary embodiment. As shown in 1000-1 of FIG. 10, the device 100 may display at least one image while a specific application (e.g., a photo album application) is being executed. The device 100 may receive a user input of selecting a first object 1010 on the at least one image as a region of interest. For example, the user may touch a partial region of the at least one image on which the first object 1010 is displayed. Then, the device 100 may determine that a user input of selecting the first object 1010 as the region of interest has been received, and may display a first object 1012 having a halo effect, as shown in 1000-2 of FIG. 10. The device 100 may receive a user input of selecting a second object 1020 on the at least one image as a region of interest. For example, by performing an unpinching action of widening the distance between two fingers while touching the region on which the second object 1020 is displayed with the two fingers, the user may input a command for selecting the second object 1020. Then, in response to the selection of the second object 1020, the device 100 may enlarge the second object 1020 and display an enlarged second object 1022, as shown in 1000-3 of FIG. 10.

效果提供不僅可應用至影像中的物體,且亦可應用至影像中的背景。圖11是根據示例性實施例,用於解釋一種向背景提供效果的方法的參考圖。裝置100可在具體應用程式(例如,圖片冊應用程式)被執行的同時顯示至少一個影像。如圖11的1100-1所示,裝置100可接收將所述至少一個影像上的背景1110選擇為感興趣區域的使用者輸入。使用者可藉由觸摸並橫掃所述至少一個影像的上面顯示有背景1110的部分區而選擇背景1110。然後,如圖11的1100-2所示,裝置100可因應於對背景1110的選擇藉由減小背景1110中的畫素值之間的差異而提供模糊效果,並可顯示模糊的背景1120。除模糊效果以外,亦可應用使得背景能夠相較於先前的背景被完全不同地顯示的其他類型的效果。 The provision of an effect is applicable not only to an object in an image but also to the background of the image. FIG. 11 is a reference diagram for explaining a method of providing an effect to a background, according to an exemplary embodiment. The device 100 may display at least one image while a specific application (e.g., a photo album application) is being executed. As shown in 1100-1 of FIG. 11, the device 100 may receive a user input of selecting a background 1110 on the at least one image as a region of interest. The user may select the background 1110 by touching and swiping the partial region of the at least one image on which the background 1110 is displayed. Then, as shown in 1100-2 of FIG. 11, in response to the selection of the background 1110, the device 100 may provide a blur effect by reducing the differences between pixel values in the background 1110, and may display a blurred background 1120. Besides the blur effect, other types of effects that enable the background to be displayed entirely differently from the previous background may also be applied.

可向背景及物體二者提供效果。圖12A是根據示例性實施例,用於解釋一種向物體及背景二者提供效果的方法的參考圖。參照圖12A的1200-1,裝置100可在具體應用程式(例如,圖片冊應用程式)被執行的同時顯示至少一個影像。裝置100可接收將所述至少一個影像上的第一物體1210選擇為感興趣區域的使用者輸入。舉例而言,使用者可藉由觸摸所述至少一個影像的上面顯示有第一物體1210的部分區而選擇第一物體1210。然後,裝置100可因應於對第一物體1210的選擇而向第一物體1210提供光環效果。光環效果是突出顯示使用者所選擇的物體的輪廓線的效果。 Effects may be provided to both the background and an object. FIG. 12A is a reference diagram for explaining a method of providing effects to both an object and a background, according to an exemplary embodiment. Referring to 1200-1 of FIG. 12A, the device 100 may display at least one image while a specific application (e.g., a photo album application) is being executed. The device 100 may receive a user input of selecting a first object 1210 on the at least one image as a region of interest. For example, the user may select the first object 1210 by touching a partial region of the at least one image on which the first object 1210 is displayed. Then, in response to the selection of the first object 1210, the device 100 may provide a halo effect to the first object 1210. The halo effect is an effect of highlighting the outline of the object selected by the user.

As shown in 1200-2 of FIG. 12A, the device 100 may display an object 1212 having the halo effect. The device 100 may receive a user input that selects a background 1220 of the at least one image as a region of interest. For example, the user may select the background 1220 by touching and then sweeping a partial region of the at least one image on which the background 1220 is displayed. Then, as shown in 1200-2 of FIG. 12A, in response to the selection of the background 1220, the device 100 may display a blurred background 1222 by reducing the differences between pixel values in the background 1220.

It has been described above that a preset effect is provided when a region of interest is selected in response to a user input. However, exemplary embodiments are not limited thereto. The user input that selects the region of interest and the user input for providing the effect may be separate from each other. A plurality of user inputs may be received consecutively or with a certain time difference. Alternatively, the user input that selects the region of interest and the user input for providing the effect may be identical to each other.

FIG. 12B is a reference diagram for explaining a method of providing an effect in response to a plurality of user inputs, according to an exemplary embodiment. The device 100 may display at least one image while a specific application (for example, a photo album application) is being executed. As shown in 1200-4 of FIG. 12B, the device 100 may receive a first user input that selects a background 1260 of the at least one image as a region of interest. The user may touch a partial region of the at least one image on which the background 1260 is displayed. Then, the device 100 may receive the touch as the first user input and determine the background 1260 to be the region of interest. The device 100 may separate an object from the background by detecting the outline of the object in the image. The device 100 may determine whether the touched region is a region on which an object is displayed or a region on which the background is displayed. Since the region on which the background is displayed has been touched in 1200-4 of FIG. 12B, the device 100 may determine the background 1260 to be the region of interest.

As shown in 1200-5 of FIG. 12B, the device 100 may provide an indicator 1270 that highlights the boundary of the background 1260. By looking at the indicator 1270, the user may determine whether the region of interest has been properly selected. The device 100 may selectively display the indicator 1270 according to a user setting. The device 100 may receive a second user input for providing an effect to the background. For example, the user may drag the region on which the background is displayed in a specific direction. The first user input and the second user input may be received consecutively. For example, the user may touch the background 1260 (the first user input) and then immediately drag it (the second user input).

Then, as shown in 1200-6 of FIG. 12B, the device 100 may receive the drag as the second user input, provide a flow effect that makes the background 1260 flow in the drag direction, and display a flowing background 1262. The flow effect makes the image appear to flow, and corresponds to correcting the pixel values of pixels based on the pixel values of the pixels arranged earlier along the drag direction.
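Correcting each pixel from the pixels that precede it along the drag direction can be sketched as a one-dimensional motion smear. A minimal grayscale sketch, assuming a rightward drag and a fixed smear length (both illustrative choices, not prescribed by the patent):

```python
def flow_right(row, length=2):
    """Smear a 1-D row of grayscale values to the right.

    Each output pixel is the average of itself and up to `length`
    pixels behind it (to its left), which makes the row appear to
    flow in the drag direction.
    """
    out = []
    for i in range(len(row)):
        window = row[max(0, i - length): i + 1]
        out.append(sum(window) // len(window))
    return out

# A single bright pixel gets dragged rightward into a trailing streak.
print(flow_right([100, 0, 0, 0]))  # → [100, 50, 33, 0]
```

For an arbitrary drag direction, the same averaging would be applied along the drag vector rather than along rows.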

By editing one image, other images may be provided with the same effect. FIG. 13 illustrates a graphical user interface for providing an effect to a plurality of images, according to an exemplary embodiment. Referring to 1300-1 of FIG. 13, the device 100 has provided an effect to an object of an image in response to a user input. The device 100 may display an inquiry window 1310 asking whether the same effect is to be provided to other images.

The device 100 may receive a user input requesting that the same effect be applied to other images. For example, the user may touch the region of the inquiry window 1310 on which "Yes" is displayed. Then, in response to the user input, the device 100 may display a list 1320 of images to which the effect can be applied, as shown in 1300-2 of FIG. 13. When the user selects a specific image from the list 1320, the device 100 may provide the same effect to the selected image.

For convenience of explanation, an image that is examined to determine whether it can be provided with an effect is hereinafter referred to as a target image. An image used to select a region of interest is referred to as a first image, and an image among the target images to which an effect is provided, or which is used for effect provision, is referred to as a second image.

The device 100 may search for or acquire a second image from among the target images in order to provide an effect to the second image. The device 100 may search for the second image by using identification information that identifies the region of interest (that is, the object or the background) of the first image.

"Identification information" denotes a keyword, a key phrase, or the like used to identify an image, and identification information may be defined for each object and each background. The object and the background may each have at least one piece of identification information. According to an exemplary embodiment, the identification information may be acquired by using attribute information of an image or image analysis information of the image.

FIG. 14 is a flowchart of a method in which the device 100 provides an effect to a second image by using identification information of a first image, according to an exemplary embodiment.

In operation S1410, the device 100 may select a region of interest from a first image. For example, as described above, the device 100 may display the first image and select an object or a background in the first image as the region of interest in response to a user input. The device 100 may provide an effect to the region of interest of the first image, or may provide an effect to the region of interest of the first image and then provide an effect to the second image. The first image may be a still image, a moving picture frame that is a part of a moving picture (that is, a still image of the moving picture), or a live view image. When the first image is a still image or a moving picture frame of a moving picture, the still image or the moving picture may be an image previously stored in the device 100, or may be an image stored in and transmitted from an external device. When the first image is a live view image, the live view image may be an image captured by a camera built into the device 100, or an image captured and transmitted by a camera serving as an external device.

In operation S1420, the device 100 may determine whether identification information is defined for the selected region of interest. For example, when an image is stored, identification information respectively describing the object and the background included in the image may be matched with the image and stored. In this case, the device 100 may determine that identification information is defined for the selected region of interest. According to an exemplary embodiment, the identification information respectively corresponding to the object and the background may be stored in the form of metadata for each image.

In operation S1430, if no identification information is defined for the selected region of interest, the device 100 may generate identification information. For example, the device 100 may generate the identification information by using attribute information stored in the form of metadata, or by using image analysis information acquired by performing image processing on the image. Operation S1430 will be described below in more detail with reference to FIG. 15.

In operation S1440, the device 100 may search the target images for a second image having the identification information. The target images may be, for example, still images or moving pictures stored in the device 100 according to a user input, or still images or moving pictures stored in an external device. When searching a moving picture for the second image, the device 100 may search for a moving picture frame having the identification information.

The identification information may or may not be predefined for a target image. If identification information is predefined for the target image, the device 100 may search for the second image based on whether the identification information of the target image is identical to the identification information of the region of interest. If no identification information is predefined for the target image, the device 100 may generate identification information for the target image (as in operation S1430). The device 100 may then search for the second image based on whether the generated identification information of the target image is identical to the identification information of the region of interest.
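The matching in operation S1440 reduces to a keyword lookup over per-image metadata: a target image qualifies as a second image when its identification information contains the identification information of the region of interest. A sketch under the assumption that each image's identification information is stored as a set of keywords, as the metadata examples in this description suggest:

```python
def find_second_images(target_images, roi_ids):
    """Return names of target images whose identification information
    includes every piece of identification information of the region
    of interest (roi_ids)."""
    roi_ids = set(roi_ids)
    return [name for name, ids in target_images.items()
            if roi_ids <= set(ids)]  # subset test = "same identification info"

# Hypothetical photo-album metadata.
targets = {
    "img_001": {"mother", "smile", "park"},
    "img_002": {"father", "sky"},
    "img_003": {"mother", "wink"},
}
print(find_second_images(targets, {"mother"}))  # → ['img_001', 'img_003']
```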

When there are a plurality of pieces of identification information for the region of interest of the first image, the device 100 may search for an image by using at least some of the plurality of pieces of identification information. Alternatively, the device 100 may provide a list of the identification information to the user so that the user can select identification information. The device 100 may receive at least one piece of identification information selected from the identification information list. According to an exemplary embodiment, the device 100 may receive an input that selects all of the identification information or an input that selects only some of the identification information.

According to exemplary embodiments, the user input that selects identification information may vary. For example, the user input may be at least one selected from a key input, a touch input, a motion input, a bending input, a voice input, and a multiple input.

In operation S1450, the device 100 may provide an effect to the found second image. The device 100 may use the identification information to distinguish, within the second image, a partial image corresponding to the region of interest, and may provide the distinguished partial image with the same effect as the effect applied to the region of interest of the first image. In other words, in operation S1410 the device 100 distinguishes the region of interest in the first image in response to the user's selection, whereas in operation S1450 the device 100 uses the identification information to distinguish the partial image corresponding to the region of interest within the second image.

FIG. 15 is a flowchart of a method in which the device 100 generates identification information, according to an exemplary embodiment. FIG. 15 illustrates a case where identification information of the region of interest in the first image has not been predefined. The identification information generating method of FIG. 15 is also applicable to a case where identification information of a target image is generated.

In operation S1510, the device 100 may determine whether attribute information corresponding to the region of interest exists. For example, the device 100 may check metadata corresponding to the region of interest. The device 100 may extract the attribute information of the region of interest from the metadata.

According to an exemplary embodiment, the attribute information represents attributes of an image, and may include context information associated with image generation and annotation information added by a user.

The context information is environment information associated with an image during generation of the image. For example, the context information may include at least one of information about the format of the image, information about the size of the image, information about the device 100 used to generate the image, time information of the image generation, temperature information at the image generation, and source information of the image. The device 100 may automatically acquire and store the context information.

The annotation information is information recorded by a user, and may include information about an object included in the image (for example, the type, name, and state of the object) and information about the background included in the image (for example, location information, time information, and weather information).

In operations S1520 and S1540, the device 100 may generalize the attribute information of the image and generate identification information.

Generalizing the attribute information may mean expressing the attribute information in an upper-level language based on WordNet (a hierarchical term reference system).

"WordNet" is a database that provides definitions or usage patterns of words and establishes relationships between words. The basic structure of WordNet includes logical groups called synsets, each of which is a set of semantically equivalent words, and the semantic relationships between these synsets. The semantic relationships include hypernyms, hyponyms, meronyms, and holonyms. Nouns included in WordNet have "entity" as the uppermost word, and hyponyms are formed by expanding the entity according to meaning. Thus, WordNet may also be called an ontology having a hierarchical structure obtained by classifying and defining conceptual vocabulary.

"Ontology" denotes a formal and explicit specification of a shared conceptualization. An ontology may be considered a kind of dictionary composed of words and relationships. In an ontology, words associated with a specific domain are expressed hierarchically, and inference rules for extending the words are included.

For example, when the region of interest is a background, the device 100 may generalize location information included in the attribute information into upper-level information and generate identification information. For example, the device 100 may express global positioning system (GPS) coordinate values (latitude: 37.4872222, longitude: 127.0530792) as upper-level concepts, such as a zone, a building, an address, a region name, a city name, or a country name. In this case, the building, the region name, the city name, the country name, and the like may be generated as identification information of the background.
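Expressing a GPS coordinate as a chain of upper-level concepts requires some reverse-geocoding source, which the patent does not specify. A sketch assuming a hypothetical lookup table of bounding boxes, one per named place, ordered from most specific to most general (all place names and coordinates below are illustrative assumptions):

```python
# Hypothetical bounding boxes: (min_lat, max_lat, min_lon, max_lon).
PLACES = [
    ("Yangjae Citizen's Forest", (37.46, 37.49, 127.03, 127.06)),
    ("Seoul",                    (37.40, 37.70, 126.80, 127.20)),
    ("South Korea",              (33.00, 38.70, 124.50, 131.00)),
]

def generalize(lat, lon):
    """Return every place name whose bounding box contains the
    coordinate, from the most specific entry to the most general.
    Each returned name can serve as one piece of identification
    information for the background."""
    return [name for name, (lat0, lat1, lon0, lon1) in PLACES
            if lat0 <= lat <= lat1 and lon0 <= lon <= lon1]

print(generalize(37.4872222, 127.0530792))
# → ["Yangjae Citizen's Forest", 'Seoul', 'South Korea']
```

A production system would query a geographic database or reverse-geocoding service instead of a static table, but the resulting hierarchy of identification information is the same.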

In operations S1530 and S1540, if no attribute information corresponding to the region of interest exists, the device 100 may acquire image analysis information of the region of interest and generate the identification information of the region of interest by using the image analysis information.

According to an exemplary embodiment, the image analysis information is information corresponding to a result of analyzing data acquired via image processing. For example, the image analysis information may include information about an object displayed on an image (for example, the type, state, and name of the object), information about a location displayed on the image, information about a season or time displayed on the image, and information about an atmosphere or emotion displayed on the image, but exemplary embodiments are not limited thereto.

For example, when the region of interest is an object, the device 100 may detect the outline of an object included in the image. According to an exemplary embodiment, the device 100 may compare the outline of the object included in the image with predefined templates and acquire the type, name, and the like of the object. For example, when the outline of the object is similar to a vehicle template, the device 100 may recognize the object included in the image as a vehicle. In this case, the device 100 may generate the identification information "car" by using the information about the object included in the image.

Alternatively, the device 100 may perform face recognition on an object included in the image. For example, the device 100 may detect a face region of a person from the image. Examples of face region detecting methods include a knowledge-based method, a feature-based method, a template-matching method, and an appearance-based method, but exemplary embodiments are not limited thereto.

The device 100 may extract facial features (for example, the shapes of the eyes, nose, and mouth, which are the major parts of a face) from the detected face region. To extract the facial features from the face region, a Gabor filter, local binary patterns (LBP), or the like may be used, but exemplary embodiments are not limited thereto.

The device 100 may compare the facial features extracted from the face region of the image with facial features of pre-registered users. For example, when the extracted facial features are similar to the facial features of a pre-registered first registrant, the device 100 may determine that the first user is included as a partial image in the selected image. In this case, the device 100 may generate identification information "first user" based on the result of the face recognition.
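Comparing extracted facial features against pre-registered users reduces to a nearest-neighbor search over feature vectors. A sketch under the assumption that the features are already fixed-length numeric vectors (for example, produced by a Gabor or LBP stage) and that a distance threshold decides whether anyone matches; the threshold value and vectors below are illustrative:

```python
import math

def identify(features, registered, threshold=1.0):
    """Return the name of the registered user whose feature vector is
    closest to `features`, or None if no one is within `threshold`."""
    best_name, best_dist = None, threshold
    for name, vec in registered.items():
        dist = math.dist(features, vec)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Hypothetical pre-registered feature vectors.
registered = {"user 1": [0.9, 0.1, 0.4], "user 2": [0.2, 0.8, 0.7]}
print(identify([0.85, 0.15, 0.45], registered))  # → user 1
```

The matched name then becomes a piece of identification information for the object.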

According to an exemplary embodiment, the device 100 may compare a certain region of an image with a color map (color histogram) and extract visual features of the image (for example, color arrangement, pattern, and atmosphere) as image analysis information. The device 100 may generate identification information by using the visual features of the image. For example, when the image includes a sky background, the device 100 may generate the identification information "sky" by using the visual features of the sky background.
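The color-histogram comparison can be sketched as quantizing a region's pixel values into bins and measuring the overlap with a labeled reference histogram. A minimal sketch on single-channel integer values; the "sky" reference distribution is an illustrative assumption:

```python
from collections import Counter

def histogram(values, bins=4, top=256):
    """Quantize values into `bins` equal buckets and normalize."""
    counts = Counter(min(v * bins // top, bins - 1) for v in values)
    total = len(values)
    return [counts.get(b, 0) / total for b in range(bins)]

def overlap(h1, h2):
    """Histogram intersection: 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Hypothetical reference: "sky" regions are dominated by high (blue) values.
sky_ref = [0.0, 0.0, 0.2, 0.8]
region = [200, 210, 250, 190, 140]  # a mostly-bright candidate region
score = overlap(histogram(region), sky_ref)
print(round(score, 2))  # → 0.8, a strong match for the "sky" label
```

A high overlap score would let the device attach the reference's label ("sky") as identification information for the region.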

According to an exemplary embodiment, the device 100 may divide an image into regions, search for the cluster most similar to each region, and generate identification information associated with the found cluster.

If no attribute information corresponding to an image exists, the device 100 may acquire image analysis information of the image and generate identification information of the image by using the image analysis information.

FIG. 15 illustrates an exemplary embodiment in which the device 100 acquires image analysis information of an image when no attribute information of the image exists, but exemplary embodiments are not limited thereto.

For example, the device 100 may generate identification information by using only the image analysis information or only the attribute information. Alternatively, the device 100 may additionally acquire image analysis information even when attribute information exists. In this case, the device 100 may generate identification information by using both the attribute information and the image analysis information.

According to an exemplary embodiment, the device 100 may compare identification information generated based on the attribute information with identification information generated based on the image analysis information, and determine the common identification information to be the final identification information. The common identification information may have higher reliability than the non-common identification information. Reliability denotes the degree to which identification information extracted from an image is trusted to be proper identification information.
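Determining the final identification information as the intersection of the two sources is a set operation; one reasonable way to retain the rest is to keep it with a lower reliability mark. A sketch, using example keywords from this description:

```python
def merge_identification(attr_ids, analysis_ids):
    """Identification information appearing in both the attribute-based
    and the analysis-based sets becomes final (higher reliability);
    the remainder is kept with lower reliability."""
    common = set(attr_ids) & set(analysis_ids)
    rest = (set(attr_ids) | set(analysis_ids)) - common
    return {"final": sorted(common), "low_reliability": sorted(rest)}

result = merge_identification({"park", "cloudy", "spring"},
                              {"park", "sky", "cloudy"})
print(result)
# → {'final': ['cloudy', 'park'], 'low_reliability': ['sky', 'spring']}
```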

FIG. 16 illustrates attribute information of images, according to an exemplary embodiment. As shown in FIG. 16, the attribute information of an image may be stored in the form of metadata. For example, for each image, data such as a type 1610, a time 1611, a GPS 1612, a resolution 1613, a size 1614, and a collecting device 1617 may be stored as attribute information.

According to an exemplary embodiment, context information used during image generation may also be stored in the form of metadata. For example, when the device 100 generates a first image 1601, the device 100 may collect weather information (for example, cloudy), temperature information (for example, 20°C), and the like from a weather application at the moment when the first image 1601 is generated. The device 100 may store weather information 1615 and temperature information 1616 as attribute information of the first image 1601. The device 100 may also collect event information (not shown) from a schedule application at the moment when the first image 1601 is generated. In this case, the device 100 may store the event information as attribute information of the first image 1601.

According to an exemplary embodiment, user-added information 1618 input by a user may also be stored in the form of metadata. For example, the user-added information 1618 may include annotation information input by the user to explain an image, and information about an object explained by the user.

According to an exemplary embodiment, image analysis information (for example, object information 1619) acquired as a result of image processing performed on an image may be stored in the form of metadata. For example, the device 100 may store information about the objects included in the first image 1601 (for example, user 1, user 2, me, and a chair) as attribute information of the first image 1601.

FIG. 17 is a reference diagram for explaining an example in which the device 100 generates identification information of an image based on attribute information of the image.

According to an exemplary embodiment, the device 100 may select a background 1712 of an image 1710 as a region of interest based on a user input. In this case, the device 100 may check attribute information of the selected background 1712 within attribute information 1720 of the image 1710. The device 100 may detect identification information 1730 by using the attribute information of the selected background 1712.

For example, when the region selected as the region of interest is a background, the device 100 may detect information associated with the background from the attribute information 1720. The device 100 may generate the identification information "park" by using location information (for example, latitude: 37;25;26.928..., longitude: 126;35;31.235...) in the attribute information 1720, or may generate the identification information "cloudy" by using weather information (for example, cloud) in the attribute information 1720. The device 100 may also combine a plurality of pieces of attribute information to generate new identification information. For example, when the time information in the attribute information 1720 is 2012.5.3.15:13 and the location information therein is latitude: 37;25;26.928... and longitude: 126;35;31.235..., the device 100 may determine the region displayed on the image 1710 by using the location information, and may further determine the season displayed on the image 1710 by using the time information in addition to the location information. For example, when the location information indicates "Korea", the device 100 may generate identification information "spring" about the season by using the time information. As another example, the device 100 may generate the identification information "a rainy spring day" by using the identification information about the season, generated based on the location information and the time information, together with the weather information.
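Combining location and time into a season label, and then combining the season with weather into a compound label such as "a rainy spring day", can be sketched as follows. The month-to-season mapping and the hemisphere shift are illustrative assumptions (the patent only says the season is derived from location plus time):

```python
def season_for(month, hemisphere="north"):
    """Map a month to a season; the southern hemisphere is shifted
    by six months."""
    if hemisphere == "south":
        month = (month + 5) % 12 + 1
    return {3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer",
            9: "autumn", 10: "autumn", 11: "autumn"}.get(month, "winter")

def combined_id(month, weather, hemisphere="north"):
    """Merge season and weather into one piece of identification info."""
    return f"a {weather} {season_for(month, hemisphere)} day"

# Image taken 2012-05-03 in Korea (northern hemisphere), weather "rainy".
print(combined_id(5, "rainy"))  # → a rainy spring day
```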

Alternatively, the device 100 may generate identification information "smile" and "user 1" corresponding to object information from annotation information added by the user.

When the context information and the annotation information contradict each other, the device 100 may generate identification information by using image analysis information. For example, when the weather information included in the context information indicates rain while the weather information included in the annotation information indicates clouds, the device 100 may determine whether the weather is rainy or cloudy by using the image analysis information. However, exemplary embodiments are not limited thereto. When the context information and the annotation information contradict each other, the device 100 may instead give priority to the annotation information and generate the identification information by using the annotation information.

FIG. 18 is a reference diagram for explaining an example in which the device 100 generates identification information by using image analysis information. According to an exemplary embodiment, the device 100 may select a first object 1812 of an image 1810 as a region of interest based on a user input. In this case, the device 100 may generate identification information describing the first object 1812 (for example, a person and a smiling face) by performing image analysis on the first object 1812.

For example, the device 100 may detect a face region from the region of interest. The device 100 may extract facial features from the detected face region. The device 100 may compare the extracted facial features with the facial features of pre-registered users and generate identification information representing that the selected first object 1812 is user 1. The device 100 may also generate the identification information "smile" based on the shape of the lips included in the detected face region. The device 100 may thus acquire "user 1" and "smile" as identification information 1820.

When there are multiple pieces of identification information for the region of interest, the device 100 may display an identification information list so that the user may select identification information. FIG. 19 illustrates an example in which the device 100 displays an identification information list, according to an exemplary embodiment.

Referring to 1900-1 of FIG. 19, the device 100 may select a first object 1912 of a first image 1910 as a region of interest, based on a user input. According to an exemplary embodiment, the device 100 may obtain identification information describing the first object 1912. For example, the device 100 may obtain identification information such as smile, mother, and wink.

Referring to 1900-2 of FIG. 19, the device 100 may display an identification information list 1920 of the obtained identification information. In this case, the device 100 may receive a user input that selects at least some identification information from the identification information list 1920. For example, the device 100 may receive a user input that selects mother 1922. The device 100 may search target images (for example, a photo album) for a second image having the identification information selected by the user (for example, mother), provide an effect to the partial image corresponding to the mother in the second image, and then display a second image 1930 in which the effect has been applied to the partial image corresponding to the mother, as shown in 1900-3 of FIG. 19.

When there are multiple second images to which the effect is applied, the device 100 may generate a folder (hereinafter referred to as an effect folder) and store the second images to which the effect is applied (hereinafter referred to as effect images) in the effect folder. Each effect image may include at least one selected from the first image and the second image. Although the device 100 may store the effect images themselves in the effect folder, the device 100 may instead store link information of the effect images in the effect folder.

FIG. 20 illustrates an example in which the device 100 displays an effect folder. When the search for second images is completed, the device 100 may provide an effect to the second images. As shown in 2000-1 of FIG. 20, the device 100 may display an effect folder 2010. The effect folder 2010 may store effect images. In other exemplary embodiments, link information of the effect images may be stored in the effect folder 2010.

A user may input a command for selecting the effect folder 2010. In response to the user input, the device 100 may display at least one effect image 2020, as shown in 2000-2 of FIG. 20.

According to an exemplary embodiment, the device 100 may arrange the at least one effect image included in the effect folder 2010 based on at least one selected from image generation time information, image generation location information, image capacity information, and image resolution information.
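The ordering criteria above reduce to a key-based sort; the metadata field names below are assumptions made for illustration.

```python
def arrange_effect_images(images, key="generated_at", descending=False):
    """Sort effect images by one metadata field.

    `images` is a list of dicts; the keys used here ("generated_at",
    "location", "capacity", "resolution") are assumed field names.
    """
    return sorted(images, key=lambda img: img[key], reverse=descending)
```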

When there are different types of effect images, the device 100 may generate an effect folder per type of effect image and store effect images of the same type in a single effect folder. The device 100 may select a new region of interest on an image included in an effect folder and provide a new effect to the selected region of interest. When an effect folder contains many effect images to which a new effect has been applied, a new effect folder may be generated within that effect folder.

As described above, an effect may be provided not only to images stored in the device 100 but also to images stored in an external device. The external device may be, for example, a social network service (SNS) server, a cloud server or remote server, or a device 100 used by another user. FIG. 21 is a flowchart of a method in which a device provides an effect to an image stored in an external device, according to an exemplary embodiment.

In operation S2110, the external device 200 may store at least one image. One of the images stored in the external device 200 may be a first image. The external device 200 may be a server that provides an SNS to the device 100 connected thereto via a network, may be a portable terminal connected to the device 100 via a network, or may be a cloud server. An SNS denotes a service that enables online users to build new social relationships or strengthen existing ones.

According to an exemplary embodiment, the external device 200 may store images uploaded from the devices 100 of several users.

In operation S2120, the device 100 may connect to the external device 200. The device 100 may connect to the external device 200 by performing a login. The login may be a procedure for obtaining access rights to the images stored in the external device 200. For example, the device 100 may request the external device 200 to perform user authorization while transmitting identification information of the user (for example, e-mail account information) and authentication information of the user (for example, a password) to the external device 200. When the user is identified as an authorized user, the device 100 may be allowed to connect to the external device and access the images stored therein.

In operation S2130, the device 100 may receive a first image from among the images stored in the external device 200. The device 100 may request one of the images stored in the external device 200 as the first image. In response to this request, the external device 200 may transmit the first image to the device 100. The first image may include an object and a background. The first image may be a still image, a moving picture frame, or a live view image.

In operation S2140, the device 100 may select a region of interest from the first image. For example, the device 100 may receive a user input that selects a partial region of the first image, detect an outline surrounding the selected partial region, and select the partial region surrounded by the outline as the region of interest. The region surrounded by the outline may be an object or a background.

According to an exemplary embodiment, the user input for selecting the region of interest may vary. The user input may be, for example, a key input, a touch input, a motion input, a bending input, a voice input, or a multiple input. For example, the device 100 may receive an input that touches specific content among a plurality of images stored in the external device 200 for at least a predetermined period of time (for example, two seconds), or that touches the specific content at least a predetermined number of times (for example, a double tap).

In operation S2150, the device 100 may query the external device 200 about identification information that identifies the selected region of interest. In operation S2160, the device 100 may receive a response regarding the identification information from the external device 200. The device 100 may ask whether identification information about the object or background corresponding to the region of interest has been predefined. When the identification information has been predefined, the external device 200 may transmit the identification information about the region of interest to the device 100.

In certain exemplary embodiments, when identification information about the object or background has not been predefined in the external device 200, the external device 200 may determine whether it is able to generate identification information. If it is determined that the external device 200 is able to generate identification information, the external device 200 may generate identification information about the region of interest and transmit it to the device 100. When generating the identification information, the external device 200 may use at least one of attribute information of the region of interest and image analysis information. On the other hand, if it is determined that the external device 200 is unable to generate identification information, the external device 200 may transmit to the device 100 only the information about the region of interest that it has. In certain exemplary embodiments, the external device 200 may transmit to the device 100 a response indicating that it is unable to generate identification information.

In operation S2170, the device 100 may obtain the identification information of the region of interest based on the response of the external device 200. The device 100 may receive the identification information of the region of interest from the external device 200, or may generate the identification information of the region of interest by using at least one selected from attribute information of the region of interest and image analysis information.

In operation S2180, the device 100 may search for a second image having the identification information. The device 100 may search target images for a second image having the identification information. The target images may be images stored in the device 100. Alternatively, the target images may be images stored in the external device 200, or images stored in an external device other than the external device 200 of FIG. 21. When searching for the second image, the device 100 may use identification information stored for the target images. When identification information has not been predefined, the device 100 may generate identification information of the target images by using attribute information or image analysis information of the target images, and then search for a second image having common identification information.
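A minimal sketch of this search step follows; the tag-generation callback and the metadata layout are assumptions made for illustration.

```python
def find_second_images(target_images, query_tags, generate_tags):
    """Return target images whose identification info contains all query tags.

    `target_images` is a list of dicts with an optional "tags" set; when
    tags are missing, `generate_tags(image)` stands in for deriving
    identification info from attribute or image-analysis information.
    """
    matches = []
    for image in target_images:
        tags = image.get("tags")
        if tags is None:
            tags = generate_tags(image)   # lazily derive identification info
            image["tags"] = tags          # cache for later searches
        if set(query_tags) <= set(tags):
            matches.append(image)
    return matches
```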

When there are multiple pieces of identification information, the device 100 may search for the second image by using at least some of the pieces of identification information, in response to a user input.

In operation S2190, the device 100 may provide an effect to the found second image.

Although in FIG. 21 the region of interest of a first image stored in an external device is used to provide an effect to a second image stored in the device 100 or the external device 200, exemplary embodiments are not limited thereto. The region of interest of a first image stored in the device 100 may be used to provide an effect to a second image stored in the external device 200.

The device 100 may share an effect image with the external device 200. FIG. 22 is a flowchart of a method in which a device shares an effect image with an external device, according to an exemplary embodiment.

In operation S2210, the device 100 may display an effect image. For example, the device 100 may generate an effect image by providing an effect to a region of interest selected by the user on a first image, and display the effect image. The device 100 may also generate an effect image by providing an effect to a second image by using the region of interest of the first image.

In operation S2220, the device 100 may receive a user input requesting that the effect image be shared.

According to an exemplary embodiment, the user input requesting that the effect image be shared may vary. For example, the user input may be a key input, a voice input, a touch input, or a bending input, although exemplary embodiments are not limited thereto.

According to an exemplary embodiment, the device 100 may receive, via a user input, information about the external device 200 with which the effect image is to be shared. The external device 200 may be at least one selected from a cloud server connected to the device 100, an SNS server, another device of the user, a device of another user, and a wearable device, although exemplary embodiments are not limited thereto.

For example, the user may input account information of a cloud storage, SNS account information of the user, identification information (for example, a phone number or a MAC address) of a friend's device to which all the images included in a first folder are to be transmitted, and e-mail account information of the friend.

In operation S2230, the device 100 may share the effect image with the external device.

For example, the device 100 may transmit link information (for example, storage location information or a URL) of the effect image to the external device 200. The device 100 may transmit the effect image itself to the external device 200. According to an exemplary embodiment, the device 100 may upload the effect image to a specific server and authorize the external device 200 to access that server.

Although in FIG. 22 the device 100 shares an effect image with the external device 200, exemplary embodiments are not limited thereto. The device 100 may share at least one effect folder with the external device 200.

FIG. 23 illustrates an example in which a device shares an effect image with an external device. Referring to 2300-1 of FIG. 23, the device 100 may generate and display an effect folder 2310 in response to a user input. The effect folder 2310 may store at least one effect image, or may store link information of effect images.

In this case, the device 100 may receive a user input that selects the effect folder 2310. For example, the device 100 may receive an input that touches the effect folder 2310 for at least a predetermined period of time (for example, two seconds). In response to the user input, the device 100 may provide a menu window 2320 including various items such as a folder search item, a bookmark addition item, and a transmission item 2322.

When the user selects the transmission item 2322 on the menu window 2320, the device 100 may provide a selection window 2330 via which a receiving device may be selected, as shown in 2300-2 of FIG. 23. The device 100 may receive a user input that selects a contact 2332 on the selection window 2330. The user may select a specific friend from the contacts. In this case, the device 100 may share the effect folder 2310 with the device 100 of the specific friend.

For example, the device 100 may transmit the effect images included in the effect folder 2310 to the device 100 of the specific friend. In other exemplary embodiments, the device 100 may transmit link information of the effect images included in the effect folder 2310 to the device 100 of the specific friend.

According to an exemplary embodiment, the device 100 may transmit the effect images (or link information of the effect images) included in the effect folder 2310 to the device 100 of the specific friend via an e-mail or a text message.

FIG. 24 is a schematic diagram of an image management system according to an exemplary embodiment.

As shown in FIG. 24, the image management system may include the device 100 and a cloud server 210. In certain exemplary embodiments, the cloud server may refer to a remote server. The image management system may be constructed from more or fewer components than those illustrated in FIG. 24.

The device 100 according to an exemplary embodiment may be implemented in various types. For example, the device 100 may be at least one of a desktop computer, a mobile phone, a smartphone, a laptop computer, a tablet personal computer (PC), an e-book terminal, a digital broadcast terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation device, an MP3 player, a digital camera 160, an Internet protocol television (IPTV), a digital television (DTV), and a consumer electronics (CE) device (for example, a refrigerator or an air conditioner including a display), although exemplary embodiments are not limited thereto. The device 100 may also be a device wearable by a user. For example, the device 100 may be at least one selected from a watch, glasses, a ring, a bracelet, and a necklace.

Since the device 100 is the same as the device 100 described above, a repeated description thereof is omitted here. For convenience of explanation, a case in which the device 100 is one of a first device through an N-th device will now be described.

The cloud server 210 may be connected to the device 100 and may thus communicate with the device 100. For example, the cloud server 210 may be connected to the device 100 via account information.

According to an exemplary embodiment, the cloud server 210 may transmit data to or receive data from the device 100. For example, the device 100 may upload at least one image to the cloud server 210. The device 100 may receive attribute information, image analysis information, identification information, and the like about images from the cloud server 210.

The cloud server 210 may include an intelligence engine, and may analyze images controlled by the device 100 via the intelligence engine. For example, the cloud server 210 may generate identification information from attribute information of an image, and may obtain image analysis information by performing image processing on the image. The cloud server 210 may analyze event information generated by the device 100 and infer the state of the user, the circumstances of the device 100, and the like. Like the device 100 described above, the cloud server 210 may generate at least one selected from an effect image and an effect folder in response to a user input.

It has been described above that an effect image is an image obtained by providing an effect to an object or background of a single image, with the effect provided by using the image data of the region of interest. In other words, a halo effect, a blur effect, and the like adjust the pixel values of the region of interest, while a size effect applies the pixel values of the region of interest to a relatively wide area or a relatively narrow area. A depth effect uses the pixel values of the region of interest to generate three-dimensional (3D) images (for example, a left-eye image and a right-eye image).

During effect provision, image data of another image may be used. In other words, the device 100 may provide an effect to a first image by using image data of a second image. Providing an effect to a region of interest of one image by using image data of another image is hereinafter referred to as providing an effect by combining a plurality of images with each other. FIG. 25 is a flowchart of a method of providing an effect image by combining a plurality of images with each other, according to an exemplary embodiment.

In operation S2510, the device 100 may select a region of interest from a first image. For example, as described above, the device 100 may display a still image or a moving picture frame stored in the device 100 or an external device as the first image. Alternatively, the device 100 may display a live view image captured by the device 100 or an external device as the first image. In response to a user input, the device 100 may select an object or background of the first image as the region of interest.

In operation S2520, the device 100 may obtain identification information of the selected region of interest. For example, when the first image is stored, identification information respectively describing the object and the background included in the first image may be matched with the image and stored. According to an exemplary embodiment, the identification information respectively corresponding to the object and the background may be stored in the form of metadata. In this case, the device 100 may determine that identification information has been defined for the selected region of interest, and may obtain the identification information by reading the pre-stored identification information.

When identification information of the selected region of interest has not been defined, the device 100 may obtain the identification information by generating it. For example, the device 100 may generate the identification information based on attribute information of the first image stored in the form of metadata, or by using image analysis information obtained by performing image processing on the first image.

In operation S2530, the device 100 may search target images for a second image having the same identification information as the region of interest. The target images may be at least one image in which the second image is to be searched for, and may be still images or moving pictures stored in the device 100 or an external device.

When the region of interest has one piece of identification information, the device 100 may search the target images for a second image having that identification information. On the other hand, when the region of interest has multiple pieces of identification information, the device 100 may search for images having all of the multiple pieces of identification information. However, exemplary embodiments are not limited thereto. The device 100 may search for images having only some of the multiple pieces of identification information. The device 100 may provide the user with an identification information list and receive a user input that selects at least one piece of identification information from the identification information list. According to an exemplary embodiment, the device 100 may receive a user input that selects all of the identification information, or a user input that selects some of the identification information.
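The all-versus-some matching described above reduces to set operations; the sketch below assumes identification information is represented as sets of tag strings.

```python
def matches(image_tags, selected_tags, require_all=True):
    """Decide whether an image's identification info satisfies the query.

    With `require_all` the image must carry every selected tag; otherwise
    sharing any one tag is enough (the "some identification information" case).
    """
    image_tags, selected_tags = set(image_tags), set(selected_tags)
    if require_all:
        return selected_tags <= image_tags
    return bool(selected_tags & image_tags)
```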

According to an exemplary embodiment, the user input for selecting identification information may vary. For example, the user input may be at least one selected from a key input, a touch input, a motion input, a bending input, a voice input, and a multiple input.

In operation S2540, the device 100 may select one of the at least one found second image. The device 100 may display the found at least one second image; the second images may be displayed in the form of thumbnails. When multiple second images are found, the device 100 may display them in the search order or in the order in which they were generated. The order in which the second images are displayed may be set according to a user input. The device 100 may select one of the at least one found second image in response to a user input made while the at least one second image is displayed. When a single second image is found, the device 100 may select that second image regardless of a user input.

In operation S2550, the device 100 generates an effect image by combining the first image with the second image. The device 100 may generate the effect image by separating the region of interest from the first image and the second image, and combining the partial image of the second image corresponding to the region of interest with the area in which the region of interest of the first image is located. Alternatively, the device 100 may generate the effect image by replacing the image data of the region of interest with the image data of the partial image of the second image corresponding to the region of interest.
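The pixel-replacement variant can be sketched as follows; images are represented here as plain 2-D lists of pixel values and the region of interest as a boolean mask, which is a simplification of the outline-based segmentation the text describes.

```python
def combine(first_image, second_image, roi_mask):
    """Replace the region-of-interest pixels of the first image with the
    corresponding pixels of the second image (equal dimensions assumed)."""
    return [
        [s if m else f for f, s, m in zip(f_row, s_row, m_row)]
        for f_row, s_row, m_row in zip(first_image, second_image, roi_mask)
    ]
```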

圖26A至圖26C說明根據示例性實施例,利用多個影像向物體提供效果的實例。如圖26A的2600-1所示,在裝置100的模式被設定為效果模式時,裝置100可顯示第一影像2610。舉例而言,第一影像2610可為靜止影像、移動圖片訊框、或即時取景影像。裝置100可自第一影像2610選擇感興趣區域。舉例而言,使用者可觸摸第一影像2610的上面顯示有物體2612的部分區,且裝置100可將包括所觸摸的區並藉由基於所觸摸的區執行影像處理而獲得的物體2612選擇為感興趣區域。 26A-26C illustrate an example of providing an effect to an object using a plurality of images, according to an exemplary embodiment. As shown in 2600-1 of FIG. 26A, when the mode of the device 100 is set to the effect mode, the device 100 can display the first image 2610. For example, the first image 2610 can be a still image, a moving picture frame, or a live view image. Device 100 can select a region of interest from first image 2610. For example, the user can touch a partial region of the first image 2610 on which the object 2612 is displayed, and the device 100 can select the object 2612 including the touched region and obtained by performing image processing based on the touched region as Area of interest.

裝置100可顯示包括可被提供至物體的效果的效果清單2620,如圖26A的2600-2所示。效果清單2620可與所顯示的第一影像2610交疊。使用者可執行自效果清單2620選擇一個效果的輸入。舉例而言,使用者可藉由觸摸項「使用另一影像」而執 行輸入。 The device 100 can display a list of effects 2620 including effects that can be provided to the object, as shown at 2600-2 of Figure 26A. The effect list 2620 can overlap the displayed first image 2610. The user can perform an input of an effect from the effect list 2620. For example, the user can execute by touching the item "Use another image" Line input.

裝置100可取得用於辨識作為感興趣區域的物體2612的辨識資訊。當辨識資訊被預先儲存時,裝置100可藉由讀取預先儲存的辨識資訊而取得辨識資訊。當辨識資訊未被預先儲存時,裝置100可藉由利用選自物體2612的屬性資訊及影像分析資訊中的至少一者產生辨識資訊而取得辨識資訊。如圖26B的2600-3所示,裝置100可顯示辨識資訊清單2630。辨識資訊清單2630亦可與所顯示的第一影像2610交疊。藉由自辨識資訊清單2630選擇至少某些辨識資訊的使用者輸入,裝置100可將所述至少某些辨識資訊確定為用於搜尋的辨識資訊。 The device 100 can obtain identification information for identifying an object 2612 as a region of interest. When the identification information is stored in advance, the device 100 can obtain the identification information by reading the pre-stored identification information. When the identification information is not stored in advance, the device 100 may obtain the identification information by generating the identification information by using at least one of the attribute information selected from the object 2612 and the image analysis information. As shown at 2600-3 of Figure 26B, device 100 can display an identification information list 2630. The identification information list 2630 can also overlap the displayed first image 2610. By selecting at least some user input of the identification information from the identification information list 2630, the device 100 may determine the at least some identification information as identification information for searching.

As shown in 2600-4 of FIG. 26B, the device 100 may display a target image list 2640 representing information about target images. In response to a user input of selecting at least one image from the target image list 2640, the device 100 may determine the target image.

The device 100 may search the target image for a second image having the selected identification information. When the target image is a still image, the device 100 may search for the second image in units of still images. When the target image is a moving picture, the device 100 may search for the second image in units of moving-picture frames.
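The search just described can be sketched as follows. This is an illustrative assumption about the data layout, not the patented implementation: each target image is assumed to carry a set of identification-information tags, still images are matched as whole units, and moving pictures are matched frame by frame; the function and field names are hypothetical.

```python
# Hypothetical sketch of the search in FIG. 26B: a unit matches when its
# identification information contains all of the identification information
# selected for the search.

def find_second_images(targets, search_ids):
    """Return units whose identification information includes search_ids."""
    matches = []
    for target in targets:
        if target["type"] == "still":
            units = [target]            # search in units of still images
        else:                           # "moving": search per frame
            units = target["frames"]
        for unit in units:
            if set(search_ids) <= set(unit["ids"]):
                matches.append(unit)
    return matches

targets = [
    {"type": "still", "ids": ["person", "beach"]},
    {"type": "moving", "frames": [
        {"ids": ["person", "dog"]},
        {"ids": ["person", "beach", "dog"]},
    ]},
]
print(len(find_second_images(targets, ["person", "beach"])))  # 2 matching units
```

The still image and the second frame of the moving picture both carry the tags "person" and "beach", so two units are found.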

The found second images 2650 may be displayed as shown in 2600-5 of FIG. 26C. The first image 2610 and the second images 2650 may be displayed on separate areas, respectively. When a plurality of second images 2650 are found, the device 100 may arrange the plurality of second images 2650 sequentially, for example, in the search order.

In response to a user input of selecting a second image 2660 from among the plurality of second images 2650, the device 100 may display an effect image 2670 obtained by combining the first image 2610 with the selected second image 2660, as shown in 2600-6 of FIG. 26C. The device 100 may generate the effect image 2670 by replacing the selected object 2612 of the first image 2610 with an object 2662 of the selected second image 2660.

The size and shape of the object 2662 of the second image 2660 may be different from those of the object 2612 of the first image 2610. The device 100 may use a restoration technique when combining the first image 2610 with the second image 2660. FIG. 27 is a reference diagram for explaining a method of combining a plurality of images, according to an exemplary embodiment. The device 100 may obtain an image 2714 (hereinafter referred to as a first partial image) by excluding an object 2712, which is the region of interest, from a first image 2710. The device 100 may separate the first partial image 2714 from the first image 2710 by using, for example, image edge characteristics. The device 100 may also separate an object 2722 (hereinafter referred to as a second partial image) corresponding to the region of interest from a second image 2720 by using, for example, image edge characteristics.

The device 100 combines the first partial image 2714 with the second partial image 2722 such that the first partial image 2714 and the second partial image 2722 overlap each other as little as possible. The device 100 may generate an effect image 2730 by deleting a part of the first partial image 2714 from an area 2732 where the first partial image 2714 and the second partial image 2722 overlap each other, and by restoring, by using the first partial image 2714, an area 2734 on which neither the first partial image 2714 nor the second partial image 2722 is displayed.
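The combination in FIG. 27 can be sketched with mask arithmetic. This is a minimal stand-in under stated assumptions, not the patented restoration technique: each partial image is assumed to be an array plus a boolean mask of valid pixels, the second partial image (the object) takes precedence in the overlap area, and the uncovered area is "restored" here by crudely propagating values from the first partial image along each row.

```python
import numpy as np

# Minimal sketch: paste the object (second partial image) over the background
# (first partial image) and fill pixels covered by neither image.

def combine(first_img, first_mask, second_img, second_mask):
    out = np.zeros_like(first_img)
    out[first_mask] = first_img[first_mask]
    out[second_mask] = second_img[second_mask]  # object wins in the overlap area
    hole = ~(first_mask | second_mask)          # area shown by neither image
    # crude restoration: propagate the nearest value from the left, row-wise
    for y, x in zip(*np.nonzero(hole)):
        out[y, x] = out[y, x - 1] if x > 0 else 0
    return out

first_img = np.array([[1, 1, 0, 1],
                      [1, 1, 0, 1]])
first_mask = np.array([[True, True, False, True],
                       [True, True, False, True]])   # object 2712 was removed here
second_img = np.array([[0, 0, 0, 5],
                       [0, 0, 0, 5]])
second_mask = np.array([[False, False, False, True],
                        [False, False, False, True]])  # pasted object 2722
print(combine(first_img, first_mask, second_img, second_mask))
```

In the example, the column left empty by the removed object is filled from the surviving background, and the pasted object overwrites the background where the two masks overlap.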

FIGS. 28A through 28C illustrate an example of providing an effect to a background by using a plurality of images, according to an exemplary embodiment. As shown in 2800-1 of FIG. 28A, when the mode of the device 100 is set to the effect mode, the device 100 may display a first image 2810. For example, the first image 2810 may be a still image, a moving-picture frame, or a live view image. The device 100 may select a region of interest from the first image 2810. For example, the user may perform an input of touching a partial area of the first image 2810 on which a background 2814 is displayed, and the device 100 may select the background 2814 as the region of interest by performing image processing on the touched area in response to the user input.

The device 100 may display an effect list 2820 including effects that may be provided to the background 2814, as shown in 2800-2 of FIG. 28A. The effect list 2820 may overlap the displayed first image 2810. The user may perform an input of selecting one effect from the effect list 2820. For example, the user may perform the input by touching the item "Use another image".

The device 100 may obtain identification information for identifying the background 2814, which is the region of interest. When the identification information has been stored in advance, the device 100 may obtain the identification information by reading the pre-stored identification information. When the identification information has not been stored in advance, the device 100 may obtain the identification information by generating it by using at least one selected from attribute information and image analysis information of the background 2814. As shown in 2800-3 of FIG. 28B, the device 100 may display an identification information list 2830. The identification information list 2830 may also overlap the displayed first image 2810. In response to a user input of selecting at least some identification information from the identification information list 2830, the device 100 may determine the selected identification information as the identification information to be used for a search.

As shown in 2800-4 of FIG. 28B, the device 100 may display a target image list 2840 representing information about target images. In response to a user input of selecting at least one image from the target image list 2840, the device 100 may determine the target image.

The device 100 may search the target image for a second image having the identification information for the search. When the target image is a still image, the device 100 may search for the second image in units of still images. When the target image is a moving picture, the device 100 may search for the second image in units of moving-picture frames.

The found second images 2850 may be displayed as shown in 2800-5 of FIG. 28C. The first image 2810 and the second images 2850 may be displayed on separate areas, respectively. When a plurality of second images 2850 are found, the device 100 may arrange the plurality of second images 2850 sequentially, for example, in the search order.

In response to a user input of selecting a second image 2860 from among the plurality of second images 2850, the device 100 may display an effect image 2870 obtained by combining the first image 2810 with the selected second image 2860, as shown in 2800-6 of FIG. 28C. The device 100 may generate the effect image 2870 by combining a background 2864 of the second image 2860 with the region of interest of the first image 2810.

The size and shape of the background 2864 of the second image 2860 may be slightly different from those of the background 2814 of the first image 2810. The device 100 may use a restoration technique when combining the first image 2810 with the second image 2860. FIG. 29 is a reference diagram for explaining a method of combining a plurality of images, according to another exemplary embodiment. The device 100 may obtain an image 2912 (hereinafter referred to as a third partial image) by excluding the background, which is the region of interest, from a first image 2910. The device 100 may separate the third partial image 2912 from the first image 2910 by using, for example, image edge characteristics. The device 100 may also separate a partial image 2924 (hereinafter referred to as a fourth partial image) corresponding to the region of interest from a second image 2920 by using, for example, image edge characteristics.

The device 100 may generate a background image 2930 by filling an area 2932 of the fourth partial image 2924 in which no pixel information exists with predetermined pixel values. When generating the background image 2930, the device 100 may determine the pixel values of the area 2932 having no pixel information by using a mirroring technique based on the pixel values of the areas around the area 2932. The device 100 may generate an effect image 2940 by combining the third partial image 2912 with the background image 2930. The device 100 may combine the third partial image 2912 with the background image 2930 by using position information of the third partial image 2912 within the first image 2910. A part of the background image 2930 that overlaps the third partial image 2912 may be deleted.
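The mirroring technique mentioned above can be sketched for a simple case. This is an illustrative assumption, not the patented algorithm: the area without pixel information is assumed to be a vertical band at the right edge, and each missing column is assigned the value reflected across the band's left edge.

```python
import numpy as np

# Sketch of a mirror fill: columns [hole_start, hole_end) carry no pixel
# information and are filled by reflecting the valid columns just to their
# left across the edge at hole_start.

def mirror_fill(img, hole_start, hole_end):
    out = img.copy()
    for x in range(hole_start, hole_end):
        out[:, x] = out[:, 2 * hole_start - x - 1]  # reflect across the edge
    return out

band = np.array([[10, 20, 0, 0],
                 [30, 40, 0, 0]])   # columns 2 and 3 have no pixel information
print(mirror_fill(band, 2, 4))
```

The two missing columns receive the mirrored values of columns 1 and 0, so the filled band reads 10, 20, 20, 10 in the first row; `numpy.pad` with `mode='reflect'` offers a library equivalent for rectangular holes at array edges.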

The effect image 2940 may be provided as a live view image. A live view image may refer to an image that is captured by a camera and displayed on a device before a storage command is received. The camera may be a camera built into the device 100, or may be an external device. As described above, the device 100 may select an object or a background of the live view image as the region of interest in response to a user input, and may display the selected region of interest distinctly from the areas that have not been selected. For example, the device 100 may provide a halo effect, a blur effect, a size effect, or a depth effect to the region of interest. When the device 100 receives a storage command, the device 100 may store the live view image to which the effect has been provided. The device 100 may store the live view image to which the effect has been provided as a still image or a moving picture, according to a photographing mode of the device 100.

The device 100 may extract a plurality of images from a live view image and provide an effect image. FIG. 30 is a flowchart of a method of providing an effect image by using a live view image, according to an exemplary embodiment. In operation S3010, the device 100 may display a live view image. The live view image is an image that is captured by a camera and displayed before a storage command is received. The camera may be a camera built into the device 100, or may be an external device. For convenience of explanation, an image generated after a storage input is received will now be referred to as a captured image.

In operation S3020, the device 100 selects a region of interest. The user may input a command for selecting a partial area on the live view image, and the device 100 may determine, as the region of interest, an object or a background that includes the selected partial area.

The user input of selecting the partial area may vary. For example, the user input may be at least one selected from a key input, a touch input, a motion input, a bending input, a voice input, and a multiple input.

In operations S3030 and S3040, the device 100 may generate a temporary image from the live view image. The temporary image is an image that includes the region of interest and is temporarily generated before a user input of storing an image is received.

The temporary image may be a partial image of the region of interest. For example, the temporary image may be a partial image including only the object or background selected as the region of interest. Alternatively, the temporary image may be a frame image of the live view image that includes the region of interest.

The temporary image may be temporarily generated and stored in a buffer, or may be temporarily generated and displayed on a display area of the device 100.

The temporary image may be generated between the time point when the region of interest is selected and the time point when the storage input is received. For example, one temporary image may be generated at the time point when the region of interest is selected, and another temporary image may be generated at the time point when the storage input is received, so that a total of two temporary images may be generated. Alternatively, temporary images may be generated at predetermined time intervals (for example, every 3 seconds) after the time point when the region of interest is selected and before the storage input is received. Alternatively, after the region of interest is selected and before the storage input is received, a temporary image may be generated whenever a change in the region of interest is equal to or greater than a reference value.

The temporary image generated at the time point when the region of interest is selected is referred to as an initial temporary image, and the temporary image generated at the time point when the storage input is received is referred to as a final temporary image. In response to the storage input, the device 100 may obtain a plurality of temporary images including the initial temporary image and the final temporary image. Among the plurality of temporary images, a temporary image consisting of only the region of interest may be an image of interest. The device 100 may generate the image of interest from one of the plurality of temporary images, including the initial temporary image and the final temporary image.

The user input for storing an image may vary. For example, the user input may be at least one selected from a key input, a touch input, a motion input, a bending input, a voice input, and a multiple input.

In operation S3050, when the device 100 receives the user input for storage, the device 100 may generate an effect image by using the plurality of temporary images. In other words, the device 100 may generate the effect image by reading the plurality of temporary images temporarily stored in the buffer and combining the read temporary images with one another. The device 100 may store the generated effect image, and the temporary images stored in the buffer may be deleted. The device 100 may generate the effect image by combining the object or background of the initial temporary image, which is the region of interest, with the final temporary image. Alternatively, the device 100 may generate the effect image by combining a partial image corresponding to the region of interest among the plurality of temporary images with the final temporary image.
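The buffer flow of operations S3030 through S3050 can be sketched as follows. The class, its methods, and the list-of-lists image representation are hypothetical stand-ins for illustration: frames accumulate in a buffer between the selection of the region of interest and the storage input, and on storage the region of interest from the initial temporary image is pasted over the final temporary image.

```python
# Illustrative sketch: on_frame buffers a temporary image together with a
# boolean mask marking the region of interest; on_store combines the initial
# temporary image's ROI with the final temporary image and clears the buffer.

class EffectCamera:
    def __init__(self):
        self.buffer = []                      # temporary images

    def on_frame(self, frame, roi_mask):
        self.buffer.append((frame, roi_mask))

    def on_store(self):
        initial_frame, roi = self.buffer[0]   # initial temporary image
        final_frame, _ = self.buffer[-1]      # final temporary image
        effect = [row[:] for row in final_frame]
        for y, row in enumerate(roi):
            for x, inside in enumerate(row):
                if inside:                    # copy the ROI pixels over
                    effect[y][x] = initial_frame[y][x]
        self.buffer.clear()                   # temporary images are deleted
        return effect

cam = EffectCamera()
cam.on_frame([[1, 1], [1, 1]], [[True, False], [False, False]])   # ROI selected
cam.on_frame([[9, 9], [9, 9]], [[False, False], [False, False]])  # storage input
print(cam.on_store())  # [[1, 9], [9, 9]]
```

The top-left pixel belongs to the region of interest of the initial temporary image, so it survives into the effect image while the rest comes from the final temporary image.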

FIG. 31 is a reference diagram for explaining a method of generating an effect image from a live view image, according to an exemplary embodiment. As shown in 3100-1 of FIG. 31, when the mode of the device 100 is set to a photographing mode, the device 100 may display a live view image 3110. The device 100 may select a region of interest from the live view image 3110. For example, the user may touch a partial area of the live view image 3110 on which an object 3112 is displayed. Then, the device 100 may select the object 3112 as the region of interest. When the object 3112 is selected as the region of interest, the device 100 may generate an initial temporary image including the object 3112. The device 100 may generate, from the initial temporary image, an image of interest 3120 consisting of only the region of interest. As shown in 3100-2 of FIG. 31, the device 100 may display another live view image 3130. In response to a user input for storage, as shown in 3100-3 of FIG. 31, the device 100 may generate an image 3140 by combining the image of interest 3120 with the live view image 3130, which is the final temporary image, and may store the image 3140. The position of the image of interest 3120 may be fixed to the position on the display area where the initial temporary image was generated.

When the temporary image is fixed to the display area, the device 100 may generate images of various shapes. FIG. 32 is a reference diagram for explaining a method of generating an effect image from a live view image, according to an exemplary embodiment. As shown in 3200-1 of FIG. 32, when the mode of the device 100 is set to the photographing mode, the device 100 may display a live view image 3210. The device 100 may select a region of interest on the live view image 3210. For example, the user may touch a partial area of the live view image 3210 on which a first object 3212 is displayed. Then, the device 100 may select the first object 3212 as the region of interest. When the first object 3212 is selected as the region of interest, the device 100 may generate an initial temporary image including the first object 3212, and may generate an image of interest 3220 from the initial temporary image. The image of interest 3220 may be displayed on the device 100 so as to overlap the live view image 3210.

Since the position of the image of interest 3220 is fixed to the position where the initial temporary image was generated, the image of interest 3220 displayed on the display area may remain fixed even when the photographing angle or the position of the camera 160 changes. When the user rotates the camera 160 by 90 degrees, as shown in 3200-2 of FIG. 32, the image of interest 3220 fixed to the display area is also rotated by 90 degrees. However, the live view image 3230 does not rotate. In response to a storage command, as shown in 3200-3 of FIG. 32, the device 100 may generate an effect image 3240 by combining the image of interest 3220 with the live view image 3230, which is the final temporary image, and may store the effect image 3240. The final temporary image 3230 is a temporary image generated after the storage input is received, and may also include the first object 3212. The device 100 may generate the effect image 3240 by deleting the first object 3212 included in the final temporary image 3230 and combining the image of interest 3220 with the final temporary image 3230 from which the first object 3212 has been deleted. An area of the effect image 3240 having no pixel information may be restored by an image restoration technique.

Although the image of interest has been described above as being fixed to the display area, exemplary embodiments are not limited thereto. The position of the image of interest may be changed according to a user input. For example, while the image of interest is displayed overlapping the live view image, the user may perform an operation of touching a partial area on which the image of interest is displayed and then dragging the partial area. Then, the device 100 may change the position of the image of interest to the position where the dragging ends, according to the user input. The device 100 may also change, for example, the size of the image of interest according to a user input.

The position of the image of interest may be changed to correspond to the position of the region of interest of the live view image. FIG. 33 is a reference diagram for explaining a method of generating an effect image from a live view image, according to another exemplary embodiment. As shown in 3300-1 of FIG. 33, when the mode of the device 100 is set to the photographing mode, the device 100 may display a live view image 3310. The device 100 may select a region of interest on the live view image 3310. For example, the user may touch a partial area of the live view image 3310 on which a first object 3312 is displayed. Then, the device 100 may select the first object 3312 as the region of interest. When the first object 3312 is selected as the region of interest, the device 100 may generate an image of interest 3320 including the first object 3312.

As shown in 3300-2 of FIG. 33, the device 100 may display the image of interest 3320 at a predetermined position on the display area. In other words, the device 100 may display the image of interest 3320 not at the position where the image of interest 3320 was generated, but at a position predetermined by the device 100. The image of interest 3320 displayed on the device 100 may overlap a live view image 3330. The live view image 3330 shown in 3300-2 of FIG. 33 may be different from the live view image 3310 shown in 3300-1 of FIG. 33.

As shown in 3300-3 of FIG. 33, the device 100 may generate an effect image 3340 in response to a storage input. The device 100 may generate the effect image 3340 by moving the image of interest 3320 to a first object 3332 corresponding to the region of interest of the live view image 3330, which is the final temporary image. In FIG. 33, when the storage input is received, the image of interest is moved to the position of the first object of the final temporary image and is combined with the final temporary image. However, exemplary embodiments are not limited thereto. While displaying the live view image, the device 100 may move the image of interest to the area corresponding to the image of interest (that is, the area on which the first object is displayed) and display the live view image in real time.

It has been described above that an effect image is generated by using the initial temporary image, generated when the region of interest is selected on the live view image, and the final temporary image. However, exemplary embodiments are not limited thereto. The device 100 may generate at least one temporary image other than the initial temporary image and the final temporary image, and may generate an effect image by using the at least one temporary image. FIG. 34 is a flowchart of a method of generating an effect image from a live view image, according to another exemplary embodiment.

In operation S3410, the device 100 may select a region of interest on a live view image. When the mode of the device 100 is set to the photographing mode, the device 100 may display the live view image. The user may touch a partial area of the live view image on which an object or a background is displayed. Then, the device 100 may select, as the region of interest, the object or background displayed on an area that includes the touched partial area.

In operations S3420 and S3430, when a change in the region of interest is equal to or greater than a reference value, the device 100 may generate a temporary image.

The temporary image may be a screen image displayed as the live view image, or may be an image of interest consisting of only the region of interest. When the region of interest is selected, the device 100 generates an initial temporary image. The device 100 may calculate a difference between the pixel values of the region of interest in a previously generated temporary image and the pixel values of the region of interest included in the current live view image. If the calculated difference is equal to or greater than the reference value, the device 100 may generate the live view image as a temporary image.
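The comparison against the reference value can be sketched as follows. The choice of metric (mean absolute pixel difference over the region of interest) and the threshold are illustrative assumptions; the patent only requires that some measure of the difference be compared against a reference value.

```python
import numpy as np

# Sketch of the trigger in operations S3420-S3430: a new temporary image is
# generated only when the region of interest has changed, relative to the last
# temporary image, by at least the reference value.

def roi_changed(prev_roi, curr_roi, reference_value):
    diff = np.mean(np.abs(curr_roi.astype(int) - prev_roi.astype(int)))
    return bool(diff >= reference_value)

prev = np.full((4, 4), 100)
curr = np.full((4, 4), 130)          # the object brightened by 30 levels
print(roi_changed(prev, curr, 25))   # True
print(roi_changed(prev, prev, 25))   # False
```

A movement or size change of the object would raise the same metric, since displaced edges produce large per-pixel differences inside the region of interest.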

The change in the region of interest may occur for the following reasons: a movement of the object or background corresponding to the region of interest, a change in the size of the object or background, or a change in the pixel values (that is, the amount of light) of the object or background.

The device 100 may generate temporary images until the storage input is received. In other words, until the storage input is received, the device 100 may store, as temporary images, images in which the change in the region of interest is equal to or greater than the reference value. The temporary images may include the initial temporary image and the final temporary image.

When the device 100 receives the storage input in operation S3440, the device 100 may generate an effect image in operation S3450. The device 100 may generate the effect image by combining the plurality of temporary images with one another. Alternatively, the device 100 may generate the effect image in response to a user input of selecting one of the plurality of temporary images. Strictly speaking, an image corresponding to the user input of selecting one of the plurality of temporary images may not be regarded as an effect image. However, for convenience of explanation, it is referred to here as an effect image, because the device 100 may display the plurality of temporary images so that the user may select one of them, and the selected temporary image may not be stored as, for example, a still image until the user input of selecting one of the plurality of temporary images is received.

FIG. 35 is a reference diagram for explaining a method of generating an effect image from a live view image, according to an exemplary embodiment. As shown in 3500-1 of FIG. 35, when the mode of the device 100 is set to a still image photographing mode, the device 100 may display a live view image 3510. The device 100 may select a region of interest on the live view image 3510. For example, the user may touch a partial area of the live view image 3510 on which a first object 3512 is displayed. Then, the device 100 may select the first object 3512 as the region of interest. When the first object 3512 is selected as the region of interest, the device 100 may generate a temporary image including the first object 3512.

The device 100 may generate temporary images 3520 at predetermined time intervals until the device 100 receives a storage input. Alternatively, the device 100 may generate a temporary image 3520 whenever a change in the first object 3512 is equal to or greater than the reference value. The temporary image 3520 may be a screen image displayed as the live view image 3510, or may be an image of interest consisting of only the first object 3512. In FIG. 35, the temporary image 3520 is a screen image displayed as the live view image 3510.

As shown in 3500-2 of FIG. 35, the device 100 may generate a final temporary image 3530 and other temporary images 3520 in response to the storage input. The final temporary image 3530 and the temporary images 3520 may be displayed on separate regions. When a plurality of temporary images 3520 are found, the device 100 may arrange the plurality of temporary images 3520 sequentially in the order in which they were generated.

In response to a user input selecting a temporary image 3540 from among the plurality of temporary images 3520, the device 100 may display an effect image 3550 obtained by combining the final temporary image 3530 with the region of interest of the selected temporary image 3540, as shown in 3500-3 of FIG. 35. The device 100 may generate the effect image 3550 by replacing the first object 3532 of the final temporary image 3530 with the first object 3542 of the selected temporary image 3540. A method of generating an effect image by using an object as the region of interest in a live view image has been described above. An object was used as the region of interest for convenience of explanation, and the same method is equally applicable when a background is used as the region of interest.
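The replacement-based combination described above may be sketched as a simple mask composite. As an illustrative assumption not taken from the disclosure, images are equal-length flat pixel lists and `object_mask` marks the positions of the first object with `True`.

```python
def composite_effect_image(final_img, selected_img, object_mask):
    # Take object pixels from the selected temporary image and all
    # remaining pixels from the final temporary image.
    return [sel if is_obj else base
            for base, sel, is_obj in zip(final_img, selected_img, object_mask)]
```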

The device 100 may generate a moving picture in accordance with changes in a region of interest. FIG. 36 is a flowchart of a method of generating a moving picture from a live view image, according to an exemplary embodiment.

In operation S3610, the device 100 may select a region of interest on a live view image. When the mode of the device 100 is set to a moving picture generation mode, the device 100 may display the live view image. The user may touch a partial region of the live view image on which an object or a background is displayed. The device 100 may then select, as the region of interest, the object or background displayed on the region including the touched partial region.

In operation S3620, the device 100 may receive a user input for generating a moving picture. The user input may vary. For example, the user input may be at least one selected from a key input, a touch input, a motion input, a bending input, a voice input, and a multiple input.

In operation S3630, the device 100 determines whether the change in the region of interest is equal to or greater than a reference value. If the change in the region of interest is equal to or greater than the reference value, the device 100 may generate a moving picture frame in operation S3640. The device 100 generates, as a moving picture frame, the live view image produced at the moment the user input for generating the moving picture is received. Whenever the change in the region of interest is equal to or greater than the reference value, the device 100 may generate a moving picture frame. A moving picture frame may be a screen image displayed as the live view image, or may be information representing the change from the previous moving picture frame. Moving picture frames may be generated until a user input for ending the generation of the moving picture is received in operation S3660.

When the region of interest is an object, the change in the region of interest may be, for example, a change in the movement of the object, a change in the size of the object, or a change in the pixel values representing the object. When the region of interest is a background, the change in the region of interest may be, for example, a change in the background or a change in the pixel values representing the background.
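The object-change measures mentioned above could, purely for illustration, be quantified as follows. The centroid-shift and pixel-count measures, the object representation as a list of (x, y) pixel coordinates, and the `max` combination are all assumptions of this sketch, not definitions from the disclosure.

```python
def centroid(points):
    # Average position of the pixels belonging to the object.
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def object_change(prev_points, cur_points):
    # Movement: how far the object's centroid shifted between frames.
    (px, py), (cx, cy) = centroid(prev_points), centroid(cur_points)
    movement = abs(cx - px) + abs(cy - py)
    # Size change: difference in the number of pixels the object covers.
    size_change = abs(len(cur_points) - len(prev_points))
    # Report the larger of the two as the overall change score.
    return max(movement, size_change)
```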

In operation S3660, in response to a user input for ending the generation of the moving picture, the device 100 may generate a moving picture file from the moving picture frames.

The device 100 may reproduce a moving picture by using only the moving picture frames in which the change in the region of interest is equal to or greater than a reference value. FIG. 37 is a flowchart of a method of reproducing a moving picture, according to an exemplary embodiment.

In operation S3710, the device 100 may select a region of interest on a moving picture frame. When the mode of the device 100 is set to a moving picture reproduction mode, the device 100 may display a moving picture. The user may input a command to pause the moving picture. In response to the user input, the device 100 may display a moving picture frame, which is a still image of the moving picture. The user may touch a partial region of the moving picture frame on which an object or a background is displayed. The device 100 may then select, as the region of interest, the object or background displayed on the region including the touched partial region.

In operation S3720, the device 100 determines whether the change in the region of interest is equal to or greater than a reference value. When the change in the region of interest is equal to or greater than the reference value, the device 100 may display a moving picture frame in operation S3730. The device 100 may compare the displayed moving picture frame (hereinafter referred to as the current frame) with the moving picture frame to be reproduced after the current frame (hereinafter referred to as the first next frame). The device 100 may calculate the change between the regions of interest of the two frames. When the calculated change is equal to or greater than the reference value, the device 100 may reproduce and display the first next frame.

On the other hand, when the calculated change is less than the reference value, the device 100 does not display the first next frame. The device 100 may then calculate the change between the region of interest of the current frame and the region of interest of the moving picture frame to be reproduced after the first next frame (hereinafter referred to as the second next frame). When the calculated change is equal to or greater than the reference value, the device 100 may reproduce and display the second next frame. On the other hand, when the calculated change is less than the reference value, the device 100 does not display the second next frame. Operations S3720 and S3730 may be repeated until the reproduction of the moving picture ends in operation S3740. In other words, the device 100 may repeatedly perform operations S3720 and S3730 until a user input for ending the reproduction of the moving picture is received or the reproduction of the moving picture is completed in operation S3740. A method of reproducing a moving picture has been described above with reference to FIG. 37. However, the exemplary embodiments are not limited thereto. The method is also applicable to reproducing still pictures by a slide show method.
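The frame-skipping reproduction described above can be sketched as a single loop: each candidate frame is compared against the last frame actually displayed, and frames whose region-of-interest change falls below the reference value are skipped. Here `change_between` is an assumed callable supplied by the caller, not an API from the disclosure.

```python
def frames_to_display(frames, change_between, threshold):
    # Always show the first frame; afterwards show a frame only when its
    # region of interest differs from the last displayed frame by at
    # least `threshold`.
    shown = [frames[0]]
    for frame in frames[1:]:
        if change_between(shown[-1], frame) >= threshold:
            shown.append(frame)
    return shown
```

With scalar "frames" and an absolute-difference change measure, a run of nearly identical frames collapses to a single displayed frame.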

A method of generating, reproducing, and displaying an image by using a region of interest that is a partial region of the image has been described so far. The device 100 may also provide various menu images by using regions of interest. A menu image may include menu items for executing specific applications. A menu item may be an object, and the region of the menu image that is not a menu item may be defined as the background.

FIG. 38 is a reference diagram for explaining a method of displaying an effect on a menu image, according to an exemplary embodiment. First, as shown in 3800-1 of FIG. 38, when the mode of the device 100 is set to an effect mode for menu images, the device 100 may display a menu image 3810. The user may select a menu item 3812 on the menu image 3810 as the region of interest. When the menu item 3812 has been selected, as shown in 3800-2 of FIG. 38, the device 100 may display an effect list 3820 including effects applicable to the menu item 3812. When the user selects an effect item 3822 from the effect list 3820, as shown in 3800-3 of FIG. 38, the device 100 may display a menu item 3830 to which the effect has been applied.

An effect may be provided to a menu item according to the number of times the application corresponding to the menu item has been executed. FIG. 39 is a flowchart of a method of providing an effect to a menu item according to the number of times the application corresponding to the menu item is executed, according to an exemplary embodiment.

In operation S3910, the device 100 may determine the number of times the application corresponding to a menu item has been executed. The device 100 may determine the number of times the application corresponding to the menu item has been executed within a preset time period.

If the number of times the application corresponding to the menu item has been executed is equal to or greater than a first value in operation S3910, the device 100 may provide a positive effect to the menu item in operation S3930. A positive effect is an effect that enhances the distinctiveness of the menu item, and may be, for example, a halo effect, a size-increase effect, or a depth-reduction effect.

If the number of times the application corresponding to the menu item has been executed is less than the first value in operation S3910 but equal to or greater than a second value in operation S3920, the device 100 may provide no effect to the menu item. In other words, the device 100 may display the menu item in its preset state in operation S3940. The second value may be less than the first value.

If the number of times the application corresponding to the menu item has been executed is less than the first value in operation S3910 and is also less than the second value in operation S3920, the device 100 may provide a negative effect to the menu item in operation S3950. A negative effect is an effect that weakens the distinctiveness of the menu item, and may be, for example, a blur effect, a size-reduction effect, or a depth-increase effect. In some exemplary embodiments, if the number of times the application corresponding to the menu item has been executed is less than a third value, the device 100 may delete the menu item from the menu image.
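The threshold classification of operations S3910 through S3950 might be sketched as follows. The numeric values of `FIRST_VALUE`, `SECOND_VALUE`, and `THIRD_VALUE` are illustrative assumptions, as the disclosure does not fix them.

```python
FIRST_VALUE = 10   # illustrative thresholds; not specified in the disclosure
SECOND_VALUE = 3
THIRD_VALUE = 1

def effect_for_menu_item(run_count):
    # Classify a menu item by how often its application was executed.
    if run_count >= FIRST_VALUE:
        return "positive"    # e.g. halo, size increase, depth reduction
    if run_count >= SECOND_VALUE:
        return "default"     # displayed in its preset state
    if run_count >= THIRD_VALUE:
        return "negative"    # e.g. blur, size reduction, depth increase
    return "deleted"         # optionally removed from the menu image
```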

FIG. 40 illustrates an example of displaying a menu image in which effects have been provided to menu items according to the number of times the applications corresponding to the menu items have been executed, according to an exemplary embodiment. As shown in FIG. 40, the device 100 may display a menu image 4010. The device 100 may display a first menu item 4012 larger than the other menu items. This means that the first menu item 4012 has been executed more frequently than the other menu items, and that the user is more likely to select it again in the future. Since the first menu item 4012 is displayed enlarged and thus stands out, the user can find it more easily than the other menu items. The device 100 may display a second menu item 4014 smaller than the other menu items. This means that the second menu item 4014 has been executed less frequently than the other menu items, and that the user is less likely to select it in the future.

FIGS. 41 through 45 are block diagrams of the device 100, according to exemplary embodiments.

Referring to FIG. 41, the device 100 may include a user input unit 110, a controller 120, and a display 130. For example, the device 100 may provide an effect to a still image, a moving picture frame, a live view image, or a screen image displayed on the display 130.

Referring to FIG. 42, in other exemplary embodiments, the device 100 may include a user input unit 110, a controller 120, a display 130, and a memory 140. The device 100 may provide an effect to a still image or a moving picture stored in the memory 140.

Referring to FIG. 43, in other exemplary embodiments, the device 100 may include a user input unit 110, a controller 120, a display 130, and a communicator 150. The device 100 may provide an effect to a still image or a moving picture stored in an external device, or to a live view image captured by the external device.

Referring to FIG. 44, in other exemplary embodiments, the device 100 may include a user input unit 110, a controller 120, a display 130, and a camera 160. The device 100 may provide an effect to a live view image captured by the camera 160. However, not all of the illustrated components are essential. The device 100 may be implemented with more or fewer components than those illustrated in FIG. 41, 42, 43, or 44, or with any combination of the components illustrated therein.

For example, as illustrated in FIG. 45, in addition to the components included in each of the devices 100 illustrated in FIGS. 41 through 44, the device 100 may further include an outputter 170, a communicator 150, a sensor 180, and a microphone 190.

The above components will now be described in detail.

The user input unit 110 is a unit via which the user inputs data for controlling the device 100. For example, the user input unit 110 may be, but is not limited to, a keypad, a dome switch, a touch pad (e.g., a capacitive overlay type, a resistive overlay type, an infrared beam type, an integral strain gauge type, a surface acoustic wave type, or a piezoelectric type), a jog wheel, or a jog switch.

The user input unit 110 may receive a user input that selects a region of interest on an image. According to exemplary embodiments, the user input for selecting the region of interest may vary. For example, the user input may be a key input, a touch input, a motion input, a bending input, a voice input, or a multiple input.

According to an exemplary embodiment, the user input unit 110 may receive an input of selecting a first image and a second image from among a plurality of images.

The user input unit 110 may receive an input of selecting at least one piece of identification information from an identification information list.

The controller 120 typically controls all operations of the device 100. For example, the controller 120 may control the user input unit 110, the outputter 170, the communicator 150, the sensor 180, and the microphone 190 by executing programs stored in the memory 140.

The controller 120 may acquire at least one piece of identification information for identifying the selected region of interest. For example, the controller 120 may generate the identification information by checking attribute information of the selected region of interest and generalizing the attribute information. The controller 120 may detect the identification information by using image analysis information about the selected region of interest. In addition to the identification information of the region of interest, the controller 120 may also acquire identification information of a second image.

The controller 120 may provide an effect to the region of interest so that the object or background corresponding to the region of interest is displayed completely differently from how it was previously displayed. The effect may be, for example, a halo effect that highlights the region of interest, a blur effect that reduces differences between the pixel values of the region of interest, a size effect that adjusts the size of the region of interest, or a depth effect that changes the depth information of the region of interest.

The controller 120 may provide an effect to a first image by separating, from a second image, a partial image corresponding to the region of interest of the first image and combining the separated partial image with the region of interest of the first image.

The display 130 may display information processed by the device 100. For example, the display 130 may display a still image, a moving picture, or a live view image. The display 130 may also display identification information that identifies a region of interest. The display 130 may also display an effect image, and may display an effect folder including effect images.

When the display 130 forms a layer structure together with a touch pad to construct a touch screen, the display 130 may be used as both an input device and an output device. The display 130 may include at least one selected from a liquid crystal display (LCD), a thin film transistor-liquid crystal display (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a 3D display, and an electrophoretic display. According to exemplary embodiments of the device 100, the device 100 may include at least two displays 130.

The memory 140 may store programs used by the controller 120 to perform processing and control, and may also store input/output data (for example, a plurality of images, a plurality of folders, and a preference folder list).

The memory 140 may include at least one storage medium selected from a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, secure digital (SD) or extreme digital (XD) memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable ROM (EEPROM), programmable ROM (PROM), magnetic memory, a magnetic disk, and an optical disk. The device 100 may operate web storage on the Internet that performs the storage function of the memory 140.

The programs stored in the memory 140 may be classified into a plurality of modules according to their functions, for example, a user interface (UI) module 141, a notification module 142, and an image processing module 143.

The UI module 141 may provide a user interface, a graphical user interface, or the like that is specialized for each application and interoperates with the device 100. The notification module 142 may generate a signal for notifying that an event has occurred in the device 100. The notification module 142 may output the notification signal in the form of a video signal via the display 130, in the form of an audio signal via the audio outputter 172, or in the form of a vibration signal via the vibration motor 173.

The image processing module 143 may acquire object information, edge information, atmosphere information, color information, and the like included in a captured image by analyzing the captured image.

According to an exemplary embodiment, the image processing module 143 may detect the contours of objects included in a captured image. According to an exemplary embodiment, the image processing module 143 may acquire the type, name, and the like of an object by comparing the contour of the object included in the image with predefined templates. For example, when the contour of the object is similar to a template of a vehicle, the image processing module 143 may recognize the object included in the image as a vehicle.

According to an exemplary embodiment, the image processing module 143 may perform face recognition on an object included in an image. For example, the image processing module 143 may detect a face region of a person from the image. Examples of face region detection methods include knowledge-based methods, feature-based methods, template matching methods, and appearance-based methods, but the exemplary embodiments are not limited thereto.

The image processing module 143 may extract facial features (for example, the shapes of the eyes, nose, and mouth, which are major parts of a face) from the detected face region. To extract facial features from the face region, a Gabor filter, local binary patterns, or the like may be used, but the exemplary embodiments are not limited thereto.

The image processing module 143 may compare the facial features extracted from the face region in the image with the facial features of pre-registered users. For example, when the extracted facial features are similar to the facial features of a pre-registered first registrant (for example, Tom), the image processing module 143 may determine that an image of the first registrant is included in the image.

According to an exemplary embodiment, the image processing module 143 may compare a certain region of the image with a color map (a color histogram) and extract visual features (for example, the color arrangement, pattern, and atmosphere of the image) as image analysis information.
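One simple form of such a color-histogram comparison is sketched below. The bin count, the flat-intensity-list input, and the histogram-intersection similarity measure are illustrative choices for the sketch, not requirements of the disclosure.

```python
def color_histogram(pixels, bins=4, max_value=256):
    # Coarse intensity histogram, normalized so the bins sum to 1.
    width = max_value // bins
    hist = [0] * bins
    for p in pixels:
        hist[min(p // width, bins - 1)] += 1
    return [count / len(pixels) for count in hist]

def histogram_intersection(h1, h2):
    # Returns 1.0 for identical distributions, less for dissimilar ones.
    return sum(min(a, b) for a, b in zip(h1, h2))
```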

The communicator 150 may include at least one component that enables the device 100 to perform data communication with a cloud server, an external device, an SNS server, or an external wearable device. For example, the communicator 150 may include a short-range wireless communicator 151, a mobile communicator 152, and a broadcast receiver 153.

The short-range wireless communicator 151 may include, but is not limited to, a Bluetooth communicator, a Bluetooth Low Energy (BLE) communicator, a near field communication (NFC) communicator, a wireless local area network (WLAN) (e.g., Wi-Fi) communicator, a ZigBee communicator, an Infrared Data Association (IrDA) communicator, a Wi-Fi Direct (WFD) communicator, an ultra wideband (UWB) communicator, and an Ant+ communicator.

The mobile communicator 152 may exchange wireless signals with at least one selected from a base station, an external terminal, and a server on a mobile communication network. Examples of the wireless signals include voice call signals, video call signals, and various types of data generated during a short message service (SMS)/multimedia messaging service (MMS).

The broadcast receiver 153 receives broadcast signals and/or broadcast-related information from an external source via a broadcast channel. The broadcast channel may be a satellite channel, a terrestrial channel, or the like. According to exemplary embodiments, the device 100 may not include the broadcast receiver 153.

The communicator 150 may share at least one selected from a first image, a second image, an effect image, an effect folder, and identification information with an external device. The external device may be at least one selected from a cloud server connected to the device 100, an SNS server, another device 100 of the same user, and a device 100 of another user, but the exemplary embodiments are not limited thereto.

For example, the communicator 150 may provide an effect image or an effect folder to an external device. The communicator 150 may receive, from an external device, a still image or a moving picture stored in the external device, or a live view image captured by the external device.

Image frames obtained by the camera 160 may be stored in the memory 140 or transmitted to the outside via the communicator 150. According to exemplary embodiments of the device 100, at least two cameras 160 may be included.

The outputter 170 outputs audio signals, video signals, or vibration signals, and may include an audio outputter 172 and a vibration motor 173.

The audio outputter 172 may output audio data received from the communicator 150 or stored in the memory 140. The audio outputter 172 may also output audio signals related to functions of the device 100 (for example, a call signal reception sound, a message reception sound, or a notification sound). The audio outputter 172 may include a speaker, a buzzer, or the like.

The vibration motor 173 may output a vibration signal. For example, the vibration motor 173 may output a vibration signal corresponding to the output of audio data or video data (for example, a call signal reception sound or a message reception sound). The vibration motor 173 may also output a vibration signal when a touch screen is touched.

The sensor 180 may sense the state of the device 100, the state of the surroundings of the device 100, or the state of a user wearing the device 100, and may transmit information corresponding to the sensed state to the controller 120.

The sensor 180 may include, but is not limited to, at least one selected from a magnetic sensor 181, an acceleration sensor 182, a tilt sensor 183, an infrared sensor 184, a gyroscope sensor 185, a position sensor (e.g., GPS) 186, an atmospheric pressure sensor 187, a proximity sensor 188, and an optical sensor 189. The sensor 180 may also include, for example, a temperature sensor, an illuminance sensor, a pressure sensor, and an iris recognition sensor. The functions of most of these sensors would be intuitively understood by one of ordinary skill in the art from their names, and thus detailed descriptions thereof are omitted herein.

The microphone 190 may be included as an audio/video (A/V) input unit.

The microphone 190 receives an external audio signal and converts the external audio signal into electrical audio data. For example, the microphone 190 may receive audio signals from an external device or from a person who is speaking. The microphone 190 may use various noise removal algorithms to remove noise generated while receiving the external audio signal.

圖46是根據示例性實施例,遠端伺服器或雲端伺服器200的結構的方框圖。 FIG. 46 is a block diagram showing the structure of a remote server or cloud server 200, according to an exemplary embodiment.

參照圖46，雲端伺服器200可包括通訊器210、控制器220、及儲存器230。然而，並非所有所說明的組件皆為必不可少的。雲端伺服器200可由相較於圖46中所說明者更多或更少的組件實作。 Referring to FIG. 46, the cloud server 200 may include a communicator 210, a controller 220, and a storage 230. However, not all of the illustrated components are essential. The cloud server 200 may be implemented with more or fewer components than those illustrated in FIG. 46.

現在將詳細闡述上述組件。 The above components will now be explained in detail.

通訊器210可包括使雲端伺服器200與裝置100之間能夠通訊的至少一個組件。通訊器210可包括接收器及傳送器。 The communicator 210 can include at least one component that enables communication between the cloud server 200 and the device 100. The communicator 210 can include a receiver and a transmitter.

通訊器210可將儲存於雲端伺服器200中的影像或影像清單傳送至裝置100。舉例而言，當通訊器210自經由具體帳戶所連接的裝置100接收對影像清單的請求時，通訊器210可將儲存於雲端伺服器200中的影像清單傳送至裝置100。 The communicator 210 may transmit an image or an image list stored in the cloud server 200 to the device 100. For example, when the communicator 210 receives a request for an image list from the device 100 connected via a specific account, the communicator 210 may transmit the image list stored in the cloud server 200 to the device 100.

通訊器210可將儲存於雲端伺服器200中或由雲端伺服器200所產生的辨識資訊傳送至裝置100。 The communicator 210 can transmit the identification information stored in the cloud server 200 or generated by the cloud server 200 to the device 100.

控制器220控制雲端伺服器200的所有操作。舉例而言,控制器220可取得用於辨識影像的多條辨識資訊。根據示例性實施例,所述多條辨識資訊可為用於辨識影像的至少兩個核心詞或核心片語。 The controller 220 controls all operations of the cloud server 200. For example, the controller 220 can obtain a plurality of pieces of identification information for identifying an image. According to an exemplary embodiment, the plurality of pieces of identification information may be at least two core words or core phrases for identifying images.

舉例而言，當在影像的元資料中預先定義有多條辨識資訊時，控制器220可自影像的元資料取得多條辨識資訊。雲端伺服器200可利用選自影像的屬性資訊及影像分析資訊中的至少一者而取得用於辨識影像的多條辨識資訊。 For example, when a plurality of pieces of identification information are predefined in the metadata of an image, the controller 220 may obtain the plurality of pieces of identification information from the metadata of the image. The cloud server 200 may obtain the plurality of pieces of identification information for identifying the image by using at least one of attribute information of the image and image analysis information of the image.
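For illustration only, the following Python sketch shows one way the identification step described above might work: prefer identification information predefined in the metadata, otherwise derive keywords from attribute information, then supplement with image-analysis labels. The function name and the metadata keys are assumptions for this sketch, not part of the disclosure.

```python
# Hypothetical sketch of the identification-information step; the metadata
# keys ("identification", "location", "event", "annotation") are assumed.

def get_identification_info(metadata, analysis_labels):
    """Collect identification keywords for an image.

    metadata: dict of attribute information (e.g., capture context, annotations)
    analysis_labels: keywords produced by separate image analysis (e.g., "dog")
    """
    keywords = []
    # 1. Prefer identification information predefined in the metadata.
    predefined = metadata.get("identification")
    if predefined:
        keywords.extend(predefined)
    # 2. Otherwise (or additionally) derive keywords from attribute information.
    for key in ("location", "event", "annotation"):
        value = metadata.get(key)
        if value:
            keywords.append(value)
    # 3. Supplement with image-analysis labels.
    keywords.extend(analysis_labels)
    # Deduplicate while preserving order.
    seen = set()
    return [k for k in keywords if not (k in seen or seen.add(k))]

info = get_identification_info(
    {"location": "Seoul", "annotation": "birthday"},
    ["person", "cake"],
)
print(info)  # ['Seoul', 'birthday', 'person', 'cake']
```

In this sketch, metadata-derived keywords come first so that user-supplied attributes take precedence over automatically generated analysis labels.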

儲存器230可儲存由控制器220用以執行處理的程式,抑或可儲存輸入/輸出資料。舉例而言,雲端伺服器200可建立影像資料庫(DB)、裝置的資料庫、使用者的面部特徵資訊的資料庫、及物體模板資料庫。 The storage 230 can store programs used by the controller 220 to perform processing, or can store input/output data. For example, the cloud server 200 can establish a video database (DB), a database of devices, a database of facial feature information of the user, and an object template database.

儲存器230可儲存多個影像。舉例而言,儲存器230可儲存自裝置100上傳的影像。在此種情形中,儲存器230可使裝置100的辨識資訊與影像映射並對其進行儲存。 The storage 230 can store a plurality of images. For example, the storage 230 can store images uploaded from the device 100. In this case, the storage 230 can map and store the identification information of the device 100 with the image.

一種根據示例性實施例的方法可被實施為可由各種電腦機構執行的程式命令，且可記錄於電腦可讀取記錄媒體上。電腦可讀取記錄媒體可單獨或以組合形式包括程式命令、資料檔案、資料結構等。將被記錄於電腦可讀取記錄媒體上的程式命令可針對示例性實施例而被專門設計及配置，抑或可被電腦軟體技術中具有通常知識者所習知並可被電腦軟體技術中具有通常知識者所使用。電腦可讀取記錄媒體的實例包括：磁性媒體，例如硬碟、軟碟、或磁帶；光學媒體，例如光碟唯讀記憶體(compact disk-read-only memory,CD-ROM)或數位多功能光碟(digital versatile disk,DVD)；磁光媒體，例如軟光碟；以及專門用於儲存並執行程式命令的硬體裝置，例如唯讀記憶體、隨機存取記憶體(random-access memory,RAM)、或快閃記憶體。程式命令的實例為可由電腦利用解譯器等執行的高級語言代碼、以及由編譯器製作的機器語言代碼。 A method according to an exemplary embodiment may be implemented as program commands executable by various computer means and may be recorded on a computer-readable recording medium. The computer-readable recording medium may include program commands, data files, data structures, and the like, alone or in combination. The program commands recorded on the computer-readable recording medium may be specially designed and configured for the exemplary embodiments, or may be well known to and usable by those of ordinary skill in computer software. Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as compact disk read-only memory (CD-ROM) and digital versatile disks (DVD); magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program commands, such as read-only memory (ROM), random-access memory (RAM), and flash memory. Examples of the program commands include high-level language code executable by a computer using an interpreter or the like, as well as machine language code produced by a compiler.

示例性實施例應被理解為僅具有闡述意義而並非用於限制目的。對每一示例性實施例中的特徵或態樣的說明通常應被理解為可用於其他示例性實施例中的其他類似特徵或態樣。 The exemplary embodiments should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each exemplary embodiment should typically be considered as available for other similar features or aspects in other exemplary embodiments.

儘管已具體顯示並闡述了各示例性實施例，但此項技術中具有通常知識者應理解，在不背離以下申請專利範圍的精神及範圍的條件下可對本發明作出各種形式及細節上的變化。 While the exemplary embodiments have been particularly shown and described, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the following claims.

Claims (15)

一種影像提供方法，包括：顯示第一影像，所述第一影像為包括物體及背景的移動影像；接收將所述物體或所述背景選擇為感興趣區域的使用者輸入；基於所述第一影像的第一屬性資訊來取得與所述感興趣區域相關聯的第一辨識資訊；自目標影像取得第二影像，所述第二影像包括第二辨識資訊，所述第二辨識資訊相同於所述第一辨識資訊；以及藉由提供效果到所述第一影像的所述感興趣區域以在所述第二影像中產生效果影像，並且藉由提供所述第二影像的部分影像以在所述第二影像中產生另一效果影像，其中所述第二影像的所述部分影像與提供到所述第一影像的所述感興趣區域具備相同效果，且所述部分影像對應於所述第一辨識資訊，其中，因應於接收到所述使用者輸入，自動執行所述產生所述效果影像、所述取得所述第一辨識資訊、所述取得所述第二影像以及所述產生所述另一效果影像。 An image providing method, comprising: displaying a first image, the first image being a moving image comprising an object and a background; receiving a user input selecting the object or the background as a region of interest; obtaining first identification information associated with the region of interest based on first attribute information of the first image; obtaining a second image from target images, the second image comprising second identification information that is identical to the first identification information; and generating an effect image in the second image by applying an effect to the region of interest of the first image, and generating another effect image in the second image by providing a partial image of the second image, wherein the partial image of the second image is given the same effect as the effect applied to the region of interest of the first image, and the partial image corresponds to the first identification information, and wherein the generating of the effect image, the obtaining of the first identification information, the obtaining of the second image, and the generating of the another effect image are performed automatically in response to receiving the user input.

如申請專利範圍第1項所述的影像提供方法，其中所述第一屬性資訊包括與所述第一影像的產生相關聯的上下文資訊與關於所述第一影像的註解資訊中的至少一者，所述註解資訊是由使用者添加。 The image providing method of claim 1, wherein the first attribute information comprises at least one of context information associated with generation of the first image and annotation information about the first image, the annotation information being added by a user.
如申請專利範圍第1項所述的影像提供方法，其中所述第一辨識資訊是藉由基於字網來歸納所述第一屬性資訊而取得。 The image providing method of claim 1, wherein the first identification information is obtained by generalizing the first attribute information based on a word network.

如申請專利範圍第1項所述的影像提供方法，其中所述取得所述第二影像的步驟包括：利用所述第二影像的第二屬性資訊與所述第二影像的影像分析資訊中的至少一者來取得所述第二影像的第二辨識資訊。 The image providing method of claim 1, wherein the obtaining of the second image comprises obtaining the second identification information of the second image by using at least one of second attribute information of the second image and image analysis information of the second image.

如申請專利範圍第1項所述的影像提供方法，其中所述感興趣區域的所述第一辨識資訊是自所述第一屬性資訊取得，所述第一屬性資訊包括所述第一影像的多個屬性。 The image providing method of claim 1, wherein the first identification information of the region of interest is obtained from the first attribute information, the first attribute information comprising a plurality of attributes of the first image.

如申請專利範圍第5項所述的影像提供方法，更包括：接收使用者輸入，所述使用者輸入選擇所述第一影像的所述多個屬性中的至少一者；以及基於所述所選擇的至少一個屬性來產生所述第一辨識資訊，且其中所述取得所述第二影像的步驟包括：將所述第一辨識資訊與所述目標影像的第三辨識資訊進行比較。 The image providing method of claim 5, further comprising: receiving a user input selecting at least one of the plurality of attributes of the first image; and generating the first identification information based on the selected at least one attribute, wherein the obtaining of the second image comprises comparing the first identification information with third identification information of the target images.

如申請專利範圍第1項所述的影像提供方法，其中所述產生所述效果影像的步驟包括：顯示所述第二影像的所述部分影像，所述部分影像對應於所述第一辨識資訊。 The image providing method of claim 1, wherein the generating of the effect image comprises displaying the partial image of the second image, the partial image corresponding to the first identification information.
如申請專利範圍第1項所述的影像提供方法，其中所述效果影像是藉由將所述第二影像的所述部分影像與所述第一影像的所述感興趣區域進行組合而獲得，其中所述部分影像對應於所述第一辨識資訊。 The image providing method of claim 1, wherein the effect image is obtained by combining the partial image of the second image with the region of interest of the first image, the partial image corresponding to the first identification information.

如申請專利範圍第1項所述的影像提供方法，其中所述第一影像是即時取景影像。 The image providing method of claim 1, wherein the first image is a live view image.

如申請專利範圍第9項所述的影像提供方法，其中所述第二影像是在接收到用於儲存影像的使用者輸入之前自所述即時取景影像產生的臨時影像。 The image providing method of claim 9, wherein the second image is a temporary image generated from the live view image before a user input for storing an image is received.

一種提供影像的裝置，包括：顯示器，用以顯示第一影像，所述第一影像為包括物體及背景的移動影像；使用者輸入，用以接收將所述物體或所述背景選擇為感興趣區域的使用者輸入；控制器，用以基於所述第一影像的第一屬性資訊來取得所述感興趣區域的第一辨識資訊，並自目標影像取得第二影像，其中所述第二影像包括第二辨識資訊，且所述第二辨識資訊相同於所述第一辨識資訊，所述控制器藉由提供效果到所述第一影像的所述感興趣區域以在所述第二影像中產生效果影像，並且藉由提供所述第二影像的部分影像以在所述第二影像中產生另一效果影像，其中所述第二影像的所述部分影像與提供到所述第一影像的所述感興趣區域具備相同效果，且所述部分影像對應於所述第一辨識資訊，其中，因應於接收到所述使用者輸入，自動執行所述產生所述效果影像、所述取得所述第一辨識資訊、所述取得所述第二影像以及所述產生所述另一效果影像。 A device for providing an image, comprising: a display configured to display a first image, the first image being a moving image comprising an object and a background; a user inputter configured to receive a user input selecting the object or the background as a region of interest; and a controller configured to obtain first identification information of the region of interest based on first attribute information of the first image, and to obtain a second image from target images, wherein the second image comprises second identification information that is identical to the first identification information, wherein the controller generates an effect image in the second image by applying an effect to the region of interest of the first image, and generates another effect image in the second image by providing a partial image of the second image, wherein the partial image of the second image is given the same effect as the effect applied to the region of interest of the first image, and the partial image corresponds to the first identification information, and wherein the generating of the effect image, the obtaining of the first identification information, the obtaining of the second image, and the generating of the another effect image are performed automatically in response to receiving the user input.

如申請專利範圍第11項所述的提供影像的裝置，其中所述控制器還用以基於所述第一影像與所述第二影像中的至少一者而產生效果影像。 The image providing device of claim 11, wherein the controller is further configured to generate an effect image based on at least one of the first image and the second image.

如申請專利範圍第12項所述的提供影像的裝置，其中所述效果影像是藉由將所述第二影像的所述部分影像與所述第一影像的所述感興趣區域進行組合而獲得，且其中所述部分影像是所述第二影像的對應於所述第一辨識資訊的部分。 The image providing device of claim 12, wherein the effect image is obtained by combining the partial image of the second image with the region of interest of the first image, and wherein the partial image is a portion of the second image corresponding to the first identification information.

如申請專利範圍第12項所述的提供影像的裝置，其中所述第一屬性資訊包括與所述第一影像的產生相關聯的上下文資訊與關於所述第一影像的註解資訊中的至少一者，所述註解資訊是由使用者添加。 The image providing device of claim 12, wherein the first attribute information comprises at least one of context information associated with generation of the first image and annotation information about the first image, the annotation information being added by a user.
如申請專利範圍第11項所述的提供影像的裝置，其中所述控制器還用以藉由將所述第二影像的所述部分影像與所述第一影像的所述感興趣區域進行組合而產生所述效果影像，其中所述部分影像與所述第一辨識資訊相關聯。 The image providing device of claim 11, wherein the controller is further configured to generate the effect image by combining the partial image of the second image with the region of interest of the first image, the partial image being associated with the first identification information.
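The overall flow recited in the method claim above can be summarized in a short sketch: select a region of interest, take its identification information, find target images whose identification information matches, and apply the same effect to the matching partial images. This is illustrative only; the function name, data shapes, and the exact-equality matching rule are assumptions, not taken from the claims.

```python
# Illustrative sketch of the claimed flow; all names and the matching rule
# are hypothetical assumptions for this example.

def provide_effect_images(roi, target_images, effect):
    """Apply `effect` to the selected region of interest (roi), then to the
    matching partial image of every target image whose identification
    information equals that of the region of interest."""
    first_info = roi["identification"]    # derived from first attribute info
    results = [effect(roi["pixels"])]     # effect image for the ROI itself
    for image in target_images:
        for part in image["parts"]:
            # "Second image": its identification information matches the ROI's.
            if part["identification"] == first_info:
                results.append(effect(part["pixels"]))
    return results

# Toy "effect": halve every pixel value.
blur = lambda pixels: [p // 2 for p in pixels]

out = provide_effect_images(
    roi={"identification": "dog", "pixels": [8, 8]},
    target_images=[{"parts": [{"identification": "dog", "pixels": [4, 4]},
                              {"identification": "tree", "pixels": [2, 2]}]}],
    effect=blur,
)
print(out)  # [[4, 4], [2, 2]]
```

Note how the same `effect` callable is applied both to the region of interest and to each matching partial image, mirroring the claim language that the partial image "has the same effect" as the region of interest.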
TW104124634A 2014-07-31 2015-07-30 Method and device for providing image TWI637347B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
KR20140098589 2014-07-31
??10-2014-0098589 2014-07-31
KR20140111628 2014-08-26
??10-2014-0111628 2014-08-26
KR1020150078777A KR102301231B1 (en) 2014-07-31 2015-06-03 Method and device for providing image
??10-2015-0078777 2015-06-03

Publications (2)

Publication Number Publication Date
TW201618038A TW201618038A (en) 2016-05-16
TWI637347B true TWI637347B (en) 2018-10-01

Family

ID=55357252

Family Applications (1)

Application Number Title Priority Date Filing Date
TW104124634A TWI637347B (en) 2014-07-31 2015-07-30 Method and device for providing image

Country Status (2)

Country Link
KR (1) KR102301231B1 (en)
TW (1) TWI637347B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI751571B (en) * 2020-06-02 2022-01-01 仁寶電腦工業股份有限公司 Video playback system and environment atmosphere adjusting method

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10007974B2 (en) * 2016-06-10 2018-06-26 Sensors Unlimited, Inc. Enhancing images
KR102561305B1 (en) * 2016-11-03 2023-07-31 한화비전 주식회사 Apparatus for Providing Image and Method Thereof
KR20220017242A (en) * 2020-08-04 2022-02-11 삼성전자주식회사 Electronic device generating an image by applying effects to subject and background and Method thereof
KR20220101783A (en) * 2021-01-12 2022-07-19 삼성전자주식회사 Method for providing contents creation function and electronic device supporting the same

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040076342A1 (en) * 2001-12-20 2004-04-22 Ricoh Company, Ltd. Automatic image placement and linking
US20050036044A1 (en) * 2003-08-14 2005-02-17 Fuji Photo Film Co., Ltd. Image pickup device and image synthesizing method
US20070008321A1 (en) * 2005-07-11 2007-01-11 Eastman Kodak Company Identifying collection images with special events
US20090006474A1 (en) * 2007-06-29 2009-01-01 Microsoft Corporation Exposing Common Metadata in Digital Images
KR20110052247A (en) * 2009-11-12 2011-05-18 삼성전자주식회사 Camera apparatus for providing photograph image, display apparatus for displaying the photograph image and relating image, and methods thereof
US20110167081A1 (en) * 2010-01-05 2011-07-07 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20120002899A1 (en) * 2010-07-05 2012-01-05 Orr Iv James Edmund Aligning Images
US20120005209A1 (en) * 2010-05-24 2012-01-05 Intersect Ptp, Inc. Systems and methods for identifying intersections using content metadata
US20120092357A1 (en) * 2010-10-14 2012-04-19 Microsoft Corporation Region-Based Image Manipulation
US20120299958A1 (en) * 2007-12-21 2012-11-29 Sony Corporation Image processing apparatus, dynamic picture reproduction apparatus, and processing method and program for the same
US20120307096A1 (en) * 2011-06-05 2012-12-06 Apple Inc. Metadata-Assisted Image Filters
TW201327423A (en) * 2011-11-01 2013-07-01 Nokia Corp Apparatus and method for forming images
US20130330088A1 (en) * 2012-05-24 2013-12-12 Panasonic Corporation Information communication device
US20140009796A1 (en) * 2012-07-09 2014-01-09 Canon Kabushiki Kaisha Information processing apparatus and control method thereof
US20140037157A1 (en) * 2011-05-25 2014-02-06 Sony Corporation Adjacent person specifying apparatus, adjacent person specifying method, adjacent person specifying program, and adjacent person specifying system
US20140075393A1 (en) * 2012-09-11 2014-03-13 Microsoft Corporation Gesture-Based Search Queries
US20140078075A1 (en) * 2012-09-18 2014-03-20 Adobe Systems Incorporated Natural Language Image Editing
US20140164988A1 (en) * 2012-12-06 2014-06-12 Microsoft Corporation Immersive view navigation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9058647B2 (en) 2012-01-16 2015-06-16 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
JP6049345B2 (en) 2012-08-10 2016-12-21 キヤノン株式会社 Image processing apparatus, method, and program



Also Published As

Publication number Publication date
KR102301231B1 (en) 2021-09-13
TW201618038A (en) 2016-05-16
KR20160016574A (en) 2016-02-15

Similar Documents

Publication Publication Date Title
US10733716B2 (en) Method and device for providing image
TWI585712B (en) Method and device for classifying image
KR102402511B1 (en) Method and device for searching image
RU2654133C2 (en) Three-dimensional object browsing in documents
TWI637347B (en) Method and device for providing image
CN116382554A (en) Improved drag and drop operations on mobile devices
US10191920B1 (en) Graphical image retrieval based on emotional state of a user of a computing device
KR20160062565A (en) Device and method for providing handwritten content
US11769500B2 (en) Augmented reality-based translation of speech in association with travel
WO2022005838A1 (en) Travel-based augmented reality content for images
CN116671121A (en) AR content for multi-video clip capture
US20220375137A1 (en) Presenting shortcuts based on a scan operation within a messaging system
US20240073166A1 (en) Combining individual functions into shortcuts within a messaging system
US20220058228A1 (en) Image based browser navigation
CN117597940A (en) User interface for presenting functions suitable for use in a camera device

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees