CN113994652B - Electronic apparatus and control method thereof
- Publication number: CN113994652B (application number CN202080040734.6A)
- Authority: CN (China)
- Prior art keywords: function, image, still image, display, electronic device
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F16/739: Information retrieval of video data; presentation of query results in the form of a video summary, e.g. a video sequence, a composite still image or synthesized frames
- H04M1/72403: User interfaces specially adapted for cordless or mobile telephones, with means for local support of applications that increase the functionality
- G06F16/7837: Retrieval of video data characterised by using metadata automatically derived from the content, using objects detected or recognised in the video content
- G06F16/735: Querying of video data; filtering based on additional data, e.g. user or group profiles
- G06F16/7847: Retrieval of video data characterised by using metadata automatically derived from the content, using low-level visual features of the video content
- G11B27/34: Editing, indexing, addressing, timing or synchronising of information storage; indicating arrangements
- H04N1/2145: Intermediate information storage for one or a few pictures using still video cameras, with temporary storage in a multi-frame buffer of a sequence of images for selection of a single frame before final recording
- H04N1/387: Composing, repositioning or otherwise geometrically modifying originals
- H04N1/40: Picture signal circuits
- H04N21/234318: Processing of video elementary streams involving reformatting operations of video signals, by decomposing into objects, e.g. MPEG-4 objects
- H04N21/234327: Processing of video elementary streams involving reformatting operations of video signals, by decomposing into layers, e.g. base layer and one or more enhancement layers
- H04M2201/34: Electronic components used in telephone systems; microprocessors
- H04M2201/36: Electronic components used in telephone systems; memories
- H04M2201/38: Electronic components used in telephone systems; displays
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Library & Information Science (AREA)
- Computational Linguistics (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- User Interface Of Digital Computer (AREA)
- Controls And Circuits For Display Device (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
An electronic device is provided. The electronic device includes a display, at least one processor, and at least one memory configured to store instructions that cause the at least one processor to: obtaining first information from a first still image frame included in a first moving image; obtaining second information from the first moving image; identifying at least one image function based on at least one of the first information or the second information; and controlling the display to display at least one function execution object for executing the at least one image function. Various other embodiments may be provided.
Description
Technical Field
The present disclosure relates to an electronic apparatus and a control method thereof. More particularly, the present disclosure relates to an electronic device capable of providing a function corresponding to an image.
Background
With the widespread use of various electronic devices, such as smartphones, tablet personal computers, portable multimedia players, personal digital assistants, laptop personal computers, and wearable devices, there is an increasing interest in techniques for viewing or editing images using electronic devices.
The user may generate a desired moving image or still image by applying an image correction function (e.g., correcting a portrait or inserting text).
The above information is presented as background information only to assist with an understanding of the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present disclosure.
Disclosure of Invention
Technical problem
When correcting a moving image or a still image using an electronic device, a user may find it inconvenient to search for and select an image correction function suitable for the image.
Solution to the problem
According to one aspect of the present disclosure, an electronic device is provided. The electronic device includes a display, at least one processor, and at least one memory configured to store instructions that cause the at least one processor to: obtaining first information from a first still image frame included in a first moving image; obtaining second information from the first moving image; identifying at least one image function based on at least one of the first information or the second information; and controlling the display to display at least one function execution object for executing the at least one image function.
According to another aspect of the present disclosure, a control method for an electronic device is provided. The control method includes: obtaining first information from a first still image frame included in a first moving image; obtaining second information from the first moving image; identifying at least one image function based on at least one of the first information or the second information; and displaying at least one function execution object for executing the at least one image function.
Advantageous Effects of Invention
Aspects of the present disclosure aim to address at least the above problems and/or disadvantages and to provide at least the advantages described below. Accordingly, it is an aspect of the present disclosure to provide an electronic device capable of providing an image correction function corresponding to a display image.
Additional aspects will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the presented embodiments.
According to various embodiments of the present disclosure, an electronic device may identify an image correction function suitable for a still image displayed on a display and provide an object for performing the correction function.
According to various embodiments of the present disclosure, when the play of a moving image is paused, an electronic device may recognize an image correction function suitable for a still image displayed on a display and provide an object for performing the correction function.
According to various embodiments of the present disclosure, an electronic device may identify an image correction function suitable for a still image displayed on a display and provide an object for performing the correction function in cooperation with a server.
Other aspects, advantages, and salient features of the disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses various embodiments of the disclosure.
Drawings
The foregoing and other aspects, features, and advantages of certain embodiments of the present disclosure will become more apparent from the following description, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a screen display showing a case where an electronic device displays a function execution object according to an embodiment of the present disclosure;
FIG. 2 is a schematic block diagram of an electronic device and a server according to an embodiment of the present disclosure;
FIG. 3 is a screen display showing a case where an electronic device displays a function execution object based on an object recognition result according to an embodiment of the present disclosure;
FIG. 4 is a screen display showing a case where the electronic apparatus displays a function execution object based on a motion recognition result of an object included in a still image according to an embodiment of the present disclosure;
FIG. 5 is a screen display showing another case where the electronic apparatus displays a function execution object based on a motion recognition result of an object included in a still image according to an embodiment of the present disclosure;
FIG. 6 is a screen display showing a case where the electronic apparatus displays a function execution object based on a result of recognizing a natural landscape in a still image according to an embodiment of the present disclosure;
FIG. 7 is a screen display showing a case where an electronic device displays a shortcut function execution object according to an embodiment of the present disclosure;
FIG. 8 is a flowchart of a process for an electronic device to display a function execution object according to an embodiment of the present disclosure;
FIG. 9 is a timing diagram of a process for an electronic device to display a function execution object in conjunction with a server according to an embodiment of the present disclosure;
FIG. 10 is a block diagram of an electronic device in a network environment according to an embodiment of the present disclosure; and
FIG. 11 is a block diagram of a display device according to an embodiment of the present disclosure.
Throughout the drawings, it should be noted that the same reference numerals are used to describe the same or similar elements, features and structures.
Detailed Description
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of the various embodiments of the disclosure defined by the claims and their equivalents. It includes various specific details that aid in understanding, but these are merely to be considered exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to their bibliographical meanings, but are merely used by the inventors to enable a clear and consistent understanding of the present disclosure. Accordingly, it will be apparent to those skilled in the art that the following descriptions of the various embodiments of the present disclosure are provided for illustration only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents.
It should be understood that the singular forms "a", "an" and "the" include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a component surface" includes reference to one or more such surfaces.
In this disclosure, terms such as "comprise," "have," "include," or "may have" indicate the presence of the stated elements, components, operations, functions, features, etc., but do not preclude the presence or addition of one or more other elements, components, operations, functions, features, etc.
In this disclosure, the expression "A or B," "at least one of A and/or B," or "one or more of A and/or B" is intended to include any possible combination of the enumerated items. For example, the expression "A or B," "at least one of A and B," or "at least one of A or B" may denote all of the following: (1) a case including at least one A; (2) a case including at least one B; or (3) a case including both at least one A and at least one B.
In this disclosure, expressions such as "1st" or "first" and "2nd" or "second" may denote various elements, regardless of order and/or importance, and are used merely to distinguish one element from another. For example, a first user device and a second user device may represent different user devices, regardless of their order or importance. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure.
When a certain (first) element is referred to as being "(operatively or communicatively) coupled with/to" or "connected to" another (second) element, it should be understood that the first element may be coupled or connected to the second element directly or via any other (third) element. On the other hand, when a certain (first) element is referred to as being "directly coupled with/to" or "directly connected to" another (second) element, no other (third) element is present between the two.
In the present disclosure, the expression "configured to" may be used interchangeably with "suitable for," "having the capacity to," "designed to," "adapted to," "made to," or "capable of." In hardware, "configured to (or set to)" does not necessarily mean "specifically designed to." Instead, in some cases, the expression "a device configured to" may mean that the device is "capable of" doing something together with other devices or components. For example, "a processor configured (or set) to perform A, B, and C" may denote a dedicated processor (e.g., an embedded processor) for performing the corresponding operations, or a general-purpose processor (e.g., a central processing unit (CPU) or an application processor (AP)) that performs the corresponding operations by executing one or more software programs stored in a storage device.
The terminology used in the present disclosure is for the purpose of describing certain embodiments only and is not intended to limit the scope of other embodiments. Singular expressions may include plural expressions unless the context clearly indicates otherwise. The terms used herein, including technical or scientific terms, may have the same meaning as commonly understood by one of ordinary skill in the art. Among terms used herein, terms defined in a general dictionary may be interpreted as having the same or similar meaning as the context of the related art, and are not interpreted in an ideal or excessively formal sense unless explicitly defined herein. In some cases, the terms, even if defined herein, are not to be construed as excluding embodiments of the present disclosure.
According to various embodiments of the present disclosure, the electronic device may include at least one of a smartphone, a tablet personal computer (PC), a mobile phone, a video phone, an electronic book reader, a desktop PC, a laptop PC, a netbook computer, a workstation, a server, a personal digital assistant (PDA), a portable multimedia player (PMP), a Moving Picture Experts Group (MPEG-1 or MPEG-2) audio layer 3 (MP3) player, a medical device, a camera, or a wearable device. The wearable device may include at least one of an accessory-type device (e.g., a watch, a ring, a bracelet, an anklet, a necklace, glasses, contact lenses, or a head-mounted device (HMD)), a textile- or clothing-integrated device (e.g., electronic clothing), a body-attached device (e.g., a skin pad or a tattoo), or a bio-implantable circuit.
In particular embodiments, the electronic device may be a household appliance including at least one of a television (TV), a digital video disk (DVD) player, an audio player, an air conditioner, a vacuum cleaner, an oven, a microwave oven, a washing machine, an air cleaner, a set-top box, a home automation control panel, a security control panel, a media box (e.g., Samsung HomeSync™, Apple TV™, or Google TV™), a game console (e.g., Xbox™ or PlayStation™), an electronic dictionary, an electronic key, a camera, or an electronic photo frame.
An electronic device according to various embodiments will now be described in detail with reference to the accompanying drawings. In this disclosure, the term "user" may refer to a person using an electronic device or a device using the electronic device (e.g., an artificial intelligence electronic device).
Fig. 1 is a screen display showing a case where an electronic device displays a function execution object according to an embodiment of the present disclosure.
Referring to fig. 1, portion 1-a illustrates the electronic device 10 playing a moving image 100 on the display 11.
In one embodiment, the electronic device 10 may play the moving image 100 by using a moving image file previously stored in a memory of the electronic device 10. Alternatively, the electronic device 10 may receive a moving image file in streaming form from a connected server and play the moving image 100.
In one embodiment, the electronic device 10 may display on the display 11 various objects (i.e., icons) that control playback. For example, the electronic device 10 may display a progress bar 31 (e.g., a slider) indicating the full play time and the current play point (e.g., a cursor on the slider), a play stop object 33, a back object 35, or a forward object 37. The play stop object 33 may be displayed while the electronic device 10 plays a moving image, and the play start object 39 may be displayed when the play is paused or stopped.
In one embodiment, the electronic device 10 may pause the playing of the moving image 100 in response to an external input (e.g., user input) for selecting the play stop object 33. In this case, the electronic apparatus 10 may pause the playback of the moving image 100 while displaying the still image frame 110 output on the display 11 upon receiving the external input for selecting the playback stop object 33.
Referring to part 1-b of fig. 1, the electronic device 10 may pause the playing of the moving image 100 and display the still image frame 110 on the display 11 in response to a user input for selecting the play stop object 33. When the play of the moving image 100 is suspended, the electronic device 10 may display the play start object 39 on the display 11 instead of the play stop object 33.
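As a minimal illustration of this pause step, the following Kotlin sketch grabs the still image frame nearest the paused play point. It assumes an Android-based electronic device, a locally stored moving image file, and the platform MediaMetadataRetriever API; the function name frameAtPause and the supplied positionMs are assumptions of this sketch, not elements of the disclosure:

```kotlin
// A minimal sketch, assuming an Android-based device, a locally stored
// moving image file, and the platform MediaMetadataRetriever API.
// frameAtPause is a hypothetical name; positionMs (the paused play point)
// is assumed to be supplied by the player.
import android.graphics.Bitmap
import android.media.MediaMetadataRetriever

fun frameAtPause(videoPath: String, positionMs: Long): Bitmap? {
    val retriever = MediaMetadataRetriever()
    return try {
        retriever.setDataSource(videoPath)
        // OPTION_CLOSEST returns the frame nearest the paused position,
        // not merely the nearest sync (key) frame.
        retriever.getFrameAtTime(positionMs * 1000, MediaMetadataRetriever.OPTION_CLOSEST)
    } finally {
        retriever.release()
    }
}
```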
In one embodiment, the electronic device 10 may obtain the first information or the second information when the play of the moving image is stopped. However, the timing at which the electronic device 10 obtains the first information or the second information is not limited thereto.
In one embodiment, the electronic device 10 may obtain the first information from a still image frame 110 displayed on the display 11. The first information may include, for example, a scene recognition result of a still image frame output on the display 11. The scene recognition result may include, for example, a human recognition result that identifies a person included in the still image frame, an object recognition result that identifies a shape included in the still image frame, or a location recognition result that identifies a geographic region included in the still image frame.
The first information may also include, for example, a photographing time of a still image frame output on the display 11, or a result of comparison between the still image frame output on the display 11 and at least one still image frame preceding the still image frame output on the display 11 or at least one still image frame following the still image frame output on the display 11. The comparison result may indicate, for example, a movement of a person or object detected in the still image frame. However, the first information is not limited thereto.
In one embodiment, the electronic device 10 may obtain the second information from the moving image 100. The second information may include, for example, information about a photographing time (e.g., photographing start time, photographing end time), a photographing position, a play duration, a photographing format, a file name, resolution, or a frame rate of the moving image 100. However, the second information is not limited thereto.
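Most of the second information listed above can be read from the container metadata of the moving image file. The following Kotlin sketch, assuming the Android MediaMetadataRetriever API, shows one way to collect it; the SecondInfo holder and readSecondInfo are hypothetical names:

```kotlin
// A minimal sketch of collecting the second information from the moving
// image container, assuming Android's MediaMetadataRetriever.
import android.media.MediaMetadataRetriever

data class SecondInfo(
    val durationMs: Long?,
    val width: Int?,
    val height: Int?,
    val captureFps: Float?,
    val date: String?,      // shooting time, if recorded by the camera
    val location: String?   // shooting position, if recorded by the camera
)

fun readSecondInfo(videoPath: String): SecondInfo {
    val r = MediaMetadataRetriever()
    try {
        r.setDataSource(videoPath)
        return SecondInfo(
            durationMs = r.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION)?.toLongOrNull(),
            width = r.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH)?.toIntOrNull(),
            height = r.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT)?.toIntOrNull(),
            captureFps = r.extractMetadata(MediaMetadataRetriever.METADATA_KEY_CAPTURE_FRAMERATE)?.toFloatOrNull(),
            date = r.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DATE),
            location = r.extractMetadata(MediaMetadataRetriever.METADATA_KEY_LOCATION)
        )
    } finally {
        r.release()
    }
}
```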
In one embodiment, the electronic device 10 may identify at least one image-related function by using at least one of the first information or the second information. The electronic device 10 may display at least one function execution object for executing the identified at least one image-related function on the display 11.
For example, the electronic device 10 may identify person A as a result of scene recognition of the still image frame 110. The electronic device 10 may display, on the display 11, a portrait correction execution object (e.g., beauty) 51 for executing an image correction function that corrects the face of the person A. The electronic device 10 may also display an image generation execution object (e.g., an automatic clip) 53 on the display 11, and the image generation execution object 53 may generate a new moving image by extracting the still image frames in which the person A appears from the still image frames included in the moving image 100.
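One possible realization of this identification step is a simple rule table that maps recognition results to function execution objects. The sketch below is illustrative only; FirstInfo, ImageFunction, and identifyFunctions are hypothetical names, and a real implementation could use a trained model in place of fixed rules:

```kotlin
// An illustrative, rule-based sketch of the identification step.
data class FirstInfo(
    val persons: List<String>,   // human recognition results (e.g., "person A")
    val objects: List<String>,   // object/shape recognition results
    val isLandscape: Boolean,    // natural-landscape scene recognition result
    val hasMotion: Boolean       // movement detected across neighboring frames
)

enum class ImageFunction { PORTRAIT_CORRECTION, AUTO_CLIP, AUTO_GIF, ADD_TEXT, TAG_ICON }

fun identifyFunctions(first: FirstInfo): List<ImageFunction> {
    val functions = mutableListOf<ImageFunction>()
    if (first.persons.isNotEmpty()) {
        functions += ImageFunction.PORTRAIT_CORRECTION  // e.g., "beauty" (object 51)
        functions += ImageFunction.AUTO_CLIP            // e.g., "automatic clip" (object 53)
    }
    if (first.hasMotion) functions += ImageFunction.AUTO_GIF
    if (first.isLandscape) functions += ImageFunction.ADD_TEXT
    if (first.objects.isNotEmpty()) functions += ImageFunction.TAG_ICON
    // The second information (e.g., resolution or frame rate) could further
    // gate this list, such as disabling AUTO_GIF for very low frame rates.
    return functions
}
```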
In one embodiment, the electronic device 10 may perform the corresponding function in response to user input for selecting a function execution object. Referring to part 1-c of fig. 1, in response to a user input for selecting a portrait correction execution object (e.g., beauty) 51, the electronic device 10 may display an adjustment object (or icon) for adjusting portrait correction in detail on the display 11.
For example, the electronic device 10 may display a correction level setting bar (e.g., intensity) 61, an eye correction object (e.g., large eye) 63, a face correction object (e.g., thin face) 65, and a skin tone correction object (e.g., skin tone) 67 on the display 11. The user can easily correct the still image frame 110 including the person A output on the display 11 by using the adjustment objects 61, 63, 65, and 67 for portrait correction.
In various embodiments, in response to a user input for selecting the portrait correction execution object (e.g., beauty) 51, the electronic device 10 may directly apply a portrait correction effect to the still image frame 110 without displaying an adjustment object for adjusting portrait correction in detail. For example, the electronic device 10 may display an image in which the skin tone correction effect has been applied to the person A.
In various embodiments, the electronic device 10 may apply the portrait correction effect to the moving image 100 in response to a user input for selecting the portrait correction execution object (e.g., beauty) 51. For example, the electronic device 10 may recognize the still image frames in which the person A appears among the still image frames included in the moving image 100. The electronic device 10 may identify those still image frames while the moving image 100 is not being played, or alternatively while the moving image 100 is being played. The electronic device 10 may then play the moving image 100 while applying the portrait correction effect to the face of the person A in each still image frame in which the person A appears.
As described above, the electronic device 10 according to the embodiment of the present disclosure may provide the user with the function execution object that can apply the correction effect to the person appearing in the still image frame based on the human recognition result.
Fig. 2 is a schematic block diagram of an electronic device and a server according to an embodiment of the present disclosure.
Referring to portion 2-a of fig. 2, the electronic device 10 may include a processor 210, a display 220, and a memory 230. However, the electronic device 10 is not limited thereto; it may further include other components, or some of the above components may be omitted. For example, the electronic device 10 may further include communication circuitry 240 to establish communication with the server 20.
In one embodiment, memory 230 may store various Operating Systems (OSs) for controlling electronic device 10, various software programs (or applications) that support user functions, and data and instructions for operating electronic device 10. At least some of these programs may be downloaded from an external server via wireless or wired communication. The memory 230 may be implemented using a nonvolatile memory, a volatile memory, a flash memory, a hard disk drive, or a Solid State Drive (SSD). Memory 230 is accessed by processor 210 and its data may be processed by operations of processor 210, such as reading, writing, modifying, deleting, and updating.
In one embodiment, the memory 230 may store instructions configured to obtain first information from a still image frame included in a moving image, obtain second information from the moving image, identify at least one image-related function by using at least one of the first information or the second information, and display at least one function execution object for executing the identified image-related function.
In one embodiment, the display 220 may display various content under the control of the processor 210. The display 220 of fig. 2 may include the display 11 of fig. 1. The display 220 may display an image (e.g., a moving image or a still image), text, and/or an execution screen of an application. When the display 220 is implemented as a touch screen display, the display 220 may also function as an input device in addition to an output device.
In one embodiment, the processor 210 may control the components of the electronic device 10 described above. For example, the processor 210 may use a plurality of applications stored in the memory 230 to obtain features of an image or to modify (or correct) an image.
In one embodiment, the processor 210 may copy programs stored in the memory 230 to random access memory (RAM) to perform various operations. Although the processor 210 is described as including only one CPU, the processor 210 may be implemented using multiple CPUs, digital signal processors (DSPs), or a system on a chip (SoC).
The processor 210 may be implemented using a DSP, a microprocessor, or a timing controller (TCON) that processes digital signals. The processor 210 may include, but is not limited to, a CPU, a microcontroller unit (MCU), a micro processing unit (MPU), a controller, an AP, a communication processor (CP), or an Advanced RISC Machine (ARM) processor, or a combination thereof. The processor 210 may also be implemented using a SoC or a large scale integration (LSI) chip embedded with a processing algorithm, or may be implemented using a field programmable gate array (FPGA).
In one embodiment, the processor 210 may be configured to obtain first information from a still image frame included in a moving image, obtain second information from the moving image, identify at least one image-related function by using at least one of the first information or the second information, and display at least one function execution object for executing the identified image-related function.
Referring to part 2-b of fig. 2, the server 20 may include a data acquirer 250, a data processor 260, and a data outputter 270.
In one embodiment, the data acquirer 250 may receive a moving image or a still image frame included in a moving image from an external device.
In one embodiment, the data processor 260 may obtain the first information from a still image frame of a moving image. The data processor 260 may obtain the second information from the moving image. The data processor 260 may identify the image-related function by using at least one of the first information or the second information.
In one embodiment, the data outputter 270 may send information regarding the identified image-related functions to an external device.
Fig. 3 is a screen display showing a case where an electronic device displays a function execution object based on an object recognition result according to an embodiment of the present disclosure.
Referring to section 3-a of fig. 3, the electronic device 10 may play a moving image 300 on the display 11.
In one embodiment, the electronic device 10 may display various objects (e.g., icons) for playback control on the display 11. For example, the electronic device 10 may display a progress bar 31 indicating the full play time and the current play point, a play stop object 33, a back object 35, or a forward object 37 on the display 11. The play stop object 33 may be displayed when the electronic device 10 plays a moving image, and the play start object 39 may be displayed when the play is paused or stopped.
In one embodiment, the electronic device 10 may pause the playing of the moving image 300 in response to an external input (or user input) for selecting the play stop object 33. In this case, the electronic apparatus 10 may pause the playback of the moving image 300 while displaying the still image frame 310 that is output on the display 11 when the external input for selecting the playback stop object 33 is received.
Referring to section 3-b of fig. 3, the electronic device 10 may pause the playing of the moving image 300 and display the still image frame 310 on the display 11 in response to a user input for selecting the play stop object 33. When the play of the moving image 300 is suspended, the electronic device 10 may display the play start object 39 on the display 11 instead of the play stop object 33.
In one embodiment, the electronic device 10 may obtain the first information or the second information when the play of the moving image is stopped. The electronic device 10 may obtain the first information from the still image frame 310 output on the display 11 and may obtain the second information from the moving image 300.
In one embodiment, the electronic device 10 may identify at least one image-related function by using at least one of the first information or the second information. The electronic device 10 may display at least one function execution object for executing the identified at least one image-related function on the display 11.
For example, the electronic device 10 may identify a "finger heart" gesture made by person A based on the scene recognition results of the still image frame 310. The electronic device 10 may display, on the display 11, an icon recommendation execution object (e.g., a tag) 320 that may recommend an icon corresponding to the recognized gesture.
In one embodiment, in response to user input for selecting the icon recommendation execution object 320, the electronic device 10 may execute the corresponding function. Referring to section 3-c of fig. 3, in response to a user input for selecting the icon recommendation execution object (e.g., a tag) 320, the electronic device 10 may display, on the display 11, an icon 330 that may be added to the still image frame. For example, the electronic device 10 may display the heart icon 330 on the display 11 based on the identified gesture being a heart shape. The electronic device 10 may add a user-selected icon to the still image frame and display the modified still image frame.
As described above, the electronic device 10 according to the embodiment of the present disclosure may provide the user with the function execution object that may add an icon related to the detected object to the still image frame or the moving image based on the object recognition result of the still image frame.
Fig. 4 is a screen display showing a case where the electronic apparatus displays a function execution object based on a motion recognition result of an object included in a still image according to an embodiment of the present disclosure.
Referring to portion 4-a of fig. 4, the electronic device 10 may play a moving image 400 on the display 11.
In one embodiment, the electronic device 10 may display various objects (or icons) for playback control on the display 11. For example, the electronic device 10 may display a progress bar 31 indicating the full play time and the current play point, a play stop object 33, a back object 35, or a forward object 37 on the display 11. The play stop object 33 may be displayed when the electronic device 10 plays a moving image, and the play start object 39 may be displayed when the play is paused or stopped.
In one embodiment, the electronic device 10 may pause the playing of the moving image 400 in response to an external input (or user input) for selecting the play stop object 33. In this case, the electronic apparatus 10 may pause the playback of the moving image 400 while displaying the still image frame 410 output on the display 11 upon receiving the external input for selecting the playback stop object 33.
Referring to section 4-b of fig. 4, the electronic device 10 may pause the playing of the moving image 400 and display the still image frame 410 on the display 11 in response to a user input for selecting the play stop object 33. When the play of the moving image 400 is suspended, the electronic device 10 may display the play start object 39 on the display 11 instead of the play stop object 33.
In one embodiment, the electronic device 10 may obtain the first information or the second information when the play of the moving image is stopped. The electronic device 10 may obtain the first information from the still image frame 410 output on the display 11 and may obtain the second information from the moving image 400. In one embodiment, the electronic device 10 may identify at least one image-related function by using at least one of the first information or the second information. The electronic device 10 may display at least one function execution object for executing the identified at least one image-related function on the display 11.
For example, the electronic device 10 may identify that person B is performing a continuous action based on a comparison between the still image frame 410 and at least one still image frame preceding the still image frame 410 or at least one still image frame following the still image frame 410. The electronic device 10 may display at least one function execution object according to the scene recognition result.
In portion 4-b of fig. 4, the electronic device 10 may display a Graphics Interchange Format (GIF) generation execution object (e.g., an automatic GIF) 420 on the display 11 that may generate a GIF file using a plurality of still image frames.
In one embodiment, in response to user input for selecting the GIF generation execution object (e.g., an automatic GIF) 420, the electronic device 10 may perform the corresponding function. For example, the electronic device 10 may generate the GIF file by using the still image frame 410 output on the display 11, two to four still image frames before the still image frame 410, and two to four still image frames after the still image frame 410. However, the number of still image frames extracted by the electronic device 10 to generate the GIF file is not limited thereto. For example, the electronic device 10 may determine the number of still image frames extracted for generating the GIF file based on the resolution of the moving image file, the codec information and the frame rate, and the resolution of the still image frames.
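As an illustration of this frame-selection step, the following Kotlin sketch picks the neighboring frames around the paused frame, assuming Android's MediaMetadataRetriever. The resolution-based frame-count heuristic is an assumption, and the GIF encoding itself is left to an external encoder, since the Android framework does not ship one:

```kotlin
// A sketch of the GIF frame-selection step; framesForGif is a hypothetical name.
import android.graphics.Bitmap
import android.media.MediaMetadataRetriever

fun framesForGif(videoPath: String, pausedMs: Long, fps: Float, width: Int): List<Bitmap> {
    val span = if (width >= 1920) 2 else 4      // frames on each side of the paused frame
    val stepUs = (1_000_000f / fps).toLong()    // assumes fps > 0
    val centerUs = pausedMs * 1000
    val retriever = MediaMetadataRetriever()
    return try {
        retriever.setDataSource(videoPath)
        (-span..span).mapNotNull { i ->
            retriever.getFrameAtTime(centerUs + i * stepUs, MediaMetadataRetriever.OPTION_CLOSEST)
        }
    } finally {
        retriever.release()
    }
}
```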
Referring to section 4-c of fig. 4, the electronic device 10 may display a GIF execution object 430 for playing the generated GIF file on the display 11. In response to user input for selecting the GIF execution object 430, the electronic device 10 may play the GIF file on a portion or all of the display 11.
As described above, the electronic apparatus 10 according to the embodiment of the present disclosure may provide the user with the function execution object that may generate the GIF file by using the scene recognition result of the plurality of still image frames and the information obtained from the moving image.
Fig. 5 is a screen display showing another case where the electronic apparatus displays a function execution object based on a motion recognition result of an object included in a still image according to an embodiment of the present disclosure.
The portion 5-a of fig. 5 corresponds to the portion 4-a of fig. 4, and a description thereof is omitted.
Referring to section 5-b of fig. 5, in response to a user input for selecting the play stop object 33, the electronic device 10 may pause the play of the moving image 500 and display a still image frame 510 on the display 11. When the play of the moving image 500 is suspended, the electronic device 10 may display the play start object 39 on the display 11 instead of the play stop object 33.
In one embodiment, the electronic device 10 may obtain the first information or the second information when the play of the moving image is stopped. The electronic device 10 may obtain the first information from the still image frame 510 output on the display 11 and may obtain the second information from the moving image 500. In one embodiment, the electronic device 10 may identify at least one image-related function by using at least one of the first information or the second information. The electronic device 10 may display at least one function execution object for executing the identified at least one image-related function on the display 11.
For example, the electronic device 10 may identify person B as the scene recognition result of the still image frame 510. The electronic device 10 may display an image generation execution object (e.g., an automatic clip) 520 on the display 11, and the image generation execution object 520 may generate a new moving image by extracting a plurality of still image frames in which the person B appears.
In one embodiment, in response to user input for selecting the image generation execution object (e.g., automatic clipping) 520, the electronic device 10 may perform the corresponding function. For example, the electronic apparatus 10 may generate a moving image file by using the still image frame 510 output on the display 11 and the still image frame of the moving image 500 in which the person B appears. The electronic device 10 may determine the number of still image frames extracted to generate a new moving image file based on the resolution of the moving image file, the codec information and the frame rate, and the resolution of the still image frames.
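A sketch of the person-filtered extraction might look as follows, again assuming Android's MediaMetadataRetriever; containsPerson is a hypothetical predicate standing in for the face-recognition step, and sampling every few hundred milliseconds keeps the scan inexpensive:

```kotlin
// A sketch of extracting the frames in which a given person appears;
// framesWithPerson and containsPerson are hypothetical names.
import android.graphics.Bitmap
import android.media.MediaMetadataRetriever

fun framesWithPerson(
    videoPath: String,
    durationMs: Long,
    sampleEveryMs: Long,
    containsPerson: (Bitmap) -> Boolean
): List<Bitmap> {
    val retriever = MediaMetadataRetriever()
    return try {
        retriever.setDataSource(videoPath)
        (0L until durationMs step sampleEveryMs).mapNotNull { t ->
            retriever.getFrameAtTime(t * 1000, MediaMetadataRetriever.OPTION_CLOSEST)
        }.filter(containsPerson)
    } finally {
        retriever.release()
    }
}
```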
In various embodiments, the electronic device 10 may generate a new moving image file by using a still image received from an external electronic device (e.g., a server). For example, the electronic device 10 may use communication circuitry (e.g., communication circuitry 240 in fig. 2) to establish communication with at least one server. The electronic device 10 may request the server to select a still image including the person B from among the stored moving images or still images. The electronic device 10 may receive a still image including the person B from the server to generate a new moving image file.
Referring to part 5-c of fig. 5, the electronic device 10 may display an image execution object 530 for playing the generated moving image file on the display 11. In response to a user input for selecting the image execution object 530, the electronic device 10 may play a moving image file on a part or all of the display 11.
As described above, the electronic apparatus 10 according to the embodiment of the present disclosure may provide the user with a function execution object that may generate a moving image file by using the human recognition result of the still image frame and the information obtained from the moving image.
Fig. 6 is a screen display showing a case where an electronic apparatus displays a function execution object based on a result of recognizing a natural landscape in a still image according to an embodiment of the present disclosure.
The portion 6-a of fig. 6 corresponds to the portion 4-a of fig. 4, and a description thereof is omitted.
Referring to section 6-b of fig. 6, the electronic device 10 may pause the playing of the moving image 600 and display the still image frame 610 on the display 11 in response to a user input for selecting the play stop object 33. When the play of the moving image 600 is suspended, the electronic device 10 may display the play start object 39 on the display 11 instead of the play stop object 33.
In one embodiment, the electronic device 10 may obtain the first information or the second information when the play of the moving image is stopped. The electronic device 10 may obtain the first information from the still image frame 610 output on the display 11 and may obtain the second information from the moving image 600. In one embodiment, the electronic device 10 may identify at least one image-related function by using at least one of the first information or the second information. The electronic device 10 may display at least one function execution object for executing the identified at least one image-related function on the display 11.
For example, as a result of the scene recognition of the still image frame 610, the electronic device 10 may recognize that the still image frame 610 is a landscape image. Based on the still image frame 610 being a landscape image, the electronic device 10 may display, on the display 11, a text input execution object (e.g., add text) 620 capable of adding text to the still image frame 610.
In one embodiment, in response to user input for selecting text input execution object (e.g., adding text) 620, electronic device 10 may perform the corresponding function. Referring to portion 6-c of fig. 6, the electronic device 10 may display a keyboard 630 for text entry on a portion of the display 11. The electronic device 10 may also display a text entry box 640 on another portion of the display 11.
In one embodiment, the electronic device 10 may add text input to the still image frame 610 being displayed based on user input. The electronic device 10 may change the position, size, or shape of the text input box 640 according to user input. The electronic device 10 may also change the font or size of text entered into the text input box 640 based on user input.
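For illustration, the entered text can be composited onto the displayed still image frame with Android's Canvas and Paint APIs; the position and styling values in this sketch are placeholders for what the text input box 640 would supply:

```kotlin
// A sketch of compositing user text onto a still image frame;
// addTextToFrame is a hypothetical name.
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint

fun addTextToFrame(frame: Bitmap, text: String, x: Float, y: Float): Bitmap {
    val out = frame.copy(Bitmap.Config.ARGB_8888, true)  // mutable copy of the frame
    val paint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        color = Color.WHITE
        textSize = 48f
    }
    Canvas(out).drawText(text, x, y, paint)
    return out
}
```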
As described above, the electronic device 10 according to the embodiment of the present disclosure may provide the user with the function execution object that may add text to the still image frame based on the scene recognition result of the still image frame.
Fig. 7 is a screen display showing a case where the electronic device displays a shortcut function execution object according to an embodiment of the present disclosure.
Referring to portion 7-a of fig. 7, the electronic device 10a may play a moving image 700 on the display 11a. The electronic device 10a may not display the various objects (or icons) for play control on the display 11a.
In one embodiment, the electronic device 10a may display the shortcut function execution object 720 on an area of the display 11 a.
For example, the electronic device 10a may display the shortcut function execution object 720 in the form of a translucent icon on the display 11 a. The electronic device 10a may move the shortcut function execution object 720 on the display 11a in response to a user input for touching and dragging the shortcut function execution object 720.
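A minimal sketch of such a translucent, draggable shortcut object, assuming a plain Android View placed in a full-screen layout, is shown below; tap handling (for opening the function execution objects) is omitted for brevity:

```kotlin
// A sketch of a translucent, draggable shortcut object; makeDraggable
// is a hypothetical name.
import android.annotation.SuppressLint
import android.view.MotionEvent
import android.view.View

@SuppressLint("ClickableViewAccessibility")
fun makeDraggable(shortcut: View) {
    shortcut.alpha = 0.5f  // translucent rendering, as in portion 7-a
    var dX = 0f
    var dY = 0f
    shortcut.setOnTouchListener { v, event ->
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN -> {
                dX = v.x - event.rawX
                dY = v.y - event.rawY
                true
            }
            MotionEvent.ACTION_MOVE -> {
                v.x = event.rawX + dX
                v.y = event.rawY + dY
                true
            }
            else -> false
        }
    }
}
```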
In one embodiment, in response to a user input for selecting the shortcut function execution object 720, the electronic device 10a may display at least one function execution object that may be provided to the user on the display 11 a.
Referring to part 7-b of fig. 7, in response to a user input for selecting the shortcut function execution object 720, the electronic device 10a may display function execution objects, such as a text input execution object 731 and an image generation execution object 733, on the display 11 a.
For example, in response to a user input for selecting the shortcut function execution object 720, the electronic device 10a may pause the play of the moving image 700 and display the still image frame 710 on the display 11 a. The electronic device 10a may obtain first information from the still image frame 710 output on the display 11a and may obtain second information from the moving image 700.
For example, the electronic device 10a may recognize the smartphone-shaped object 715 as an object recognition result of the still image frame 710. The electronic device 10a may display a text input execution object 731 on the display 11a, the text input execution object 731 allowing text input with respect to the identified smartphone-shaped object 715.
As another example, the electronic device 10a may identify the smartphone-shaped object 715 as an object identification result of the still image frame 710. The electronic device 10a may display an image generation execution object 733 on the display 11a, which image generation execution object 733 may generate a new moving image by extracting a plurality of still image frames in which the identified smartphone-shaped object 715 appears.
As described above, the electronic device 10a according to the embodiment of the present disclosure may display a plurality of function execution objects on the display 11a in a simplified form so as to minimize the phenomenon that contents output on the display 11a are hidden or blocked by the objects.
Fig. 8 is a flowchart of a process in which an electronic device displays a function execution object according to an embodiment of the present disclosure.
Referring to fig. 8, in operation 810, the electronic device 10 may obtain first information from a still image frame included in a moving image.
In one embodiment, the electronic device 10 may play a moving image on a display (e.g., display 11 in FIG. 1). In response to an external input, the electronic device 10 may stop playing of the moving image and display a still image frame on the display 11. For example, when playback of a moving image is stopped, the electronic device 10 may obtain first information from a still image frame output on the display 11.
The first information may include, for example, a scene recognition result of a still image frame output on the display 11. The scene recognition result may include, for example, a human recognition result that identifies a person included in the still image frame, an object recognition result that identifies a shape included in the still image frame, or a location recognition result that identifies a geographic region included in the still image frame.
The first information may also include, for example, a photographing time of a still image frame output on the display 11, or a result of comparison between the still image frame output on the display 11 and at least one still image frame preceding the still image frame output on the display 11 or at least one still image frame following the still image frame output on the display 11. The comparison result may indicate, for example, a movement of a person or object detected in the still image frame.
In operation 820, the electronic device 10 may obtain the second information from the moving image.
In one embodiment, the electronic device 10 may obtain the second information from the moving image when the play of the moving image is stopped. The second information may include, for example, information about a photographing time (e.g., photographing start time, photographing end time), a photographing position, a play duration, a photographing format, a file name, resolution, or a frame rate of a moving image. However, the second information is not limited thereto.
In operation 830, the electronic device 10 may identify at least one image-related function by using at least one of the first information or the second information.
For example, the electronic device 10 may identify an image generation function by using a human recognition result obtained from the first information and the resolution of the moving image obtained from the second information; the image generation function may generate a new moving image by extracting, from the moving image, still image frames in which the recognized person appears.
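The selection logic of operation 830 might then look as follows. This sketch reuses the two containers above; the mapping from recognition results to functions follows the associations recited later in claim 2, while the resolution threshold is an assumption added only for illustration.

```kotlin
enum class ImageFunction {
    IMAGE_CORRECTION, TEXT_INPUT, GIF_GENERATION, IMAGE_GENERATION, ICON_RECOMMENDATION
}

// Identify candidate image-related functions from the first and second information.
fun identifyFunctions(first: FirstInformation, second: SecondInformation): Set<ImageFunction> {
    val functions = mutableSetOf<ImageFunction>()
    when (first.scene) {
        is HumanResult -> functions += setOf(ImageFunction.IMAGE_CORRECTION, ImageFunction.IMAGE_GENERATION)
        is ObjectResult -> functions += ImageFunction.ICON_RECOMMENDATION
        is LocationResult -> functions += ImageFunction.TEXT_INPUT
        null -> Unit
    }
    if (first.motionDetected) {
        functions += setOf(ImageFunction.IMAGE_GENERATION, ImageFunction.GIF_GENERATION)
    }
    // Assumed metadata gate: skip new-clip generation for low-resolution sources.
    if (second.resolution.first < 1280) functions -= ImageFunction.IMAGE_GENERATION
    return functions
}
```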
In operation 840, the electronic device 10 may display at least one function execution object for executing the identified at least one image-related function.
For example, the electronic device 10 may display an image generation execution object for executing the image generation function on the display 11.
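Operation 840 then materializes one execution object per identified function, for example as follows (a hypothetical UI model, reusing `ImageFunction` from the sketch above):

```kotlin
// Hypothetical UI model: one tappable execution object per identified function.
data class FunctionExecutionObject(val label: String, val function: ImageFunction)

fun buildExecutionObjects(functions: Set<ImageFunction>): List<FunctionExecutionObject> =
    functions.map { FunctionExecutionObject(it.name.lowercase().replace('_', ' '), it) }
```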
Fig. 9 is a timing diagram of a process in which an electronic device cooperates with a server to display a function execution object according to an embodiment of the present disclosure.
Referring to fig. 9, the electronic device 10 may identify a still image frame included in a moving image in operation 910.
In one embodiment, while playing a moving image, the electronic device 10 may receive an external input for stopping playback. In response to the external input, the electronic device 10 may stop playback of the moving image while displaying a still image frame on a display (e.g., the display 11 in FIG. 1). When playback stops, the electronic device 10 may identify the still image frame output on the display 11 as the still image frame from which the first information is to be obtained.
In operation 920, the electronic device 10 may transmit the identified still image frame to the server 20.
For example, the electronic device 10 may send the identified still image frame, two to ten still image frames preceding it, and two to ten still image frames following it to the server 20. Alternatively, the electronic device 10 may send the moving image together with the identified still image frame to the server 20.
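The frame window sent to the server can be computed generically; a sketch, with the window half-width left as a parameter (the disclosure describes two to ten neighboring frames on each side):

```kotlin
// Select the paused frame plus up to `neighbors` frames on each side,
// clamped to the bounds of the clip.
fun <T> frameWindow(frames: List<T>, centerIndex: Int, neighbors: Int): List<T> {
    val from = (centerIndex - neighbors).coerceAtLeast(0)
    val to = (centerIndex + neighbors).coerceAtMost(frames.lastIndex)
    return frames.subList(from, to + 1)
}
```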
In operation 930, the server 20 may obtain first information from the received still image frame.
For example, the server 20 may obtain the first information by performing scene recognition on the received still image frames. Alternatively, the server 20 may detect a moving object or person as the first information by analyzing the received still image frame together with the two to ten still image frames before it and the two to ten still image frames after it.
In operation 940, the server 20 may transmit the obtained first information to the electronic device 10.
In operation 950, the electronic device 10 may obtain the second information from the moving image.
In one embodiment, when the server 20 receives the moving image from the electronic device 10, the server 20 may obtain the second information from the moving image and transmit the second information to the electronic device 10. In this case, the electronic device 10 may skip operation 950 of obtaining the second information from the moving image.
In operation 960, the electronic device 10 may identify at least one image-related function by using at least one of the first information or the second information.
In operation 970, the electronic device 10 may display at least one function execution object for executing the identified image-related function.
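Putting the Fig. 9 exchange together on the client side, and reusing the sketches above (the server interface is hypothetical; the disclosure does not define an API):

```kotlin
// Client-side flow for Fig. 9. When the server already returned the second
// information along with the first, local operation 950 is skipped.
fun onPlaybackStopped(
    window: List<Frame>,                                               // operations 910-920
    queryServer: (List<Frame>) -> Pair<FirstInformation, SecondInformation?>,
    obtainLocalSecondInfo: () -> SecondInformation                     // operation 950
): List<FunctionExecutionObject> {
    val (first, serverSecond) = queryServer(window)                    // operations 930-940
    val second = serverSecond ?: obtainLocalSecondInfo()               // skip 950 if supplied
    val functions = identifyFunctions(first, second)                   // operation 960
    return buildExecutionObjects(functions)                            // operation 970
}
```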
Fig. 10 is a block diagram of an electronic device in a network environment according to an embodiment of the present disclosure.
Referring to FIG. 10, an electronic device 1001 in a network environment 1000 may be implemented as the electronic device shown in FIG. 1. The electronic device 1001 may communicate with another electronic device 1002 via a first network 1098 (e.g., a short-range wireless communication network), or with an electronic device 1004 or a server 1008 via a second network 1099 (e.g., a remote wireless communication network). According to an embodiment, the electronic device 1001 may communicate with the electronic device 1004 via the server 1008. According to an embodiment, the electronic device 1001 may include a processor 1020, a memory 1030, an input device 1050, a sound output device 1055, a display device 1060, an audio module 1070, a sensor module 1076, an interface 1077, a connecting terminal 1078, a haptic module 1079, a camera module 1080, a power management module 1088, a battery 1089, a communication module 1090, a subscriber identity module (SIM) 1096, or an antenna module 1097. In some embodiments, at least one component (e.g., the display device 1060 or the camera module 1080) may be omitted from the electronic device 1001, or one or more other components may be added to the electronic device 1001. In some embodiments, some components may be implemented as a single integrated circuit. For example, the sensor module 1076 (e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be implemented as embedded in the display device 1060 (e.g., a display).
The processor 1020 may execute, for example, software (e.g., program 1040) to control at least one other component (e.g., hardware component or software component) of the electronic device 1001 coupled to the processor 1020, and may perform various data processing or calculations. According to one embodiment, as at least part of the data processing or calculation, the processor 1020 may load commands or data received from another component (e.g., the sensor module 1076 or the communication module 1090) into the volatile memory 1032, process the commands or data stored in the volatile memory 1032, and store the resulting data in the non-volatile memory 1034. According to an embodiment, the processor 1020 may include a main processor 1021 (e.g., a CPU or AP) and a secondary processor 1023 (e.g., a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a sensor hub processor, or CP), and the secondary processor 1023 may operate independently of the main processor 1021 or in conjunction with the main processor 1021. Additionally or alternatively, the secondary processor 1023 may be adapted to consume less power than the primary processor 1021, or to be dedicated to a particular function. The secondary processor 1023 may be implemented separately from the primary processor 1021 or as part of the primary processor 1021.
The secondary processor 1023 may control at least some functions or states associated with at least one of the components of the electronic device 1001 (e.g., the display device 1060, the sensor module 1076, or the communication module 1090) in place of the primary processor 1021 when the primary processor 1021 is in an inactive (e.g., sleep) state, or with the primary processor 1021 when the primary processor 1021 is in an active state (e.g., executing an application). According to an embodiment, the secondary processor 1023 (e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., a camera module 1080 or a communication module 1090) functionally related to the secondary processor 1023.
Memory 1030 may store various data used by at least one component of electronic device 1001 (e.g., processor 1020 or sensor module 1076). The various data may include, for example, input data or output data for the software (e.g., program 1040) and commands associated therewith. Memory 1030 may include volatile memory 1032 or nonvolatile memory 1034.
Programs 1040 may be stored as software in the memory 1030, and may include, for example, an OS 1042, middleware 1044, or applications 1046.
Input device 1050 may receive commands or data from outside of electronic device 1001 (e.g., a user) to be used by other components of electronic device 1001 (e.g., processor 1020). Input device 1050 may include, for example, a microphone, a mouse, a keyboard, or a digital pen (e.g., a stylus).
The sound output device 1055 may output sound signals to the outside of the electronic device 1001. The sound output device 1055 may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or playing recordings, and the receiver may be used for receiving incoming calls. Depending on the embodiment, the receiver may be implemented separately from the speaker or as part of the speaker.
The display device 1060 may visually provide information to an exterior (e.g., a user) of the electronic device 1001. The display device 1060 may include, for example, a display, a holographic device, or a projector, and control circuitry to control a corresponding one of the display, holographic device, and projector. According to an embodiment, the display device 1060 may include touch circuitry adapted to detect touches, or sensor circuitry (e.g., pressure sensors) adapted to measure the strength of forces caused by touches.
The audio module 1070 may convert sound to electrical signals and vice versa. According to an embodiment, the audio module 1070 may obtain sound via the input device 1050, or output sound via the sound output device 1055 or headphones of an external electronic device (e.g., electronic device 1002) coupled directly (e.g., wired) or wirelessly to the electronic device 1001.
The sensor module 1076 may detect an operational state (e.g., power or temperature) of the electronic device 1001 or an environmental state (e.g., a state of a user) external to the electronic device 1001 and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, the sensor module 1076 may include, for example, a gesture sensor, a gyroscope sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an Infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
The interface 1077 may support one or more specified protocols used by the electronic device 1001 to couple directly (e.g., wired) or wirelessly with an external electronic device (e.g., the electronic device 1002). Depending on the embodiment, the interface 1077 may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface.
The connecting terminal 1078 may include a connector via which the electronic device 1001 may be physically connected with an external electronic device (e.g., the electronic device 1002). According to an embodiment, the connecting terminal 1078 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector).
The haptic module 1079 may convert the electrical signal into mechanical stimulus (e.g., vibration or motion) or electrical stimulus that may be recognized by the user through his sense of touch or kinesthetic sense. According to an embodiment, the haptic module 1079 may include, for example, a motor, a piezoelectric element, or an electro-stimulator.
The camera module 1080 may capture still images or moving images. According to an embodiment, the camera module 1080 may include one or more lenses, an image sensor, an image signal processor, or a flash.
The power management module 1088 may manage power provided to the electronic device 1001. According to an embodiment, the power management module 1088 may be implemented as at least a portion of, for example, a Power Management Integrated Circuit (PMIC).
Battery 1089 may provide power to at least one component of electronic device 1001. According to an embodiment, battery 1089 may include, for example, a primary non-rechargeable battery, a secondary rechargeable battery, or a fuel cell.
The communication module 1090 may support establishment of a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device 1001 and an external electronic device (e.g., the electronic device 1002, the electronic device 1004, or the server 1008), and may perform communication via the established communication channel. The communication module 1090 may include one or more communication processors that are operable independently of the processor 1020 (e.g., an AP) and support direct (e.g., wired) or wireless communication. According to an embodiment, the communication module 1090 may include a wireless communication module 1092 (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module 1094 (e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network 1098 (e.g., a short-range communication network such as Bluetooth, wireless fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or the second network 1099 (e.g., a remote communication network such as a cellular network, the Internet, or a computer network (e.g., a LAN or wide area network (WAN))). These different types of communication modules may be implemented as a single component (e.g., a single chip) or as multiple components (e.g., multiple chips) separate from each other. The wireless communication module 1092 may use subscriber information (e.g., an international mobile subscriber identity (IMSI)) stored in the SIM 1096 to identify and authenticate the electronic device 1001 in a communication network (e.g., the first network 1098 or the second network 1099).
The antenna module 1097 may transmit or receive a signal or power to or from the outside of the electronic device 1001 (e.g., an external electronic device). According to an embodiment, the antenna module 1097 may include an antenna including a radiating element composed of a conductive material or a conductive pattern formed in or on a substrate (e.g., a printed circuit board (PCB)). According to an embodiment, the antenna module 1097 may include a plurality of antennas. In this case, the communication module 1090 (e.g., the wireless communication module 1092) may select, from the plurality of antennas, at least one antenna appropriate for a communication scheme used in a communication network (e.g., the first network 1098 or the second network 1099). The signal or power may then be transmitted or received between the communication module 1090 and the external electronic device via the selected at least one antenna. According to an embodiment, another component (e.g., a radio frequency integrated circuit (RFIC)) other than the radiating element may additionally be formed as part of the antenna module 1097.
At least some of the above components may be coupled to each other and communicate signals (e.g., commands or data) between them via an inter-peripheral communication scheme (e.g., a bus, general-purpose input/output (GPIO), serial peripheral interface (SPI), or mobile industry processor interface (MIPI)).
According to an embodiment, commands or data may be transmitted or received between the electronic device 1001 and the external electronic device 1004 via the server 1008 coupled to the second network 1099. Each of the electronic devices 1002 and 1004 may be a device of the same type as, or a different type from, the electronic device 1001. According to an embodiment, all or some of the operations to be performed at the electronic device 1001 may be performed at one or more of the external electronic devices 1002, 1004, or 1008. For example, if the electronic device 1001 is to perform a function or service automatically, or in response to a request from a user or another device, the electronic device 1001 may request one or more external electronic devices to perform at least part of the function or service instead of, or in addition to, performing the function or service itself. The one or more external electronic devices receiving the request may perform at least part of the requested function or service, or an additional function or service related to the request, and transfer a result of the performance to the electronic device 1001. The electronic device 1001 may provide the result, with or without further processing, as at least part of a reply to the request. To this end, cloud computing, distributed computing, or client-server computing technology may be used, for example.
Fig. 11 is a block diagram of a display device according to an embodiment of the present disclosure.
Referring to FIG. 11, a block diagram 1100 of an example display device 1060 is shown. The display device 1060 may include a display 1110 and a display driver integrated circuit (DDI) 1130 to control the display 1110. The display 1110 may include the display 11 of FIG. 1. The DDI 1130 may include an interface module 1131, a memory 1133 (e.g., a buffer memory), an image processing module 1135, or a mapping module 1137. The DDI 1130 may receive image information, including image data or an image control signal corresponding to a command to control the image data, from another component of the electronic device 1001 via the interface module 1131. For example, according to an embodiment, the image information may be received from the processor 1020 (e.g., the main processor 1021, such as an application processor) or the secondary processor 1023 (e.g., a graphics processing unit) that operates independently of the functionality of the main processor 1021. The DDI 1130 may communicate with the input device 1050 or the sensor module 1076 via the interface module 1131. The DDI 1130 may also store at least part of the received image information in the memory 1133, for example, on a frame-by-frame basis.
The image processing module 1135 may perform pre-processing or post-processing (e.g., resolution, brightness, or size adjustment) on at least a portion of the image data. According to an embodiment, the preprocessing or post-processing may be performed, for example, based at least in part on one or more features of the image data or one or more features of the display 1110.
The mapping module 1137 may generate voltage values or current values corresponding to the image data pre-or post-processed by the image processing module 1135. According to an embodiment, the generation of the voltage value or the current value may be performed, for example, based at least in part on one or more properties of the pixel (e.g., an array of pixels, such as red, green, blue (RGB) stripes or a layered structure, or a size of each sub-pixel). For example, at least some pixels of display 1110 may be driven based at least in part on the voltage values or the current values such that visual information (e.g., text, images, or icons) corresponding to the image data may be displayed via display 1110.
According to an embodiment, the display device 1060 may further include touch circuitry 1150. The touch circuit 1150 may include a touch sensor 1151 and a touch sensor integrated circuit 1153 that controls the touch sensor 1151. The touch sensor IC 1153 may control the touch sensor 1151 to sense a touch input or a hover input relative to a location on the display 1110. To achieve this, for example, touch sensor 1151 may detect (e.g., measure) a change in a signal (e.g., voltage, amount of light, resistance, or amount of one or more charges) corresponding to a location on display 1110. Touch circuitry 1150 may provide input information (e.g., location, area, pressure, or time) to processor 1020 indicative of touch input or hover input detected via touch sensor 1151. According to an embodiment, at least a portion of touch circuitry 1150 (e.g., touch sensor IC 1153) may be formed as part of display 1110 or DDI 1130, or as part of another component (e.g., auxiliary processor 1023) disposed external to display device 1060.
According to an embodiment, the display device 1060 may further include at least one sensor (e.g., a fingerprint sensor, an iris sensor, a pressure sensor, or an illuminance sensor) of the sensor module 1076, or a control circuit for the at least one sensor. In this case, the at least one sensor or the control circuit for the at least one sensor may be embedded in a portion of a component of the display device 1060 (e.g., the display 1110, the DDI 1130, or the input device 1050). For example, when the sensor module 1076 embedded in the display device 1060 includes a biometric sensor (e.g., a fingerprint sensor), the biometric sensor may obtain biometric information (e.g., a fingerprint image) corresponding to a touch input received via a portion of the display 1110. As another example, when the sensor module 1076 embedded in the display device 1060 includes a pressure sensor, the pressure sensor may obtain pressure information corresponding to a touch input received via a portion or the entire area of the display 1110. Depending on the embodiment, the touch sensor 1151 or the sensor module 1076 may be disposed between pixels in a pixel layer of the display 1110, or above or below the pixel layer.
As used herein, the term "module" may include a unit implemented in hardware, software, or firmware, and may be used interchangeably with other terms, for example, "logic," "logic block," "portion," or "circuit." A module may be a single integral component, or a minimal unit or portion thereof, adapted to perform one or more functions. For example, according to an embodiment, a module may be implemented in the form of an application-specific integrated circuit (ASIC).
The various embodiments set forth herein may be implemented as software (e.g., the program 1040) including one or more instructions stored on a storage medium (e.g., the internal memory 1036 or the external memory 1038) that is readable by a machine (e.g., the electronic device 1001). For example, a processor (e.g., the processor 1020) of the machine (e.g., the electronic device 1001) may invoke at least one of the one or more instructions stored in the storage medium and execute it, with or without one or more other components, under the control of the processor. This allows the machine to be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include code generated by a compiler or code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. The term "non-transitory" merely means that the storage medium is a tangible device and does not include signals (e.g., electromagnetic waves), but the term does not distinguish between a case where data is semi-permanently stored in the storage medium and a case where data is temporarily stored in the storage medium.
According to embodiments, methods according to certain embodiments of the present disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read-only memory (CD-ROM)), distributed online (e.g., downloaded or uploaded) via an application store (e.g., PlayStore™), or distributed directly between two user devices (e.g., smartphones). If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in a machine-readable storage medium, such as the memory of the manufacturer's server, a server of the application store, or a relay server.
In one embodiment, a computer program product may include a computer-readable storage medium containing one or more instructions configured to: obtaining first information from a still image frame included in a moving image; obtaining second information from the moving image; identifying at least one image-related function by using at least one of the first information or the second information; and displaying at least one function execution object for executing the identified image-related function on the display.
According to some embodiments, each of the above-described components (e.g., a module or program) may include a single entity or multiple entities. According to some embodiments, one or more of the above components may be omitted, or one or more other components may be added. Alternatively or additionally, multiple components (e.g., modules or programs) may be integrated into a single component. In this case, according to some embodiments, the integrated components may still perform one or more functions of each of the plurality of components in the same or similar manner as they were performed by a respective one of the plurality of components prior to integration. According to some embodiments, operations performed by a module, a program, or another component may be performed sequentially, in parallel, repeatedly, or heuristically, or one or more operations may be performed in a different order or omitted, or one or more other operations may be added.
While the present disclosure has been shown and described with reference to various embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents.
Claims (13)
1. An electronic device, the electronic device comprising:
a display;
at least one processor; and
at least one memory configured to store instructions that cause the at least one processor to:
control the display to display a first still image frame included in a first moving image,
obtain first information of a scene recognition result from the displayed first still image frame, the scene recognition result including a human recognition result that identifies a person included in the displayed first still image frame, an object recognition result that identifies a shape included in the displayed first still image frame, a location recognition result that identifies a geographic area included in the displayed first still image frame, or a motion recognition result of a person or object detected in the displayed first still image frame,
obtain second information from the first moving image,
identify at least one image function among a plurality of image functions related to the first moving image based on at least one of the first information of the scene recognition result or the second information, wherein the plurality of image functions include an image correction function, a text input function, a graphics interchange format generation function, an image generation function, and an icon recommendation function,
control the display to display at least one function execution object corresponding to the identified at least one image function, and
in response to detecting an input selecting a function execution object of the displayed at least one function execution object, execute an image function corresponding to the selected function execution object using the displayed first still image frame or the first moving image,
wherein the at least one function execution object includes a graphics interchange format generation execution object corresponding to the graphics interchange format generation function,
wherein the instructions are further configured to cause the at least one processor to:
control the display to play the first moving image;
stop playback of the first moving image at the displayed first still image frame in response to an external input for stopping playback;
identify that a subject of the displayed first still image frame is performing a continuous motion;
identify a plurality of image frames from the first moving image corresponding to the continuous motion; and
generate a second moving image based on the plurality of image frames and the displayed first still image frame.
2. The electronic device of claim 1, wherein the at least one function execution object further comprises at least one of an image correction execution object corresponding to the image correction function, a text input execution object corresponding to the text input function, an image generation execution object corresponding to the image generation function, or an icon recommendation execution object corresponding to the icon recommendation function,
wherein, when the first information about the human recognition result is obtained, the identified at least one image function includes the image correction function and the image generation function,
wherein, when the first information about the object recognition result is obtained, the identified at least one image function includes the icon recommendation function,
wherein, when the first information about the location recognition result is obtained, the identified at least one image function includes the text input function, and
wherein, when the first information about the motion recognition result is obtained, the identified at least one image function includes the image generation function and the graphics interchange format generation function.
3. The electronic device of claim 2, wherein the instructions are further configured to cause the at least one processor to:
apply the image correction function to the displayed first still image frame to generate a corrected image frame in response to an external input for selecting the image correction execution object, and control the display to display the corrected image frame;
apply the image correction function to the first moving image to generate a third moving image in response to an external input for selecting the image correction execution object, and control the display to display the third moving image;
control the display to display a text input box in response to an external input for selecting the text input execution object;
generate a graphics interchange format file based on the displayed first still image frame in response to an external input for selecting the graphics interchange format generation execution object;
extract a plurality of still image frames from the first moving image based on the first information in response to an external input for selecting the image generation execution object, and generate a fourth moving image based on the extracted plurality of still image frames; and
control the display to display one or more selectable icons in response to an external input for selecting the icon recommendation execution object.
4. The electronic device of claim 3, further comprising a communication circuit,
wherein the instructions are further configured to cause the at least one processor to, in response to an external input for selecting the image generation execution object:
control the communication circuit to send the displayed first still image frame to a server,
control the communication circuit to receive one or more still image frames selected by the server based on the displayed first still image frame, and
generate the fourth moving image based on the one or more still image frames and the displayed first still image frame.
5. The electronic device of claim 1, wherein the instructions are further configured to cause the at least one processor to control the display to display a shortcut function execution object associated with the function execution object in response to a user input.
6. The electronic device of claim 1, wherein the first information further comprises at least one of a photographing time of the displayed first still image frame, or a result of a comparison between the displayed first still image frame and a still image frame preceding or following the displayed first still image frame.
7. The electronic device of claim 1, wherein the second information comprises at least one of a shooting time, a shooting location, a play duration, a shooting format, a file name, a resolution, or a frame rate of the first moving image.
8. The electronic device of claim 2, wherein the instructions are further configured to cause the at least one processor to:
identify a non-verbal communication in the displayed first still image frame,
identify a plurality of graphical objects corresponding to the non-verbal communication,
display the plurality of graphical objects, and
generate, in response to an external input to a first graphical object of the plurality of graphical objects, a second still image frame including the first graphical object.
9. The electronic device of claim 1, wherein a number of the plurality of image frames is determined based on the second information.
10. The electronic device of claim 2, wherein the instructions are further configured to cause the at least one processor to:
identify a person in the displayed first still image frame,
identify a plurality of still image frames from the first moving image based on the identification of the person, and
generate the second moving image based on the plurality of still image frames and the displayed first still image frame.
11. A method for controlling an electronic device, the method comprising:
displaying a first still image frame included in a first moving image;
obtaining first information of a scene recognition result from the displayed first still image frame, the scene recognition result including one of a human recognition result that identifies a person included in the displayed first still image frame, an object recognition result that identifies a shape included in the displayed first still image frame, a location recognition result that identifies a geographic area included in the displayed first still image frame, or a motion recognition result of a person or object detected in the displayed first still image frame;
obtaining second information from the first moving image;
identifying at least one image function among a plurality of image functions related to the first moving image based on at least one of the first information of the scene recognition result or the second information, wherein the plurality of image functions include an image correction function, a text input function, a graphics interchange format generation function, an image generation function, and an icon recommendation function;
displaying at least one function execution object corresponding to the identified at least one image function; and
in response to detecting an input selecting a function execution object of the displayed at least one function execution object, executing an image function corresponding to the selected function execution object using the displayed first still image frame or the first moving image,
wherein the at least one function execution object includes a graphics interchange format generation execution object corresponding to the graphics interchange format generation function,
wherein the method further comprises:
playing the first moving image;
stopping playback of the first moving image at the displayed first still image frame in response to an external input for stopping playback;
identifying that a subject of the displayed first still image frame is performing a continuous motion;
identifying a plurality of image frames from the first moving image corresponding to the continuous motion; and
generating a second moving image based on the plurality of image frames and the displayed first still image frame.
12. The method of claim 11, wherein the at least one function execution object further comprises at least one of an image correction execution object corresponding to the image correction function, a text input execution object corresponding to the text input function, an image generation execution object corresponding to the image generation function, or an icon recommendation execution object corresponding to the icon recommendation function,
wherein, when the first information about the human recognition result is obtained, the identified at least one image function includes the image correction function and the image generation function,
wherein, when the first information about the object recognition result is obtained, the identified at least one image function includes the icon recommendation function,
wherein, when the first information about the location recognition result is obtained, the identified at least one image function includes the text input function, and
wherein, when the first information about the motion recognition result is obtained, the identified at least one image function includes the image generation function and the graphics interchange format generation function.
13. The method of claim 12, wherein performing the image function comprises:
applying the image correction function to the displayed first still image frame to generate a corrected still image frame in response to an external input for selecting the image correction execution object, and displaying the corrected still image frame;
displaying a text input box on a display in response to an external input for selecting the text input execution object;
generating a graphics interchange format file by using a plurality of still image frames based on the displayed first still image frame in response to an external input for selecting the graphics interchange format generation execution object; and
extracting, in response to an external input for selecting the image generation execution object, a plurality of still image frames from the first moving image based on the first information, and generating a third moving image.
Applications Claiming Priority (3)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| KR1020190039012A (KR102656963B1) | 2019-04-03 | 2019-04-03 | Electronic device and Method of controlling thereof |
| KR10-2019-0039012 | 2019-04-03 | | |
| PCT/KR2020/004399 (WO2020204572A1) | | 2020-03-31 | Electronic device and control method thereof |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN113994652A | 2022-01-28 |
| CN113994652B | 2024-08-30 |
Family
ID=72662407

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202080040734.6A (Active; granted as CN113994652B) | Electronic apparatus and control method thereof | 2019-04-03 | 2020-03-31 |
Country Status (5)

| Country | Link |
| --- | --- |
| US (2) | US11531701B2 (en) |
| EP (1) | EP3935821A4 (en) |
| KR (1) | KR102656963B1 (en) |
| CN (1) | CN113994652B (en) |
| WO (1) | WO2020204572A1 (en) |
Families Citing this family (1)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| KR102656963B1 (en) * | 2019-04-03 | 2024-04-16 | 삼성전자 주식회사 | Electronic device and Method of controlling thereof |
Citations (1)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN105046661A (en) * | 2015-07-02 | 2015-11-11 | 广东欧珀移动通信有限公司 | Method, apparatus and intelligent terminal for improving video beautification efficiency |
Also Published As

| Publication number | Publication date |
| --- | --- |
| US11907290B2 (en) | 2024-02-20 |
| KR102656963B1 (en) | 2024-04-16 |
| EP3935821A1 (en) | 2022-01-12 |
| KR20200117216A (en) | 2020-10-14 |
| US20230065006A1 (en) | 2023-03-02 |
| US20200320122A1 (en) | 2020-10-08 |
| WO2020204572A1 (en) | 2020-10-08 |
| US11531701B2 (en) | 2022-12-20 |
| EP3935821A4 (en) | 2022-09-28 |
| CN113994652A (en) | 2022-01-28 |
Similar Documents

| Publication | Title |
| --- | --- |
| EP3700184B1 (en) | Electronic device and method for changing magnification of image using multiple cameras |
| US11069323B2 (en) | Apparatus and method for driving display based on frequency operation cycle set differently according to frequency |
| US11138434B2 (en) | Electronic device for providing shooting mode based on virtual character and operation method thereof |
| US20160378190A1 (en) | Electronic device and method for providing haptic feedback thereof |
| US20210027513A1 (en) | Electronic device for providing avatar and operating method thereof |
| US10827125B2 (en) | Electronic device for playing video based on movement information and operating method thereof |
| US20220027009A1 (en) | Electronic device for processing input event and method of operating same |
| CN105427369A (en) | Mobile terminal and method for generating three-dimensional image of mobile terminal |
| US20180286089A1 (en) | Electronic device and method for providing colorable content |
| CN111684782B (en) | Electronic device and control method thereof |
| US11907290B2 (en) | Electronic device and control method thereof |
| US20190278393A1 (en) | Method for operating touch pad and electronic device for supporting same |
| US10551960B2 (en) | Input processing method and device |
| CN111512357B (en) | Electronic device including display driving circuit |
| US11538438B2 (en) | Electronic device and method for extending time interval during which upscaling is performed on basis of horizontal synchronization signal |
| US20190311670A1 (en) | Method for driving plurality of pixel lines and electronic device thereof |
| KR20200055980A (en) | Electronic device and method for providing multiple services respectively corresponding to multiple external objects included in image |
| CN111492422B (en) | Display driver circuit for synchronizing output timing of images in low power state |
| CN112567326A (en) | Method for processing dynamic image and electronic device thereof |
Legal Events

| Code | Title |
| --- | --- |
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |