FI20175539A1 - Arrangement and related method for provision of video items - Google Patents

Arrangement and related method for provision of video items

Info

Publication number
FI20175539A1
Authority
FI
Finland
Prior art keywords
video
arrangement
user
item
metadata
Prior art date
Application number
FI20175539A
Other languages
Finnish (fi)
Swedish (sv)
Other versions
FI129291B (en)
Inventor
Janne Neuvonen
Seppo Sormunen
Olli Rantula
Original Assignee
BCaster Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BCaster Oy
Priority to FI20175539A
Priority to EP18818244.8A
Priority to PCT/FI2018/050442
Priority to KR1020207000782A
Publication of FI20175539A1
Application granted
Publication of FI129291B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102 Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105 Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/11 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2353 Processing of additional data, e.g. scrambling of additional data or processing content descriptors specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/482 End-user interface for program selection
    • H04N21/4828 End-user interface for program selection for searching program descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84 Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8405 Generation or processing of descriptive data, e.g. content descriptors represented by keywords
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information

Abstract

An arrangement (100), optionally comprising one or more network accessible servers, for searching for digital video material, wherein the arrangement comprises at least one processor (102) configured to receive a number of video items; store the video items in a database; provide metadata, preferably a number of tags, for any or each video item, wherein said metadata is obtained through utilizing supplementary data provided by at least one user device (104) via which said video item has been obtained; receive a search query wherein at least one search criterion comprised in the search query is related to said metadata; determine video item search results matching the search query among said at least one video item stored in the database; and provide an output to a user comprising the video item search results, wherein the video item is related to time duration and said provided metadata comprises at least one quality indicator, which is determined through analysis of said supplementary data related to one or more subitems of the video item, wherein a subitem is related to a subduration thereof, and said at least one quality indicator is assigned to a corresponding subitem. A corresponding method is presented.

Description

ARRANGEMENT AND RELATED METHOD FOR PROVISION OF VIDEO ITEMS
TECHNICAL FIELD OF THE INVENTION
The present invention is related to provision and consumption of video material. More specifically, however not exclusively, the present invention provides an arrangement and method for searching for digital video items, such as video clips, wherein metadata is provided through utilization of data obtained from or via devices with which said video items have been captured. Search criteria may be related to such metadata.
BACKGROUND OF THE INVENTION
In the past years, advancements in camera technology and mobile devices have led to a substantial rise in the amount of video footage being recorded by mobile devices such as smartphones and tablets. Much of this footage is uploaded to social media sites through which users may search for interesting video footage using preferred search criteria. The search criteria are typically used to search matching metadata associated with video footage such as different video clips. Relevant metadata may be associated with a video clip through tagging, done by the recorder or the uploader of the video material.
Searching for video material taken at a specific event is usually rather difficult, as searches are typically conducted through specifying search words. The search words used may be related to the specific event, but video material taken at the event may not have been tagged using these words and is thus not found. Video material may also have been tagged erroneously by the tagger.
The vast amount of video material available may also lead to such a large number of search results that it may be practically impossible for the conductor of the search to review them.
Methods for organizing the search results may thus include relevance according to metadata based tagging, which is problematic due to reasons stated above, including lacking and erroneous (meta)data. Organizing may also be done through popularity based on the number of views or on ratings given by viewers, but this is again subjective and somewhat non-informative in many cases.
Traditional video search methods may also provide information regarding the quality of the videos. Still, this may be problematic and misleading, as video quality may vary drastically even within a certain video. The assessment of quality itself is often also rather difficult, because quality is a rather subjective measure: if the users of video items, such as uploaders or downloaders, may themselves rate the quality, the used scale may mean different things to different people, whereupon the issued quality ratings are easily mutually incomparable.
It is possible to combine videos taken at a specific event to form a compilation of e.g. videos taken from different angles or directions. However, tools for easily constructing such a compilation, e.g. upon need and with minimum or at least reduced necessary user input, are currently unavailable. Available tools require quite a bit of manual work and effort, special expertise around video editing, wading through different video clips for selection in the compilation, etc.
SUMMARY OF THE INVENTION
An object of the invention is to alleviate at least some of the problems relating to the known prior art.
The arrangement of the present invention is characterized by what is stated in attached claim 1. The method of the present invention is characterized by what is stated in attached claim 19.
The present invention offers a plethora of advantages over the known prior art, such as providing an intuitive, easy-to-use dynamic arrangement for searching for video items through preferably automatically generated metadata. As metadata is not specified by an uploader or recorder of the video item, errors, either intentional or accidental, may not be made by the uploader or recorder, and are thus avoided.
Utilizing an embodiment of the present invention, any video item recorded at a specific time or place that has been uploaded may be searched and downloaded from an associated system, such as a server arrangement optionally implemented utilizing a cloud computing environment, by using the time and place (of capture as indicated by supplementary data associated with the video item) as search criteria.
An arrangement may provide many different ways of displaying search results to a user of the arrangement. In an advantageous embodiment, the search results are provided to a user of the arrangement so that video quality may also be indicated through at least one quality indicating parameter, where such quality parameter, or ‘indicator’, has been advantageously determined as metadata through data obtained via sensors of the device through which the video material has been obtained, such as a smartphone, tablet, or other terminal device, or a camera device provided with or at least functionally connected to a communication device. This metadata may preferably be generated automatically by the arrangement. Utilizing the present invention, video material from an event may be conveniently searched for and found, with the additional benefit of providing search results indicative of video quality, in which case a user of the arrangement may choose not to view video items of poor quality. Quality or other metadata, such as tags, associated with a video item may be generated based on data acquired from sensors, such as image (e.g. the image sensor itself used for capturing the video), sound, position and/or acceleration/inertial sensors.
A video item may comprise one or more, nowadays typically digital, video files, or clips, or at least one or more video frames, which are related to a duration, which may be considered a temporal ‘length’ or ‘extent’, of the video item in question, being also indicative of the related recording, or ‘shooting’, duration. A video item may be perceived to be the result of substantially continuous recording having a certain frame rate, or sampling rate, typically but not necessarily falling within a range from at least about 10 or 15 Hz to e.g. about 25, 50, 100 Hz or higher. The rate may be static or dynamic within an item. Video quality may be advantageously assessed and preferably also indicated individually for a segment of the video or even separately for each video frame, giving the additional benefit of possibly indicating to a user if only part of a video is of good or bad quality. The video quality may then be assessed through analysis of two or more subitems of the video item, where the subitem is related to a subduration. Assessment and e.g. related tagging may take place at a recording device or e.g. the receiving (server) arrangement, or by a third party system. Quality may be assessed e.g. on a predetermined scale, so that a user may be aware of the degree of quality of e.g. segments of a video. Alternatively or additionally, an average quality of a video item may optionally be communicated to a user. The scale or quality in general may be based on e.g. frame rate, resolution, stability, focus, field of view, etc., evaluated utilizing e.g. some more widely adopted standard or a proprietary solution, and/or based on user-selectable settings.
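As a rough illustration of the item/subitem relationship described above, the following Python sketch models a video item whose duration is divided into subdurations, each carrying its own quality indicator; the class and field names are illustrative only and are not taken from the claimed arrangement.

    from dataclasses import dataclass, field

    @dataclass
    class SubItem:
        start_s: float                 # subduration start within the item, seconds
        end_s: float                   # subduration end, seconds
        quality: float | None = None   # quality indicator on an agreed scale, e.g. 0..1

    @dataclass
    class VideoItem:
        item_id: str
        duration_s: float              # temporal 'length' of the whole item
        frame_rate_hz: float           # e.g. roughly 10-100 Hz, static or dynamic
        subitems: list[SubItem] = field(default_factory=list)

        def average_quality(self) -> float | None:
            """Optional aggregate that may be communicated to a user
            alongside the per-subitem indicators."""
            scored = [s.quality for s in self.subitems if s.quality is not None]
            return sum(scored) / len(scored) if scored else None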
In some embodiments also other search criteria may be given that may be related to automatically generated metadata. In one embodiment, for instance, metadata obtained through image analysis may be utilized. In an exemplary use scenario, a user of the arrangement may have a desire to find video material taken at a specific event, such as a sports event in which a specific player is visible. This may be specified in the user’s search query as for example a player (sur)name or number, and in the search results the arrangement may then provide video items of the event in which the player in question is visible, where the player number has been detected through image analysis of the video footage, essentially involving e.g. pattern recognition, and has been added to the video metadata.
Related to the above example, in an embodiment the arrangement may also, if desired, produce a video compilation of the event in which e.g. a specific player or team is featured exclusively or with positive or negative emphasis, and video items in which the player or team is visible may be compiled, for instance. Also other material, for example audio material, may be integrated with the video material.
Video compilations may be generated through utilization of various types of automatically generated metadata. For example, time, a direction or angle in which the video has been taken with respect to an event, or certain objects being visible in the video footage may be taken into account. In an embodiment, video quality may have been assessed individually for subitems of a video item. A potential benefit may then be that video compilations with only high or sufficient quality video subitems according to e.g. user-determined criteria are created.
The benefits explained above may substantially decrease the amount of time that is required to perform video searches. Human labor and computational resources may be saved, as video compilations may be automatically generated even dynamically, in some cases substantially in real-time fashion, responsive to user input such as search queries including criteria given by a user.
In some embodiments, the arrangement may be utilized by private users, whereas in some other embodiments professional users such as reporters or news stations may obtain video material of an event for distribution, publishing or other purposes. Relevant and high-quality footage may be acquired swiftly and with ease through the arrangement. Also, time needed and expenses required may be reduced in the news production process, as reporters may not have to travel to a specific location in order to obtain video footage. A best angle or direction from which video footage is taken may for instance also be found through the arrangement.
Also, video material obtained even accidentally of an event, perhaps something happening unexpectedly, may be searched for. This feature may be utilized for example by law enforcement agencies to investigate possible criminal events or search for individuals suspected of committing a crime.
Further, e.g. fire brigades or other emergency units may exploit the available data in performing their duties.
Embodiments of the present invention may provide novel methods for news providers or any other private or corporate instance to purchase or generally acquire video content. Through an embodiment of the arrangement, a user may obtain access rights or exclusive rights to video content, and a method of conducting payment to the content uploader may be provided. If exclusive rights to content are purchased or otherwise issued, the content in question may be removed from the database.
The price of access or exclusive rights may in different embodiments be determined through e.g. auctions, or pricing may be determined automatically by the arrangement based on video content and/or quality analysis. Alternatively or in addition, also other parameters may affect pricing, such as demand or generally popularity of video items in question, which may be estimated based on e.g. available search term statistics.
Different user rights for an arrangement may also be provided along with different user profile categories. For instance, a user may obtain a user profile free of charge, and may be provided more limited possibilities to search for video content acquired only e.g. in predetermined locations. A fee, for example a monthly fee, may then be charged for user profiles in which available search locations or other search criteria are expanded.
Through embodiments of the invention, specific events, locations or instants may also be blocked, so that a group of users or any user may not have access to video material obtained at the event or location, or at a particular instant. The blocking may be temporary or permanent and may be determined manually (e.g. by the operator of the arrangement) or automatically based on information or triggers acquired through external sources. For instance, an accident site may be reported through an emergency response center (automatically or manually), whereupon the emergency site and optionally an adjacent region may be automatically blocked by the arrangement due to e.g. ethical considerations and/or may, alternatively, be set as a location to be specifically monitored.
In other aspects, the invention may provide means for uploading video items automatically to a server, optionally depending on the settings of the recording terminal or e.g. a user account associated therewith. All videos taken using a specific device may, if desired, be uploaded to a server automatically and without unnecessary delay. This way, video items may be available for search shortly after capture, optionally essentially in real time, and one does not need to remember to manually trigger uploading of video material. In one embodiment, an application for a mobile device may be provided, and all video material captured by the device while the application is running or set in some specific mode or state, e.g. active state, may be automatically uploaded to a server.
Automatic initial tagging of metadata done through e.g. image analysis methods may also be corrected, adjusted or otherwise altered using external data resources. In the case of a marathon, for instance, the arrangement may have mistakenly assigned a tag referring to a competitor number “26” to video footage, when the footage is actually of a competitor with the number “28”. Timing result records for the marathon may be provided, from which it may be possible in some cases to verify that the competitor “26” could not have been at the location in question at the time that the video footage was captured. In this case, the arrangement may determine that the assigning of metadata/tagging has originally been erroneous, and through image analysis, the associated tag may be revised to refer to competitor “28”.
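A minimal sketch of such a plausibility check is given below, assuming hypothetical helpers `estimated_position` (interpolating a competitor's position from timing records) and `distance_m` (metric distance between two positions); neither name comes from the patent itself.

    def verify_competitor_tag(tag_number, capture_time, capture_pos,
                              estimated_position, distance_m, max_error_m=200.0):
        """Cross-check an image-analysis tag against external timing records.
        Returns True if the tagged competitor could plausibly have been at the
        capture location at the capture time; False suggests the initial tag
        is erroneous and should be revised, e.g. via renewed image analysis."""
        expected = estimated_position(tag_number, capture_time)
        if expected is None:
            return True  # no external evidence either way; keep the tag
        return distance_m(expected, capture_pos) <= max_error_m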
Applications with user interfaces comprising various advantageous features may be provided for the devices of users of an arrangement. For instance, for a content providing user, a user interface (UI) may be provided through which video content may be easily captured. The user interface may comprise a multifunctional record button which may be manipulated to initiate or terminate video recording and additionally also communicate, in real time to the content providing user, information related to the video that is being recorded or may be recorded at the given instant, for example.
The multifunctional record button or similar UI feature may also be cleverly repositioned by the content providing user or automatically, for instance, to a desired location on a screen of a recording device. This ensures that the record button may at all times be located at an optimal position for the content providing user, where the optimal position may depend, for instance, on the handedness or length of fingers of the content providing user.
In an embodiment, the multifunctional record button may also be resized (enlarged, reduced, and/or otherwise redimensioned) automatically or by the content providing user. The multifunctional record button may then be set to a size according to a (user) preference or may be resized to ensure e.g. that the content providing user has the desired visibility of the screen of the recording device, in which case the size of the button should not exceed e.g. a selected limit of the screen or window size.
In one embodiment, using a user interface, a searching user may, after providing a search query with a time and location (e.g. an indication of a location-specific target event or a more explicit indication of a geographic location, such as an address, coordinates, or a city or region name), view on a map where in the vicinity of the given location videos have been captured at the provided time. The quality and/or point of view or direction from which the video has been shot, or a field of view of the videos, may also be communicated to the searching user. The user interface may comprise a timeline which may be navigated by the searching user, where manipulation of the timeline may result in the searching user being able to view, through the aforementioned map, changes with time in the provided information relating to videos that are available from the map region. In an embodiment, a user interface may additionally show one or more of the videos indicated in the map at the time which is given on the timeline. The videos may be played and the timeline may be allowed to proceed, while the map view changes accordingly. A searching user may also manipulate the timeline to view video frames and/or map configurations corresponding to a desired time in the vicinity of the time expressed through the search query. In some embodiments, the time and location may also be chosen or varied by the searching user after a search query has been given.
The claims and description do not pose limitations to the number of users, either searching or content providing, that may utilize the invention simultaneously or at different times. At a given time, any number of users may exist, or there may be one or no users. From a technical standpoint, the disclosed arrangement may be assigned, optionally dynamically e.g. from a cloud, the necessary hardware in terms of required processing, storage, and communications capacity for serving the users thereof.
The term “time” may herein refer to a date, an exact time (e.g. per second or fraction of a second) on a specific date, or a time interval, such as a time within a predetermined time limit from another time. The term “location” may refer to a specific location, such as coordinates, or a region such as a city, or a region which may be specified as residing within a predetermined distance from a specific location.
The term “plurality” refers herein to a number of two or more. The term “number” refers to one or more.
The exemplary embodiments presented in this text are not to be interpreted to pose limitations to the applicability of the appended claims. The verb “to comprise” is used in this text as an open limitation that does not exclude the existence of unrecited features. The features recited in dependent claims are mutually freely combinable unless otherwise explicitly stated.
The novel features which are considered as characteristic of the invention are set forth in particular in the appended claims. The invention itself, however, both as to its construction and its method of operation, together with additional objects and advantages thereof, will be best understood from the following description of specific example embodiments when read in connection with the accompanying drawings.
The previously presented considerations concerning the various embodiments of the arrangement may be flexibly applied to the embodiments of the method mutatis mutandis, and vice versa, as will be appreciated by a skilled person.
BRIEF DESCRIPTION OF THE DRAWINGS
Next the invention will be described in greater detail with reference to exemplary embodiments in accordance with the accompanying drawings, in which:
Figure 1 illustrates an exemplary arrangement according to one embodiment of the invention,
Figure 2 shows an exemplary application view of a first user interface according to an embodiment of the invention,
Figure 3 shows an exemplary application view of a second user interface according to an embodiment of the invention, and
Figure 4 depicts steps that may be executed in a method according to an embodiment of the invention.
DETAILED DESCRIPTION
Figure 1 illustrates an exemplary arrangement 100, optionally comprising one or more server computers and/or other network side elements implementing e.g. a network service for distribution of videos, according to one embodiment of the invention.
In more detail, the arrangement 100 comprises a processor 102, which may receive a video item from a first user device 104, which is a content providing user device, where the user device 104 may be a mobile device such as a mobile phone or a tablet. In some embodiments, the video item may be received from the device 104 as a stream, e.g. in substantially real-time fashion upon capturing, whereas in other embodiments, a ready-captured complete item such as a video file, or ‘clip’, may be received after the capturing phase. The processor 102 may indeed reside in at least one remote server, and the receiving of a video item may be realized utilizing e.g. at least partially wireless communication between the server or processor 102 and the user device 104. For storing data e.g. in at least one database 122, the server, or the arrangement 100 in general, may include memory, and for communication purposes, a communications interface such as a wireless or wired transceiver, e.g. a transceiver operable in a LAN (local area network) network. In practice, the arrangement 100 may be coupled via the interface to a communications network, such as the internet, whereto the device 104 also has access via a wireless link of a related wireless, optionally cellular or Wi-Fi type, network.
The first user device 104 may, in addition to a processor, memory and e.g. a communication interface (e.g. a wireless transceiver), comprise a plurality of sensors, at least a camera 106 (an image sensor capturing visible and/or invisible such as infrared frequencies; the latter could provide e.g. thermal images for temperature estimation and/or heat-based object/subject locating purposes) and additionally other sensors, such as a magnetometer 108, an accelerometer 110, a microphone 112, a GPS sensor 114, and/or a gyroscope 116 or still a further inertial sensor. Further applicable sensors include e.g. temperature, moisture and pressure sensors. Sensor data from sensors in the first user device 104 (the sensor data being acquired at the time of capturing the video item and associated with the capturing instant in the video item or the video item in general, possibly depending on the particular data item) may be received by the processor 102. Additionally, other data may be received by the processor 102 through the first user device 104, such as temporal data (capture time, duration, etc.) related to the video item.
An arrangement may additionally or alternatively in some embodiments comprise secondary content providing devices or secondary user devices 105.
These devices 105 external to the first user device 104 may comprise sensors that acquire and transmit data, e.g. as a stream or a ready-captured complete item (e.g. a file), to a first user device 104 through wired or wireless communication. The first user device 104 may then transmit the data (as such or in adapted form) and possibly other data, e.g., from sensors comprised in the first user device 104, to the processor 102.
As an example, a drone, other unmanned, optionally aerial, vehicle or generally some other type of a sensor/camera carrying (user) device may be used to supply video data and optionally additional data such as sensor data. The item may be transmitted to a first user device 104. Such data, thus originally potentially obtained from a connected video capturing device 105, may then be received by the processor 102 via the first user device 104.
In an embodiment, the arrangement 100 comprises a first user application 118 in connection with the first user device 104. The first user application 118 may provide a first user interface 120, which may enable recording of video content by the first user device 104 and provide information regarding the video content and/or the sensor data that is being acquired.
The video item received from the first user device 104 may be stored in a database 122, which the processor 102 may have access to, both of which may be comprised in a server apparatus.
The processor 102 may be configured to use the data received through a first user device 104 to perform automatic tagging (assigning e.g. a keyword or other descriptor that can be searched) of at least one received video item. Here, the data may refer to the data obtained through sensors of the first user device 104 and also additionally to other data that may be received from the first user device 104 in conjunction with the video item, such as data regarding the time that the video item was captured. In an advantageous embodiment, at least the time is added in a metadata file, field, portion or other data element associated with the video content, i.e., the video item is tagged with at least temporal information.
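A minimal sketch of such automatic tagging is given below; the record layout and the optional reverse-geocoding helper `coarse_location` are assumptions made for illustration, not details from the patent.

    import datetime as dt

    def tag_video_item(item_id: str, capture_time: dt.datetime,
                       sensor_data: dict, coarse_location=None) -> dict:
        """Build a searchable metadata record for a received video item.
        At minimum the item is tagged with temporal information; location
        and further sensor-derived tags are added when available."""
        tags = {"item_id": item_id, "capture_time": capture_time.isoformat()}
        gps = sensor_data.get("gps")          # e.g. (lat, lon) from a GPS sensor
        if gps is not None:
            tags["location"] = gps
            if coarse_location is not None:   # optional higher-level location data,
                tags["place"] = coarse_location(gps)  # e.g. address, district, city
        return tags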
The processor 102 may comprise or have access to a sensor data analysis module 124, which may be used to analyze the received video item or other data content and additionally may also be utilized in automatic tagging.
Alternatively or additionally, some tagging could take place at the user device 104. The sensor data analysis module may comprise means for analysis of different types of sensor data or metadata associated with a received video item.
Information from, e.g., a GPS sensor 114 may be used to determine a location in which a video item has been captured. In advantageous embodiments, in addition to temporal information, the received video material may be automatically tagged with information regarding data received through the GPS sensor 114, i.e., the location in which video content has been captured. Sensor based data, such as GPS data, may be included in a tag as such, and/or higher-level location data may be derived therefrom, e.g. address, landmark, district, building, city, country and/or event data (when combined with temporal/time data and schedule data regarding different events and their locations).
Sensor data from sensors such as a camera 106, a magnetometer 108, an accelerometer 110, and/or a gyroscope 116 may be analyzed by the sensor data analysis module 124 to create metadata and metadata based tags that are related to e.g. the field of view or viewing angle, or ‘watching direction’, of the captured video content.
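For instance, a shooting-direction tag could be derived from magnetometer headings sampled during capture; the sketch below computes a circular mean of such samples and is purely illustrative, not a method prescribed by the patent.

    import math

    def mean_heading_deg(headings_deg):
        """Circular mean of compass headings (degrees) sampled while
        recording; usable as a 'watching direction' tag for the item."""
        x = sum(math.cos(math.radians(h)) for h in headings_deg)
        y = sum(math.sin(math.radians(h)) for h in headings_deg)
        return math.degrees(math.atan2(y, x)) % 360.0

    # e.g. mean_heading_deg([358.0, 2.0, 1.0]) is close to 0.3 (roughly north)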
The sensor data may in some embodiments be used to determine metadata that is indicative of the quality of the video material. The arrangement 100 may, through the sensor data analysis module 124, determine, e.g., if the first user device 104 has been shaking during the capturing of the video material. This information may be utilized in tagging. In some embodiments, the aforementioned information may also be used to automatically enhance video quality by a stabilization algorithm, for instance. Additionally or alternatively, in some embodiments shaking may be detected directly from the captured video data based on image analysis.
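As one conceivable shake measure, and merely as a sketch under the assumption that accelerometer magnitude samples are collected during recording, the standard deviation of those samples could serve as a quality-related indicator:

    import statistics

    def shake_score(accel_magnitudes):
        """Crude shakiness estimate: the standard deviation of acceleration
        magnitude samples taken while recording. Higher values suggest a
        shakier, and hence lower-quality, video segment."""
        if len(accel_magnitudes) < 2:
            return 0.0
        return statistics.pstdev(accel_magnitudes)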
Other quality associated data or similar parameters, such as video resolution, brightness, and/or acutance (sharpness), may also be determined by an arrangement 100 through analysis of the video item and/or sensor data associated therewith. The parameters may be further analyzed to determine e.g. a characterizing value on a scale or some other criteria that may be employed to indicate the quality of a video item. The video item may then also be tagged with the respective quality information for search and/or compilation purposes.
Advantageously, one or more quality parameters may be evaluated for a video subitem in addition to or instead of general parameters regarding the item as a whole. For example, a video item may be related to a temporal duration, and may be divided into two or more subitems, which may be related to temporal subdurations. These subdurations may be mutually of equal or different length (duration). For example, the subitems may be video segments or video frames. In some embodiments, the device 104 or processor 102 may be configured to divide the video item into subitems based on e.g. recognized video (image) content, quality and/or other parameters of the underlying video or related sensor data. In some other embodiments, the subitems may be created automatically responsive to user input; for example, each recording pause instructed by the user via the UI may translate into switching over to a new subitem. The subitems may be determined as having discrete or overlapping durations relative to the overall item. In an embodiment, each of the video subitems is assigned at least one quality parameter.
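The sketch below, reusing the SubItem class from the earlier sketch, divides an item's duration into fixed-length, non-overlapping subdurations and assigns each one a quality indicator computed from the sensor samples falling within it; the fixed window length and the pluggable quality_fn are illustrative assumptions.

    def split_duration(duration_s, sub_len_s=5.0):
        """Consecutive, non-overlapping (start, end) subdurations covering
        the item; the last one may be shorter than sub_len_s."""
        bounds, t = [], 0.0
        while t < duration_s:
            bounds.append((t, min(t + sub_len_s, duration_s)))
            t += sub_len_s
        return bounds

    def assign_subitem_quality(item, samples, quality_fn, sub_len_s=5.0):
        """samples: (timestamp_s, value) pairs aligned to the item start;
        quality_fn maps the values inside one subduration to an indicator."""
        item.subitems = [
            SubItem(start, end,
                    quality_fn([v for t, v in samples if start <= t < end]))
            for start, end in split_duration(item.duration_s, sub_len_s)
        ]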
In view of the foregoing, the sensor data analysis module 124 may comprise means for image analysis, which may utilize various methods of computer vision and pattern recognition. The image analysis may be utilized to detect, classify and/or identify e.g. objects appearing in video content, and may also be capable of more complicated image analysis, such as facial recognition.
Accordingly, related metadata such as tags may be associated with the video item. The image analysis may additionally comprise methods where also other sensor data, such as auditory data from a microphone 112, is utilized to analyze complex events and provide information on semantic content that may be detected or determined through video material and possible other available sensor data.
In one embodiment, an arrangement 100 may comprise a video compilation module 126, which may combine two or more video items or subitems of one or more video items to create a compiled video. The video items may be received from one or several different first user devices 104 in the possession of one or more users. The video compilation module 126 may be utilized to create a compiled video of, e.g., an event, where only video items or subitems of desired (user-selected) kind, e.g. video items or subitems that have quality criteria matching or exceeding a certain limit, may be used or at least preferred over other items/subitems. Compiled videos where an event or object of interest may be viewed or shown from various different angles may also be created. A user requesting the compilation may preferably define characteristics of the compilation to be established based on available video data, said characteristics possibly including quality characteristics, temporal characteristics (e.g. may the compilation have temporally overlapping portions, or should it be temporally more linear or strictly chronological), angle of view/shooting direction/shooting location related preferences, target object or person preferences, and use of slow-motion or other effects (e.g. transitional effects) related preferences, etc.
The video compilation module 126 thereby enables compiling videos from an event with various video items or subitems featuring an entity that may be defined through image analysis. As an example, a compilation video from a certain sports event may feature a certain player, or a compilation video from a certain cultural event such as a concert may feature a certain artist.
In the video compilation module 126, the creation of compilation videos may be further enabled through the use of the sensor data analysis module 124. Video subitems, such as segments or frames, may be analyzed separately, and a high-quality compilation video featuring an event or an entity of interest may be generated where video subitems appear in chronological order, as temporal information is also utilized in the arrangement 100.
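A possible selection step for such a chronological compilation is sketched below, assuming each item additionally carries an absolute capture start time (capture_time_s, an illustrative field) alongside the per-subitem quality indicators discussed above:

    def select_chronological(items, min_quality=0.6):
        """Pick subitems meeting a quality threshold from all source items
        and order them by absolute start time, yielding a strictly
        chronological cut list for the compilation."""
        picked = []
        for item in items:
            for sub in item.subitems:
                if sub.quality is not None and sub.quality >= min_quality:
                    picked.append((item.capture_time_s + sub.start_s,
                                   item.item_id, sub))
        picked.sort(key=lambda entry: entry[0])
        return picked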
The video compilation module 126 may incorporate into a compilation video data from a plurality of video (image) sensors comprised in a first user device 104. Alternatively or additionally, data from some other data source of the device 104 or a further device may be integrated with a compilation video. For instance, if a video compilation from an event such as a concert, other cultural event or a sports event is created, sensor data such as sound from a microphone 112 may be added as a sound track to the video. The microphone 112 or other sensor data may originate from the same first user device 104 that has been used to capture the video content or another first user device 104. Also external data, such as an external sound track, may be added.
In other instances, an arrangement 100 may, through the video compilation module 126 and image analysis, determine semantic content or context of a video and automatically generate a compilation video and optionally also include external data pertinent to the detected context. For instance, an arrangement 100 may identify events or situations happening in a video or videos, for instance that the video content has been captured during a wedding, and add an appropriate sound track tagged as wedding-type music.
In an advantageous embodiment, a processor 102 may receive, possibly through a second user device 128 utilizing a second user application 130 and a second user interface 132, a search query for a video or compilation. The search query may cause the processor 102 to conduct a search and identify video items stored in the database 122 that match the search query. The processor 102 may then communicate to a user of the second user device 128 the results of the search through providing an output and facilitate access of said user to the video item search results.
In one embodiment of the present invention, a search query comprises at least a time and/or a location. Any number of other search criteria may also be added. Advantageously, at least one of the search criteria is related to the metadata or associated tags that have been received or created by an arrangement 100. A search query may also indicate e.g. the name of an event such as a concert or sports event.
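A bare-bones matching predicate for such temporospatial queries might look as follows; the tag keys and the query layout (a time window plus a centre/radius location criterion) are assumptions made for the example, not structures defined by the patent.

    import math

    def haversine_m(a, b):
        """Great-circle distance in metres between two (lat, lon) pairs in degrees."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000 * math.asin(math.sqrt(h))

    def matches_query(tags, query):
        """True if a stored metadata record satisfies a query holding a time
        window (t_from, t_to) and a location criterion (centre, radius_m)."""
        t = tags.get("capture_time_s")
        if t is None or not (query["t_from"] <= t <= query["t_to"]):
            return False
        loc = tags.get("location")
        return loc is not None and haversine_m(loc, query["centre"]) <= query["radius_m"]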
Through the second user interface 132, the output provided by the arrangement 100 may be indicative of characteristics such as the quality of the video item search results. The quality may be related to quality parameters obtained through analysis of the at least one video item and/or other sensor data.
Figure 2 shows two views, 2A and 2B, of an exemplary application view 200 in a depiction of a first user interface 120, which may be provided through a first user application 118, the first user interface 120 being intended for use with a first user device 104. The first user interface 120 may comprise a multifunctional record feature essentially in the form or shape of e.g. a button 202, which may comprise an outer portion such as an outer rim 204. In the embodiment of Fig. 2, the multifunctional record button 202 is shown on a screen of the first user device 104 and thereby implemented as a displayed touch-sensitive icon, whereupon pressing the multifunctional record button 202 may result in initiating or terminating recording of video content with the first user device 104. In some embodiments, initiation or termination of video recording may be done through some other method, such as auditory methods based on the analysis of microphone-captured sound data such as voice commands.
In an embodiment, the outer rim 204 may visually indicate to a content providing user of the first user device 104 information regarding the video content that is being recorded by the first user device 104 or may be recorded at that particular instant in time. The information may be related to e.g. video quality. For example, a color or pattern may be presented on the outer rim 204 which may be indicative of video quality and/or some other video related characteristics or information.
In one embodiment, the first user application 118 may communicate to a user of the first user device 104 information related to video content through haptic feedback initiated through the first user device 104. For instance, if a video is being recorded by the first user device 104 and the first user device 104 is tilted during the recording, haptic feedback via e.g. a vibration element may inform the user of the first device 104 of the tilting. This may be advantageous e.g. in a situation where video content is being recorded by a handheld device such as a mobile phone and a user of the device is not constantly looking at the device or the video content that is being recorded, and may want to focus their attention or gaze on something else, while still obtaining video material of high quality.
In Figure 3, an exemplary depiction of an application view 300 of a second user interface 132 is shown. Figure 3 specifically illustrates a possible schematic appearance of an application view that may comprise an output that may be provided by an arrangement 100 in response to a search query.
The application view 300 may comprise e.g. a map or other location-indicating view 302. The view 302 may show an area representative of a geographical location that is specified through the search query, or show an area that comprises the location explicitly (e.g. location name, address, coordinates, etc.) or implicitly (e.g. event) specified in the search query. The area to be shown may be predetermined as to one or more related parameters such as coverage. For example, it may be specified that the map view 302 initially shows an area of one square kilometer. In a preferred embodiment, the searching user may manipulate the map view 302 via the UI in order to navigate surrounding areas and/or e.g. zoom in and out.
The map view 302 may visually indicate geographical points corresponding to locations where at least one first user device 104 has captured video content at a specific time. This may be done by e.g. placing points, circles or other pointers 304 on the map view 302.
The map view 302 may also visually indicate the quality of the video material. In an embodiment, a quality parameter relating to the video subitem in question may be indicated to a user through e.g. a color or pattern which may be displayed on the map view 302. Also other indicators may be used, such as a percentage or a number, for instance.
The field of view 306, shooting direction and/or similar characteristics of a video item at a specific time may also be visually and preferably dynamically (i.e. changing in accordance with e.g. the shown video 308) indicated on the map view 302, which may also comprise the indication of a quality parameter as discussed above. Such an implementation is indeed shown in Fig. 3.
The application view 300 may in one embodiment also show a preview 308 of one or more video items. A preview may be shown for one or more video items indicated in the map view 302. The video items for which previews are currently provided may be selected by the arrangement 100 according to some criterion or may be selected by a searching user through, e.g., clicking on the circles 304 or other visual indicators associated with video items.
An application view 300 may additionally comprise a timeline 310, through which a searching user may navigate the video items that are depicted on the map view 302. A searching user may, for instance, manipulate the timeline to directly or gradually proceed to a certain time, and the map view 302 and preview(s) 308 may change accordingly. A timeline may also be allowed to proceed in real time and the preview(s) 308 may play the selected video item(s) accordingly, while the map view 302 may also change accordingly.
Other ways of manipulating the timeline 310 may also be made possible, such as e.g. “fast forwarding”, or proceeding on the timeline in a time scale that is faster than real time with the preview(s) 308 and map view 302 changing accordingly. The timeline may also be navigated in the opposite direction, i.e., one may move backwards in time on the timeline.
Because of the possibly vast amount of available video material, search criteria may yield a number of video item search results that is too large for practical considerations based on e.g. user-defined settings. In an embodiment of the present invention, it may be possible to filter search results for example through an area density filter. It may be specified, for instance, that an output may give only a certain number of video items captured within a certain area or essentially location. E.g., it may be specified that only one video item per five square meters is to be shown in the output, where the shown video items are those meeting the search criteria which exhibit the highest quality and/or other criterion, which may advantageously be user adjustable. Other specifications may also be made to filter search results, such as showing in the output only video items with a specific field of view or shooting direction. Generally, filtering may in different embodiments be done when configuring the second user application 130 and second user interface 132, or it may be done by a searching user e.g. through the second user interface 132.
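One conceivable realization of such an area density filter is a simple grid bucketing step, sketched below under the assumption that each result carries lat, lon and quality fields; longitude scaling by latitude is ignored for brevity.

    DEG_PER_M = 1.0 / 111_320.0   # approx. degrees of latitude per metre

    def density_filter(results, cell_m=5.0):
        """Keep at most one result per grid cell roughly cell_m metres wide,
        preferring the highest-quality result in each cell."""
        best = {}
        cell_deg = cell_m * DEG_PER_M
        for r in results:   # each r assumed to be a dict with lat, lon, quality
            key = (round(r["lat"] / cell_deg), round(r["lon"] / cell_deg))
            if key not in best or r["quality"] > best[key]["quality"]:
                best[key] = r
        return list(best.values())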
Figure 4 shows steps that may be performed in a method according to an embodiment of the present invention. In 402, one or more video items are received from a number of first user devices, whereby the video items are stored in a database 122 in 404. Supplementary data is received in 406, the data being provided through the device with which said at least one video item has been recorded. In a preferred embodiment, the data may be provided by one or more sensors comprised in the first user device(s) 104. The data may also comprise temporal data (time, timecode, clock, sync, etc.) provided through the first user device(s) 104.
In an advantageous embodiment, the one or more video items relating to an initial time duration are divided into a plurality of discrete or overlapping subitems related to corresponding subdurations in 408, as contemplated hereinbefore.
In step 410, metadata is provided, preferably in connection with video items or even specific subitems (e.g. subitem-specific dedicated quality indicators), the metadata being obtained through utilizing the received supplementary data. The metadata may be created by the sensor data analysis module 124.
In some embodiments, metadata may be assigned to (associated with) one or more video items and/or related subitems as related searchable and preferably also inspectable tags. E.g., one or more quality indicating parameters may be assigned to each video subitem to enable quality-based searches.
A search query may be received in 412, and video item search results matching the search query may be determined in 414, after which, in step 416, an output indicative of the video item search results may be provided.
In some embodiments, the output may include at least a listing of the video items of the results group, which may contain at least one video item. In some embodiments, the output may include the actual video item(s).
In some embodiments, a video compilation, typically based on a plurality of originally discrete video items fulfilling the user-defined criteria such as temporospatial criteria (time, location, event, etc.), may be established either fully automatically or in a user-assisted fashion. The compilation may be provided as the outputted video item. The compilation may comprise a single video stream or a video file constructed from the constituent (source) video items preferably best fulfilling the criteria, for example.
The invention has been explained above with reference to the aforementioned embodiments, and several advantages of the invention have been demonstrated. It is clear that the invention is not only restricted to these embodiments, but comprises all possible embodiments within the spirit and scope of inventive thought and the following patent claims.
The features recited in dependent claims are mutually freely combinable unless otherwise explicitly stated.

Claims (18)

1. An arrangement (100), optionally comprising one or more network accessible servers, for searching for digital video material, wherein the arrangement comprises at least one processor (102) configured to
- receive a number of video items;
- store the video items in a database;
- provide metadata, preferably a number of tags, for any or each video item wherein said metadata is obtained through utilizing supplementary data provided by at least one user device (104) via which said video item has been respectively obtained, said user device comprising e.g. a user terminal capable of autonomous video capturing and/or of receiving and forwarding video data from a connected video capturing device (105) such as a drone;
- receive a search query wherein at least one search criterion comprised in the search query is related to said metadata;
- determine video item search results matching the search query among said at least one video item stored in the database; and
- provide an output to a user comprising the video item search results, wherein the video item is related to time duration and said provided metadata comprises at least one quality indicator, which is determined through analysis of said supplementary data related to one or more subitems of the video item, wherein a subitem is related to a subduration thereof, and said at least one quality indicator is assigned to a corresponding subitem.
2. The arrangement of claim 1, wherein the video item is divided into a plurality of subdurations, giving a plurality of subitems, and at least one quality indicator is determined and assigned to each of said subitems.
3. The arrangement of claim 1 or 2, wherein the output is indicative of said at least one quality indicator.
4. The arrangement of any previous claim, wherein at least part of the metadata is obtained through utilizing data provided by one or more sensors, at least one of said sensors being selected from the group consisting of: a camera, a thermal imaging camera, an image sensor, a magnetometer, an accelerometer, a microphone, a location sensor, a satellite positioning based location sensor, a GPS sensor, a pressure sensor, a temperature sensor, a moisture sensor, and a gyroscope.
5. The arrangement of any previous claim, wherein the metadata comprises indication of time and/or location, which is obtained via the user device.
6. The arrangement of any previous claim, wherein the search query comprises at least a time and a location as search criteria.
7. The arrangement of any previous claim, wherein the output comprises a selected, optionally user-selected, number of video item search results within a selected, optionally user-selected, location range, where all video item search results within the location range are arranged according to a selected criterion.
8. The arrangement of claim 7, wherein the criterion is related to video quality.
9. The arrangement of any previous claim, configured to determine, for the video item, metadata such as a tag indicative of an event, optionally a cultural, sports or party event, associated with the item.
10. The arrangement of claim 9, wherein the event indicating metadata is determined based on the video image data of the item, time data associated
with the item, location data associated with the item, microphone-based sound data, and/or other sensor data.
11. The arrangement of claim 9 or 10, wherein the event indicating metadata is determined based on event information characterizing the locations and times of at least one event.
12. The arrangement of any previous claim, configured to provide a map (302) comprising the location indicated in the search query and a timeline (310) comprising the time indicated in the search query, wherein the map
indicates a number of points (304) where a corresponding number of video items have been captured at a time corresponding to the time indicated on the timeline.
13. The arrangement of claim 12, wherein the output (306) provides information concerning the angle of view, shooting direction and/or direction from which video material of a video item has been captured.
14. The arrangement of any previous claim, configured to provide a compilation selectively constructed from two or more video items taking into account the search query in selecting the video items and/or in determining their contribution to the compilation.
15. The arrangement of any previous claim, comprising said at least one user device, provided with a user interface (200), wherein the user interface comprises a multifunctional record-button (202, 204), preferably as an on-screen symbol, the record-button being indicative of the quality of the video material being recorded at a certain time.
16. The arrangement of claim 15, wherein the user interface is configured to enable repositioning of the record-button by a user of the user device.
17. The arrangement of any previous claim wherein the arrangement indicates, via haptic feedback to a user of the user device, a change
occurring in the captured video content.
18. The arrangement of claim 17, wherein the change is related to the angle of view or shooting direction regarding a camera of the capturing device and video content captured therewith.
19. A method for searching for video material, comprising:
- receiving a number of video items (402),
- storing the video items in a memory (404), and
- providing metadata (410) for any or each video item, wherein said metadata is obtained through utilizing supplementary data (406) provided by at least one user device via which said video item has been obtained, wherein the video item is related to time duration and said provided metadata comprises at least one quality indicator, which is determined through analysis of said supplementary data related to one or more subitems (408) of the video item, wherein a subitem is related to a subduration thereof, and said at least one quality indicator is assigned to a corresponding subitem.
20. The method of claim 19, further comprising: receiving a search query (412) wherein at least one search criterion comprised in the search query is related to said metadata, determining video item search results (414) matching the search query, and providing an output (416) to a user comprising the video item search results.
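Purely as an illustrative sketch of the direction-change feedback recited in claims 17 and 18 above, and not code from the application: consecutive shooting-direction headings, e.g. derived from magnetometer or gyroscope data, can be compared and a haptic cue fired when the change exceeds a threshold. Here trigger_haptic stands in for a hypothetical platform-specific vibration hook.

```python
def heading_change_deg(prev_heading: float, new_heading: float) -> float:
    """Smallest signed difference between two compass headings, in degrees."""
    return (new_heading - prev_heading + 180.0) % 360.0 - 180.0

def maybe_signal_direction_change(prev_heading: float, new_heading: float,
                                  threshold_deg: float = 20.0,
                                  trigger_haptic=lambda: None) -> bool:
    """Fire the (hypothetical) haptic feedback hook of the user device when
    the shooting direction shifts past the threshold; returns True if fired."""
    if abs(heading_change_deg(prev_heading, new_heading)) >= threshold_deg:
        trigger_haptic()
        return True
    return False
```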
FI20175539A 2017-06-12 2017-06-12 Arrangement and related method for provision of video items FI129291B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
FI20175539A FI129291B (en) 2017-06-12 2017-06-12 Arrangement and related method for provision of video items
EP18818244.8A EP3639164A4 (en) 2017-06-12 2018-06-12 Arrangement and related method for provision of video items
PCT/FI2018/050442 WO2018229332A1 (en) 2017-06-12 2018-06-12 Arrangement and related method for provision of video items
KR1020207000782A KR20200017466A (en) 2017-06-12 2018-06-12 Apparatus and associated method for providing video items

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FI20175539A FI129291B (en) 2017-06-12 2017-06-12 Arrangement and related method for provision of video items

Publications (2)

Publication Number Publication Date
FI20175539A1 (en) 2018-12-13
FI129291B FI129291B (en) 2021-11-15

Family

ID=64660044

Family Applications (1)

Application Number Title Priority Date Filing Date
FI20175539A FI129291B (en) 2017-06-12 2017-06-12 Arrangement and related method for provision of video items

Country Status (4)

Country Link
EP (1) EP3639164A4 (en)
KR (1) KR20200017466A (en)
FI (1) FI129291B (en)
WO (1) WO2018229332A1 (en)

Also Published As

Publication number Publication date
EP3639164A4 (en) 2021-01-20
FI129291B (en) 2021-11-15
KR20200017466A (en) 2020-02-18
EP3639164A1 (en) 2020-04-22
WO2018229332A1 (en) 2018-12-20

Similar Documents

Publication Publication Date Title
US20180357316A1 (en) Arrangement and related method for provision of video items
US10559324B2 (en) Media identifier generation for camera-captured media
US10084961B2 (en) Automatic generation of video from spherical content using audio/visual analysis
US11381744B2 (en) Video processing method, device and image system
US9747502B2 (en) Systems and methods for automated cloud-based analytics for surveillance systems with unmanned aerial devices
US7594177B2 (en) System and method for video browsing using a cluster index
US20180132006A1 (en) Highlight-based movie navigation, editing and sharing
US10043079B2 (en) Method and apparatus for providing multi-video summary
JP7372008B2 (en) Methods, apparatus and systems for time-based and geographical navigation of video content
US20180103197A1 (en) Automatic Generation of Video Using Location-Based Metadata Generated from Wireless Beacons
WO2021031733A1 (en) Method for generating video special effect, and terminal
US10999556B2 (en) System and method of video capture and search optimization
Kim et al. Design and implementation of geo-tagged video search framework
Patel et al. The contextcam: Automated point of capture video annotation
CN105049768A (en) Video playback method of video recording equipment
FI129291B (en) Arrangement and related method for provision of video items
US20170200465A1 (en) Location-specific audio capture and correspondence to a video file
CN112437270B (en) Monitoring video playing method and device and readable storage medium
US20150032744A1 (en) Generation of personalized playlists for reproducing contents
CN107079144B (en) Method, device and system for realizing media object display
CN117294900A (en) Video playing method and device, electronic equipment and readable storage medium
JP2013081018A (en) Moving image editing device, moving image editing method, and program
CN117909522A (en) Multimedia focusing
TW201941075A (en) Fast image sorting method improving the speed of the image arrangement and the richness of the image content
JP2017175362A (en) Image collection device, display system, image collection method, and program

Legal Events

Date Code Title Description
FG Patent granted

Ref document number: 129291

Country of ref document: FI

Kind code of ref document: B

MM Patent lapsed