WO2015022689A1 - Media object selection - Google Patents

Media object selection

Info

Publication number
WO2015022689A1
WO2015022689A1 (PCT application PCT/IL2014/050727)
Authority
WO
WIPO (PCT)
Prior art keywords
media objects
profiling
computerized method
group
target user
Prior art date
Application number
PCT/IL2014/050727
Other languages
English (en)
Inventor
Adi ECKHOUSE BARZILAI
Original Assignee
Pic Dj Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pic Dj Ltd. filed Critical Pic Dj Ltd.
Publication of WO2015022689A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51Indexing; Data structures therefor; Storage structures

Definitions

  • the present invention, in some embodiments thereof, relates to media object selection and, more specifically, but not exclusively, to methods and systems of selecting a group of media objects having common subject matter out of one or more datasets.
  • the computerized method comprises providing a plurality of profiling media objects, each one of the plurality of profiling media objects associated with a target user, processing, using a processor, the plurality of profiling media objects to identify a prevalence of each of a plurality of characterizing properties among the plurality of profiling media objects, selecting at least some of the plurality of characterizing properties based on the respective prevalence, identifying automatically a group of a plurality of matching media objects from a plurality of additional media objects, each member of the group having at least one of the at least some characterizing properties, and outputting an indication of the group.
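The prevalence-based selection described above can be illustrated with a minimal sketch; the function names, the representation of media objects as sets of property names, and the 50% default threshold are illustrative assumptions, not part of the disclosure:

```python
from collections import Counter

def select_characterizing_properties(profiling_objects, threshold=0.5):
    """Count how often each characterizing property appears among the
    profiling media objects and keep those whose prevalence exceeds
    the threshold. `profiling_objects` is a list of property-name sets
    (an assumed representation)."""
    counts = Counter()
    for props in profiling_objects:
        counts.update(props)
    total = len(profiling_objects)
    return {p for p, c in counts.items() if c / total > threshold}

def match_media_objects(candidates, selected_props):
    """Return the group of additional media objects having at least one
    of the selected characterizing properties. `candidates` is a list of
    (name, property-set) pairs."""
    return [name for name, props in candidates if props & selected_props]
```

For example, a property found in two of three profiling objects (prevalence 2/3) passes a 0.5 threshold, and any additional object carrying it joins the output group.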
  • the plurality of profiling media objects are published in at least one web document accessible via a network to a plurality of other users. More optionally, the providing comprises analyzing at least one web accessible document to identify automatically the plurality of profiling media objects.
  • the at least one web accessible document includes at least one social network profile webpage of the target user.
  • the computerized method further comprises selecting by an operator at least one dataset storing the plurality of additional media objects.
  • the providing comprises analyzing a plurality of messages sent by the target user to identify automatically the plurality of profiling media objects.
  • the providing comprises analyzing text associated with each one of a plurality of probed media objects associated with the target user to identify positive or negative context and selecting the profiling media objects from the plurality of probed media objects accordingly.
  • the text is text extracted from at least one of a comment given by a user which is socially connected to the target user in at least one web accessible document and a text provided with a message sent by the target user.
  • the providing comprises crawling a plurality of web accessible documents to extract automatically the plurality of profiling media objects.
  • At least some of the plurality of profiling media objects are visual media objects uploaded by the target user to a web accessible page.
  • At least some of the plurality of profiling media objects are media objects added to one of a plurality of messages sent by the target user.
  • the plurality of messages are selected from a group consisting of instant messaging messages, cellular messages, and electronic mail messages.
  • the plurality of profiling media objects includes a plurality of still images imaging the target user.
  • At least some of the plurality of profiling media objects are media objects tagged with a like tag by the target user.
  • At least some of the plurality of profiling media objects are automatically selected based on the number of tags set with regard thereto by a plurality of users.
  • the at least some characterizing properties include a characterizing facial feature of a figure that appears in at least some of the plurality of profiling media objects. More optionally, the at least some characterizing properties are selected from a group consisting of a prominent depicted side of the face, a depicted facial area of a figure, a depicted facial area of one figure in relation to another figure, a tilting angle of an imaged head of a figure, a curvature level of a mouth of a figure, an exposure level of an organ of a figure, and a symmetry level of an imaged face.
  • the at least some characterizing properties include a capturing location indicative of a location in which at least some of the plurality of profiling media objects have been taken.
  • the at least some characterizing properties include a capturing location indicative of a location selected from a geographic information based dataset defining a plurality of areas.
  • the at least some characterizing properties include a capturing time indicative of an event during which numerous profiling media objects are captured.
  • the at least some characterizing properties include a quality measure indicative of an automatically deduced quality of at least some of the plurality of profiling media objects.
  • the at least some characterizing properties include a composition of at least one object and figure in at least some of the plurality of profiling media objects.
  • the at least some characterizing properties include a capturing time indicative of a time during which at least some of the plurality of profiling media objects have been taken.
  • At least some of the plurality of profiling media objects are manually selected by a human operator.
  • the system comprises a processor, a profiling module which processes, using the processor, a plurality of profiling media objects, each one of the plurality of profiling media objects associated with a target user, to identify and to select a plurality of characterizing properties, each of the plurality of characterizing properties is selected based on a prevalence thereof among the plurality of profiling media objects, a clustering module which identifies automatically a group of a plurality of matching media objects from a plurality of additional media objects, each member of the group having at least one of the plurality of characterizing properties, and a user interface module which outputs an indication of the group.
  • a computerized method of selecting a group of media objects.
  • the method comprises receiving a public profile of a target user that includes a plurality of profiling media objects, each one of the plurality of profiling media objects is associated with a target user and accessible via a network to a plurality of other users, analyzing the public profile, using a processor, to identify a plurality of characterizing properties in each one of the plurality of profiling media objects, identifying automatically a group of a plurality of matching media objects from a plurality of additional media objects, each member of the group having at least one of the plurality of characterizing properties, and outputting an indication of the group.
  • a computerized method of selecting a group of media objects comprises dividing a pool of a plurality of media objects into a plurality of groups, wherein a first group of media objects from the pool comprises media objects which document at least one target user and a second group of media objects from the pool comprises media objects which do not document the at least one target user, identifying at least one characterizing property which characterizes a property of media objects preferred by the at least one target user, applying a first image processing analysis to select a first subgroup of the first group such that each member of the first subgroup has the at least one characterizing property, applying a second image processing analysis to select a second subgroup of the second group based on at least one image quality parameter, and outputting an indication of the first group and the second group.
  • the ratio between the number of members in the first subgroup and the number of members in the second subgroup complies with a desired ratio.
  • the computerized method further comprises managing storage of the pool among a plurality of storage locations based on the indication.
  • the computerized method further comprises selecting the pool such that each one of the plurality of media objects is captured in a common geographical area or during a period associated with an event.
  • the desired ratio is the ratio between the number of members in the first group and the number of members in the second group.
  • the outputting comprises automatically generating an album which includes members of the first and second subgroups.
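The pool-division method above (a first subgroup selected by a characterizing property, a second by image quality, with subgroup sizes complying with a desired ratio) can be sketched as follows; the predicates, the quality function, and the ranking step are hypothetical stand-ins for the image processing analyses the text describes:

```python
def build_album(pool, documents_target, has_property, quality, desired_ratio=1.0):
    """Divide a pool of media objects into objects that document the
    target user and objects that do not, then pick a first subgroup by a
    characterizing property and a second subgroup by quality, keeping
    the subgroup sizes near the desired ratio."""
    first = [m for m in pool if documents_target(m)]
    second = [m for m in pool if not documents_target(m)]
    first_sub = [m for m in first if has_property(m)]
    # Keep only enough high-quality non-target objects to satisfy the ratio.
    wanted = int(len(first_sub) / desired_ratio) if desired_ratio else len(second)
    second_sub = sorted(second, key=quality, reverse=True)[:wanted]
    return first_sub, second_sub
```

With a desired ratio of 1.0, the album keeps as many high-quality non-target objects as target-documenting ones.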
  • FIG. 1 is a schematic illustration of an exemplary system for selection of visual media objects based on identification of characterizing properties of profiling media objects, according to some embodiments of the present invention.
  • FIG. 2 is a flowchart of a method of identifying a group of media objects from user selected datasets based on one or more characterizing properties, according to some embodiments of the present invention.
  • FIG. 3 is a method for automatically computing instructions to select a group of images from a pool of images, according to some embodiments of the present invention.
  • the present invention, in some embodiments thereof, relates to media object selection and, more specifically, but not exclusively, to methods and systems of selecting a group of media objects having common subject matter out of one or more datasets.
  • a set of media objects such as images and video files
  • media object datasets based on characterizing properties of profiling media objects, such as media objects which are tagged by a target user and/or with a tag indicative of the target user, set either manually or automatically, for instance images tagged and identified via facial recognition.
  • knowledge based automatic selection of media objects may be made based on the public profile of a target user and/or manual selection of media objects, for instance photos selected by a user (e.g. uploaded and/or marked for access).
  • the characterizing properties found in a set of user associated images and/or video files with relatively high prevalence may be extracted from webpages which are associated with the target user, for example a social network profile, messages sent by a target user, items stored in a folder of a messaging service or application, a user media object folder and/or the like.
  • the set of user associated images and/or video files may be selected by a user using a user interface, for example from folders stored in the user personal computer and/or Smartphone and/or from designated folders which are available online.
  • the characterizing properties may be features of figures which appear in the images and/or video files, for instance facial features, image quality parameters, composition, capturing time, capturing location, and/or the like.
  • the set of user associated images and/or video files may be selected based on analysis of related text and/or tags provided by socially connected peers. For instance, the textual content of related comments or messages and/or the number of social tags which have been provided to a probed image is taken into account when selecting the set of user associated images and/or video files.
  • the characterizing properties are facial features which are extracted from the faces imaged in the images and/or video files, for instance a face size ratio, a prominent side of the face of a figure, for example a left side or a right side, and/or the like.
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • LAN local area network
  • WAN wide area network
  • Internet Service Provider for example, AT&T, MCI, Sprint, EarthLink, MSN, GTE, etc.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • FIG. 1 is a schematic illustration of an exemplary system 100 for selection of visual media objects, such as images or image sequences, for instance video files, from one or more datasets based on identification of characterizing properties in profiling media objects, for instance media objects posted by a user to a social network page, according to some embodiments of the present invention.
  • visual media objects such as images or image sequences, for instance video files
  • the exemplary system 100 may be implemented by one or more network nodes, for example one or more processor based servers 101, connected to a network 106, which host image processing modules 111, 112, for example a profiling module 111 and a clustering module 112, as described below.
  • the modules 111, 112 are executed using one or more processors 113, for example central processing units, microprocessors, distributed computing units and/or the like.
  • the network is optionally a communication network such as the Internet and/or an Ethernet.
  • the system 100 includes one or more user interface modules 103A/B which generate and manage user interface(s) that allow users to input selections, for instance a selection of profiling media objects and/or references to datasets 104 of visual media objects, and to be presented with matched visual media objects, for example as described below.
  • the interface modules may be managed at the network connected server(s) 101, set to instruct a browser hosted in a client terminal 105, such as a laptop, a desktop, a Smartphone, Smart glasses, such as Google GlassTM, or a tablet, to present a graphical user interface (GUI) that allows presenting visual media objects to a user and/or receiving user selections therefrom.
  • GUI graphical user interface
  • the user GUI may be a widget, a FlashTM component, extensible markup language (XML), and/or any hypertext markup language (HTML) component.
  • the interface module may be part of an existing web service or site, automatically or manually providing an image and/or video selection service.
  • An interface module 103B may be locally installed in the client terminal 105, for instance as an application downloaded from an application store, a browser add-on and/or a plug-in.
  • FIG. 2 is a flowchart 200 of a method of identifying a group of media objects from user selected datasets based on one or more characterizing properties, according to some embodiments of the present invention.
  • the method is optionally implemented using the system 100 depicted in FIG. 1 for automatic selection of media objects for each one of a plurality of target users, for example subscribers of a service and/or social network peers.
  • media objects which are associated with a target user are provided, for example automatically identified or manually received.
  • These media objects are optionally media objects automatically found as part of the public profile of the user and/or media objects manually selected by the user, for instance using a designated user interface.
  • a public profile of a target user means a set of data, including media data that is available to users who are not the target user by browsing to network accessible documents, such as webpages. These users may receive access based on social connection to the target user, for instance friendship in a social network, by membership in an organization, by receiving messages and/or without any connection to the target user, for example by using a search engine and a browser.
  • a web accessible document means a webpage, such as a social network profile page or wall, a media file, for instance a clip or any video file, a feed, a Word document, a portable document format (PDF) document, a presentation, a media posting service, such as InstagramTM.
  • a user generated message means an email, an SMS, an instant messaging (IM) message, and/or the like.
  • a visual media object for brevity referred to herein as media object, includes an image, a video file, a selected background, and/or the like.
  • the profiling media objects are optionally images and/or video clips tagged in a social network, such as FacebookTM, by the target user and/or his socially connected friends.
  • Tagging of a visual media object is indicative that the target user appears in the visual media object, was found to be related to the visual media object by a social network user, for example indicating that he is related to a scene imaged in the visual media object, and/or has marked the visual media object with a Like tag.
  • Social network tags, for instance name tags or Like tags, are user controlled tags that may be removed by the tagged user, for instance the target user, even if he did not add the tag.
  • such tagging is usually indicative that the tagged user has a positive appreciation of the tagged image as he at least did not remove the tag.
  • the profiling media objects are images and/or video files (e.g. clips) tagged by the user with a like tag.
  • the tag is indicative that the target user wants other people to know that the visual media object has a positive or substantial effect.
  • media objects are selected based on the number of Like tags received in a social network, the number of positive comments received in a social network, the number of sharing actions of the image in the social network, and/or the number of users who shared the image. Positive comments may be identified using textual analysis, for instance sentiment analysis as described below. The above allows maintaining a direct relationship between the scope of social network appreciation and positive reaction and the weight given to the image or video.
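A direct relationship between social network appreciation and the weight given to an image, as described above, could be realized with a simple linear score; the linear form and the coefficients are illustrative assumptions, since the text only requires that more appreciation yields more weight:

```python
def social_weight(likes, positive_comments, shares, sharers):
    """Weight a media object in direct relation to its social network
    appreciation: Like tags, positive comments, sharing actions, and
    distinct sharing users. Coefficients are assumed, not disclosed."""
    return 1.0 * likes + 2.0 * positive_comments + 1.5 * shares + 0.5 * sharers

def rank_by_appreciation(objects):
    """objects: list of (name, likes, positive_comments, shares, sharers)
    tuples; returns them ordered by descending social weight."""
    return sorted(objects, key=lambda o: social_weight(*o[1:]), reverse=True)
```

Under these assumed coefficients, an image with one positive comment outranks an image with a single Like tag.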
  • the profiling media objects are visual media objects published by the user, for example uploaded to a public space, for instance uploaded to a social network album and/or wall.
  • the profiling media objects are images and/or video clips sent or forwarded by the target user to a contact via a messaging service such as iMessageTM, WhatsappTM, and LineTM. Additionally or alternatively, the profiling media objects are visual media objects sent or forwarded to a contact via a messaging service such as an email manager and/or a short message service (SMS) and/or multimedia messaging service (MMS) editor.
  • a messaging service such as an email manager and/or a short message service (SMS) and/or multimedia messaging service (MMS) editor.
  • SMS short message service
  • MMS multimedia messaging service
  • comments related to one or more of the images are analyzed, for example filtered based on textual analysis, identifying positive or negative context of the image.
  • the analyzed comments are of the target user; the comments may be comments of an image tagged in a social network, textual content in a message sent with the image, and/or text that appears with the image after identifying a copy of the image in a third party document, for example using image matching algorithms.
  • the textual analysis includes sentiment analysis, for instance as described in Turney (2002). "Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews". Proceedings of the Association for Computational Linguistics, pp. 417-424 and Benjamin Snyder; Regina Barzilay (2007). "Multiple Aspect Ranking using the Good Grief Algorithm". Proceedings of the Joint Human Language Technology/North American Chapter of the ACL Conference (HLT-NAACL). pp. 300-307.
  • the one or more web accessible documents may be identified using web crawler(s) which search for publicly available content pertaining to the target user.
  • the one or more web accessible documents may be manually designated by a user.
  • the one or more web accessible documents may be identified based on predefined settings, for example a profile of the target user in social network websites.
  • the selection of one or more web accessible documents which includes profiling media objects associated with the user removes from a human operator, for instance the user, the need to invest time and effort in selecting the profiling media objects.
  • the fact that these visual media objects have tags or other web associations to the user, for example appear in one of his public folders or pages, tagged by the user, tagged by the users who are socially connected to the user, selected as profile visual media objects and/or the like is indicative that the user positively evaluates these visual media objects.
  • the profiling media objects are manually selected by the target user or any other user, for example marked and/or uploaded using a user interface.
  • the target user or any other user may grant the system access to a selection of media objects stored in his client terminal and/or in a third party storage.
  • each one of the profiling media objects is processed to identify one or more characterizing properties.
  • each image is processed to determine a compliance with a plurality of characterizing property rules and/or filters.
  • the output of the analysis of each image is a list per image of one or more characterizing properties which are found and/or not found in the image. For example a matrix documenting compliance of all images is generated. This allows identifying which characterizing properties prevail in the user associated images and which characterizing properties do not.
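The per-image compliance list and the matrix described above might be sketched as follows, with characterizing-property rules modelled as hypothetical predicates (the real system would apply image filters and algorithms in their place):

```python
def compliance_matrix(images, rules):
    """Evaluate every characterizing-property rule on every image and
    record compliance in a matrix (here a dict of dicts). `rules` maps a
    property name to a predicate over an image."""
    return {img: {name: rule(img) for name, rule in rules.items()}
            for img in images}

def prevalence(matrix):
    """For each characterizing property, the fraction of images in which
    it was found -- this is what identifies which properties prevail."""
    n = len(matrix)
    props = next(iter(matrix.values())).keys()
    return {p: sum(row[p] for row in matrix.values()) / n for p in props}
```

A property found in most rows of the matrix prevails among the user associated images; one found in few rows does not.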
  • a characterizing property is a facial property that prevails among figures imaged in the media objects.
  • the face size ratio may be calculated by a simple pixel count calculation.
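As a minimal illustration of such a pixel-count calculation, assuming an upstream face detector has produced a binary mask of the face region (the mask representation is an assumption):

```python
def face_size_ratio(face_mask):
    """Simple pixel-count calculation of the face size ratio: the
    fraction of image pixels covered by the detected face region.
    `face_mask` is a 2D list of 0/1 values."""
    face_pixels = sum(sum(row) for row in face_mask)
    total_pixels = sum(len(row) for row in face_mask)
    return face_pixels / total_pixels
```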
  • the facial property is a prominent depicted side of the face of a figure, for instance the side of a face imaged in the media object, for example a left side or a right side.
  • the preferred side may be identified by image processing, for example identification of the location of an imaged eye in relation to another imaged facial feature, for example the location and/or size of an imaged ear, the angle of the nose and/or the like.
  • the facial property is the tilting angle of an imaged head, for example left, right, up, and/or down tilt.
  • the facial property is a fixed facial expression, for example lips facial expression such as a smile, pursed lips, lip biting, and/or covered mouth.
  • the facial property is a curvature level of a mouth, for instance a curvature of a smile and/or a turn of the lips (e.g. downturn or upturn); see for example Yu-Hao Huan, Face Detection and Smile Detection, Dept. of Computer Science and Information Engineering, National Taiwan University.
  • a facial property may be identified by smile detection algorithms.
  • the facial property is an exposure and/or lack of exposure of an organ of a figure, for instance the teeth, the nose, the ears and/or the like.
  • the facial property may be the teeth brightness and/or whiteness level.
  • the teeth brightness and/or whiteness level may be identified by respective pixel value analysis.
  • the facial property is a symmetry level of an imaged face.
  • the symmetry may be determined using various facial symmetry level detection processes, see for example Fred Stentiford, Attention Based Facial Symmetry Detection, UCL Adastral Park Campus, Martlesham Heath, Ipswich, UK.
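A crude illustration of a symmetry level computed by mirror comparison follows; this is far simpler than the attention-based method cited above, and the normalization of pixel values to [0, 1] is an assumption:

```python
def symmetry_level(face_rows):
    """Compare each row of pixel values to its mirror image and return
    1 minus the mean absolute difference, so a perfectly symmetric face
    region scores 1.0 and a maximally asymmetric one scores 0.0."""
    diffs, count = 0.0, 0
    for row in face_rows:
        mirrored = row[::-1]
        diffs += sum(abs(a - b) for a, b in zip(row, mirrored))
        count += len(row)
    return 1.0 - diffs / count
```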
  • the facial property is the eye state (e.g. opening level, opening curve) of the eyes of the imaged face.
  • the eye states may be obtained from the eye features such as the inner corner of eye, the outer corner of the eye, the iris, and the eyelid.
  • the eye state may be detected as described in Qiong Wang, Eye Location and Eye State Detection in Facial Images with Unconstrained Background, School of Computer, Nanjing University of Science & Technology, Nanjing 210094, China, which is incorporated herein by reference.
  • the characterizing property is a locational property that prevails in the media objects, for example a location stamp found in the metadata of an image.
  • the locational property may be indicative of an image taken in certain scenery, bar, restaurant, house and/or the like.
  • the locational property is optionally a locational stamp, such as a global positioning system (GPS) stamp that is found in the metadata of the image.
  • GPS global positioning system
  • the characterizing properties include capturing time that prevails in the media objects. This allows identifying a set of media objects captured at the same event and/or area, an indication of the importance of the event and the documentation thereof to the user. Additionally or alternatively, the capturing time is matched with a list of dates that includes general dates, such as holidays and/or weekends, and personal dates, such as birthdays, wedding days, friends or family events and/or the like. Personal dates may be extracted from a calendar and/or deduced from a social network page associated with the target user. Friends or family events may be deduced from social network pages of peers who are socially connected to the target user.
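Matching capturing times against general dates such as weekends and a list of personal event dates, as described above, could be sketched as follows (the event list would come from a calendar or a social network page; its set representation is an assumption):

```python
from datetime import date

def match_capture_dates(capture_dates, event_dates):
    """Keep the capture dates that fall on a known personal or general
    event date, or on a weekend (ISO weekday 6 or 7)."""
    matched = []
    for d in capture_dates:
        if d in event_dates or d.isoweekday() >= 6:  # Saturday or Sunday
            matched.append(d)
    return matched
```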
  • the capturing location is matched with a dataset of locations, for example a geographic information based dataset indicative of touristic or historical locations. This allows identifying a set of media objects captured at leisure locations, an indication of the importance and uniqueness of the image and the documentation thereof to the user.
  • the locations may be identified by matching geographical coordinates.
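Matching geographical coordinates against an area from such a dataset might use a simple distance test; the equirectangular approximation and the representation of an area by its centre point are assumptions made for illustration:

```python
import math

def within_area(lat, lon, area, margin_km=1.0):
    """Match a GPS location stamp (lat, lon in degrees) against an area
    given as a (lat, lon) centre point, using an equirectangular
    approximation that is adequate over short distances."""
    clat, clon = area
    km_per_deg = 111.32  # approximate kilometres per degree of latitude
    dx = (lon - clon) * km_per_deg * math.cos(math.radians(clat))
    dy = (lat - clat) * km_per_deg
    return math.hypot(dx, dy) <= margin_km
```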
  • the characterizing properties include one or more automatically deduced quality measures that prevail in the media objects, for example sharpness, noise, dynamic range (or exposure range), tone reproduction, image brightness, contrast (also known as gamma), color accuracy, distortion, vignetting (light falloff), exposure accuracy, lateral chromatic aberration (LCA), lens flare, color moire, and software induced artifacts.
  • automatically deduced quality measures may include defocus blur level, motion blur level, off-angle level, occlusion level, specular reflection, lighting and pixel count.
  • quality measures may be bivariate measures such as average difference, structural content, normalized cross correlation, correlation quality, maximum difference, image fidelity, weighted distance, Laplacian mean square error, peak mean square error, normalized absolute error, and normalized mean square error. Each quality measure may be evaluated by a designated filter or algorithm.
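As an example of one of the listed bivariate measures, the normalized mean square error between a reference image and a probe; flattening the images to plain lists of pixel values is a simplification for illustration:

```python
def normalized_mean_square_error(reference, probe):
    """Sum of squared pixel differences normalized by the reference
    energy: 0.0 for identical images, 1.0 when the probe is all zeros."""
    num = sum((r - p) ** 2 for r, p in zip(reference, probe))
    den = sum(r ** 2 for r in reference)
    return num / den
```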
  • quality measures which prevail in the visual media objects from the web accessible documents and/or user generated messages are used to select more images, based on the assumption that these quality measures had a positive effect on the user who chose to post these images or chose not to remove them.
  • the characterizing properties include one or more compositions which prevail among figures imaged in the media objects.
  • the composition may be determined by analyzing each image in accordance with one or more predefined composition rules. The analysis may include identifying regions of compositional significance within the image and applying the composition rules to those regions, identifying whether an image complies with the composition rule(s) or not.
  • the compositional rules may include well known photographic composition rules such as the rule of thirds, the golden ratio, and/or the diagonal method. Composition detection algorithms may be used.
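A rule-of-thirds check of the kind described could be sketched as follows; the tolerance-based pass/fail test over a region's centre point is an illustrative assumption, not a disclosed algorithm:

```python
def rule_of_thirds_score(subject_xy, width, height, tolerance=0.1):
    """Return True when a region of compositional significance (given by
    its centre point) lies within `tolerance` (as a fraction of image
    size) of one of the four rule-of-thirds intersection points."""
    thirds_x = (width / 3, 2 * width / 3)
    thirds_y = (height / 3, 2 * height / 3)
    sx, sy = subject_xy
    return any(abs(sx - tx) <= tolerance * width and
               abs(sy - ty) <= tolerance * height
               for tx in thirds_x for ty in thirds_y)
```

A subject centred in the frame fails this check, while one near a thirds intersection passes.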
  • the characterizing properties include the identity of one or more people imaged in the analyzed images.
  • the figures imaged in the analyzed image are identified and their identity is used as a characterizing property.
  • the identity may be of the target user, the target user's close family, the target user's remote family, and/or people who are socially connected to the target user.
  • the identity of imaged people may be determined using facial recognition techniques and based on a map that associates users, for example a social network connection map and/or a family network, for instance a myHeritage family tree and/or the like. This allows identifying more images of these people.
  • the characterizing properties include the number of people who are imaged in the user preferred media objects, for example how many people are imaged in each image.
  • the number of people may be determined by face detection algorithm(s).
  • it should be noted that in the above examples, extraction of characterizing properties from images is described. Similar methods may be used to extract the characterizing properties from video files, for example by analysis of frame(s) and/or using video processing methods. Similar methods may be used to extract the characterizing properties from image mosaics and other forms of visual media objects.
  • characterizing properties having a prevalence rate of more than a predefined threshold among the profiling media objects are selected.
  • the predefined threshold may be 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, or any intermediate or larger prevalence rate.
  • one or more datasets of visual media objects are selected by the operator, for example the user associated with the profiling media objects.
  • the datasets may be of visual media objects stored locally on one or more client terminals of one or more users, visual media objects remotely stored on a web connected database, and/or visual media objects which are published by users which are socially connected to the operator.
  • the dataset may be an aggregation from a plurality of sources, for example from folders or devices of a plurality of users.
  • the datasets include images taken during a user defined period, for example images taken during the last week, month, and/or year.
  • the datasets include images taken during an event held at a certain time and/or location.
  • images taken in a class or at school during a year of education are selected as a dataset.
  • images taken during a vacation, a wedding, a tour, or a household event are selected.
  • images taken during a weekly course are selected, for example images taken during a class held between 16:00 and 18:00 in a certain area of the university or school.
  • the visual media objects from one or more datasets are analyzed, for instance by the clustering module, to identify automatically a set of one or more matching media objects.
  • the set of matching media objects may include 5, 10, 100, 1000 or any intermediate or larger amount of media objects.
  • the amount may be determined manually by an operator and/or automatically, for example by a requirement from an album generation application.
  • the size may be determined based on a template that is filled with the matching media objects, for instance a webpage or a widget that is set to present dynamic or static information about the target user.
  • the dataset(s) may include 10, 100, 1000, 10,000, 100,000, 1,000,000 or any intermediate or larger amount of media objects.
  • Each matching media object has one or more of the selected characterizing properties.
  • the one or more matching media objects are the matching media objects having more of the selected characterizing properties than other media objects.
  • a media object may be scored, for instance based on the number of characterizing properties which appear in the media object and/or the type of characterizing properties, where different characterizing properties receive different weights.
  • the matching media objects may be filtered or otherwise selected by taking into account one or more image selection properties, for example, type, capturing location, capturing time, and/or the like.
  • an image selection property may be any of the above characterizing properties.
  • the image selection property is a locational property, for example a location stamp found in the metadata of an image.
  • the locational property may be indicative of an image taken in certain scenery, bar, restaurant, house and/or the like.
  • the locational property is optionally a locational stamp, such as a GPS stamp that is found in the metadata of the image.
  • matching media objects are filtered so that more media objects are selected from touristic, leisure, or historical locations. The locations may be identified by matching geographical coordinates. This allows selecting images taken in places that have been acknowledged as visually interesting by many users.
  • an indication of the matching media object(s) is outputted, for example by automatically presenting some or all of the matching media objects on the client terminal of the operator, uploading some or all of the matching media objects to a social network page and/or a public folder, printing some or all of the matching media objects, creating a sequence of images, for instance a video file that includes some or all of the matching media objects, and/or creating a mosaic of the matching media objects.
  • a media object album is automatically created and/or updated based on the process depicted in FIG. 2.
  • the media object album may be uploaded to a social network page, posted in a blog, added to a memory of a mobile device, for instance a Smartphone, forwarded by an electronic message such as an email and/or the like.
  • the process is performed iteratively, for example every day, week, month, and/or any intermediate time, allowing updating the matching media object(s).
  • the updating is performed by replacing some or all of the matching media object(s) with new matching media object(s), and/or by refreshing the images which are presented or uploaded based on exposure time, matching rank, and/or the like.
  • the process depicted in FIG. 2 is used to create an album summarizing a period and/or an event, and/or to create a visual documentation of the target user.
  • a user interface may guide the user to create the album and optionally to send it to print.
  • the matching media object(s) are published in a folder and/or set to be presented in a presentation upon request.
  • the above methods and systems are used as a summarization tool, allowing an operator to save time and effort in creating a user documenting album.
  • the method for selecting a group of media objects depicted in FIG. 2 is used for selecting a group of media objects, such as images and video clips which document the target user, while images of landscapes or other people are selected by other image selection methods.
  • the characterizing properties of media objects wherein the target user is imaged are characterizing properties which are found in media objects liked by the target user or found with relatively high prevalence in images selected by the target user for publication, for example images posted by the target user to social network page(s) or sent or forwarded by the target user to contact(s) via a messaging service.
  • FIG. 3 depicts a flowchart of a method 300 for automatically computing instructions to select a group of images from a pool of images, according to some embodiments of the present invention.
  • the method may be a method for automatically computing instructions to generate an album from a pool of images or for selecting media objects for a certain memory space.
  • a pool of images is designated, for example selected by a target user or any other operator.
  • the pool may be images in a selected folder, images taken in a certain period, images posted on social network(s), and/or the like.
  • a set of images captured in a certain period (or repetitive time slots such as a course) and/or location (or a geographic area) is identified, for example images captured during a vacation, a course, an event and/or the like.
  • the pool may be provided similarly to the process described above with reference to 204.
  • images are classified into two classes, for example images with faces and images without faces.
  • images are classified into two classes, images documenting the target user(s) therein and images which do not document the target user(s).
  • images wherein one or more target user(s) are imaged are processed to facilitate a selection of images which are preferred and/or liked by the target user(s), for example as described above with reference to 205.
  • facial media objects are processed to identify and to select media objects with characterizing properties which have been liked by the target user or media objects with characterizing properties found with relatively high prevalence in media objects selected by the target user for publication. In such a manner, only images and video clips wherein the target user is in a subjectively attractive pose, angle, and/or facial exposure are selected while other images remain unselected.
  • images wherein one or more target user(s) are not imaged may be classified based on one or more image quality parameters.
  • the image quality parameters may be outcomes of applying one or more generic image quality filters where classification is determined with reference to a quality threshold.
  • images may be selected based on composition analysis such as the rule of thirds analysis, the golden ratio analysis, and/or the diagonal method analysis.
  • images may be selected based on image quality estimation, for example sharpness estimation, noise estimation, dynamic range (or exposure range) estimation, tone reproduction estimation, image brightness level, contrast level (also known as gamma), color accuracy, distortion presence or absence, vignetting (light falloff), exposure accuracy, lateral chromatic aberration (LCA), lens flare presence or absence, color moiré presence or absence, and/or any artifact analysis.
  • images from the two groups are selected to maintain a predefined ratio, optionally a user defined ratio.
  • media objects selected from a group may be referred to as a sub group.
  • the image selection process may be repeated until a suitable ratio of images is achieved.
  • the suitable ratio is automatically deduced from an analysis of the albums uploaded by the target user(s).
  • the suitable ratio is automatically deduced from an analysis of the social web pages of the target user(s).
  • the suitable ratio is automatically deduced from the pool such that the ratio in the pool is maintained in the generated album.
  • the suitable ratio is manually defined by the target user(s).
  • a suitable ratio of images to video files is also maintained based on the same principles.
  • a collection of images is selected, optionally with the predefined ratio.
  • the selected images may be images which are set to be stored in a certain location and/or a basis for an album uploaded to a webpage or account and/or made available via a social web page.
  • the above process may be continuously or iteratively performed to update the group of selected images.
  • the methods described above with reference to FIG. 2 and FIG. 3 are used to manage the storage of media objects in the memory of a client terminal, for example in the memory of a Smartphone, a tablet, a camera and/or the like.
  • the pool may be automatically uploaded to a cloud storage while the selected images remain in the storage of the client terminal and/or are otherwise made available for immediate usage.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.
  • a compound or “at least one compound” may include a plurality of compounds, including mixtures thereof.
  • range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
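As a concrete illustration of one of the bivariate quality measures named in the embodiments above, a normalized mean square error between two images can be computed as below. This is a plain-Python sketch over flat pixel lists, not the claimed implementation; production code would typically operate on decoded image arrays via an image-processing library.

```python
def normalized_mean_square_error(reference, test):
    """Bivariate quality measure: the squared difference between a
    reference image and a test image, normalized by the reference
    energy (0.0 means the images are identical)."""
    if not reference or len(reference) != len(test):
        raise ValueError("images must be non-empty and the same size")
    numerator = sum((r - t) ** 2 for r, t in zip(reference, test))
    denominator = sum(r ** 2 for r in reference)
    return numerator / denominator

# Two tiny "images" as flat pixel lists; a noisy copy scores above 0:
clean = [10, 20, 30, 40]
noisy = [12, 19, 29, 41]
error = normalized_mean_square_error(clean, noisy)
```

A lower score indicates a closer match to the reference, so candidates could be ranked ascending by this measure.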
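A toy check for one of the composition rules mentioned above, the rule of thirds, could test whether a detected subject lies near one of the four power points where the third lines intersect. The tolerance value and the subject-position input are illustrative assumptions; real composition detection algorithms work on regions of compositional significance rather than a single point.

```python
def near_rule_of_thirds_point(subject_xy, image_wh, tolerance=0.08):
    """Return True when the subject position is within `tolerance`
    (as a fraction of the image diagonal) of a rule-of-thirds
    intersection point."""
    x, y = subject_xy
    w, h = image_wh
    # the four intersections of the horizontal and vertical third lines
    points = [(w * i / 3.0, h * j / 3.0) for i in (1, 2) for j in (1, 2)]
    diagonal = (w ** 2 + h ** 2) ** 0.5
    nearest = min(((x - px) ** 2 + (y - py) ** 2) ** 0.5 for px, py in points)
    return nearest / diagonal <= tolerance

# A subject near the upper-left power point of a 1200x800 image passes;
# a dead-centered subject does not.
well_composed = near_rule_of_thirds_point((400, 267), (1200, 800))
centered = near_rule_of_thirds_point((600, 400), (1200, 800))
```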
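The prevalence-threshold selection of characterizing properties described above (a property is selected when its prevalence rate among the profiling media objects exceeds a predefined threshold) can be sketched as follows. This is a minimal illustration, not the claimed implementation; the property names and the set-of-properties representation of each media object are hypothetical.

```python
from collections import Counter

def select_characterizing_properties(profiling_objects, threshold=0.6):
    """Return the characterizing properties whose prevalence rate among
    the profiling media objects meets or exceeds `threshold`."""
    counts = Counter()
    for properties in profiling_objects:
        # each profiling media object is represented by the set of
        # properties already extracted from it (quality, composition, etc.)
        counts.update(set(properties))
    total = len(profiling_objects)
    return {prop for prop, n in counts.items() if n / total >= threshold}

# Hypothetical property sets extracted from four profiling images:
profiles = [
    {"rule_of_thirds", "high_sharpness", "two_faces"},
    {"rule_of_thirds", "high_sharpness"},
    {"rule_of_thirds", "low_noise"},
    {"high_sharpness", "low_noise"},
]
selected = select_characterizing_properties(profiles, threshold=0.6)
# rule_of_thirds and high_sharpness each appear in 3 of 4 objects (75%)
```

Lowering the threshold admits rarer properties, mirroring the 20%–90% range of predefined thresholds contemplated above.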
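The scoring of candidate media objects by the number and/or type of selected characterizing properties, with optional per-property weights, might look like the following sketch. Function names, weights, and filenames are illustrative assumptions, not part of the disclosure.

```python
def score_media_object(properties, selected_properties, weights=None):
    """Score a media object by the selected characterizing properties it
    exhibits; each property may carry a different weight (default 1.0)."""
    weights = weights or {}
    return sum(weights.get(p, 1.0) for p in properties if p in selected_properties)

def pick_matching_group(candidates, selected_properties, group_size, weights=None):
    """Rank candidates by score and return the top `group_size` as the group."""
    ranked = sorted(
        candidates.items(),
        key=lambda item: score_media_object(item[1], selected_properties, weights),
        reverse=True,
    )
    return [name for name, _ in ranked[:group_size]]

selected = {"rule_of_thirds", "high_sharpness"}
candidates = {
    "img_a.jpg": {"rule_of_thirds", "high_sharpness", "low_noise"},
    "img_b.jpg": {"rule_of_thirds"},
    "img_c.jpg": {"low_noise"},
}
group = pick_matching_group(candidates, selected, group_size=2)
# img_a.jpg (score 2.0) and img_b.jpg (score 1.0) form the group
```

The group size could come from an operator, an album template, or a widget, as noted above.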
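The two-class selection of FIG. 3, which maintains a predefined (optionally user defined) ratio between images documenting the target user and other images, can be sketched as below. The inputs are assumed to be already ranked best-first within each class (for example by the liked-properties score for target-user images and by quality filters for the rest), and the fraction value is illustrative.

```python
def select_with_ratio(user_images, other_images, total, user_fraction=0.5):
    """Select up to `total` images while maintaining a predefined ratio
    between images documenting the target user and other images.

    Both lists are assumed pre-ranked best-first within their class.
    """
    n_user = min(int(round(total * user_fraction)), len(user_images))
    n_other = min(total - n_user, len(other_images))
    return user_images[:n_user] + other_images[:n_other]

album = select_with_ratio(
    ["me_1.jpg", "me_2.jpg", "me_3.jpg"],
    ["view_1.jpg", "view_2.jpg"],
    total=4,
    user_fraction=0.75,
)
# 3 target-user images plus 1 other image maintain the 3:1 ratio
```

The same skeleton extends to maintaining a ratio of images to video files, as the embodiments note.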

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A computerized method of selecting a group of media objects is disclosed. The method comprises analyzing a plurality of profiling media objects, each of the plurality of visual media objects being associated with a target user, identifying, using a processor, a prevalence of each of a plurality of characterizing properties in each of the plurality of visual media objects, selecting at least one of the plurality of characterizing properties based on its respective prevalence, automatically identifying a group of a plurality of matching media objects from a plurality of additional media objects, each member of the group having the at least one characterizing property, and outputting an indication of the group.
PCT/IL2014/050727 2013-08-13 2014-08-13 Media object selection WO2015022689A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361865231P 2013-08-13 2013-08-13
US61/865,231 2013-08-13

Publications (1)

Publication Number Publication Date
WO2015022689A1 true WO2015022689A1 (fr) 2015-02-19

Family

ID=52468113

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2014/050727 WO2015022689A1 (fr) Media object selection

Country Status (1)

Country Link
WO (1) WO2015022689A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170091552A1 (en) * 2015-09-28 2017-03-30 Fujifilm Corporation Image evaluation system, image evaluation method and recording medium storing image evaluation program
WO2018125274A1 (fr) * 2016-12-30 2018-07-05 Facebook, Inc. Systems and methods to transition between media content items
US10311290B1 (en) 2015-12-29 2019-06-04 Rogue Capital LLC System and method for generating a facial model
US10706265B2 (en) 2017-07-28 2020-07-07 Qualcomm Incorporated Scanning a real-time media stream to detect one or more faces that are prevalent among a set of media files stored on a user equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110029562A1 (en) * 2009-07-30 2011-02-03 Whitby Laura R Coordinating user images in an artistic design
US20120014560A1 (en) * 2010-07-19 2012-01-19 Telefonica, S.A. Method for automatic storytelling for photo albums using social network context
US20130051670A1 (en) * 2011-08-30 2013-02-28 Madirakshi Das Detecting recurring themes in consumer image collections
US20130148864A1 (en) * 2011-12-09 2013-06-13 Jennifer Dolson Automatic Photo Album Creation Based on Social Information


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170091552A1 (en) * 2015-09-28 2017-03-30 Fujifilm Corporation Image evaluation system, image evaluation method and recording medium storing image evaluation program
JP2017068333A (ja) * 2015-09-28 2017-04-06 Fujifilm Corporation Image evaluation system, image evaluation method, image evaluation program, and recording medium storing the program
US10303949B2 (en) * 2015-09-28 2019-05-28 Fujifilm Corporation Image evaluation system, image evaluation method and recording medium storing image evaluation program
US10311290B1 (en) 2015-12-29 2019-06-04 Rogue Capital LLC System and method for generating a facial model
WO2018125274A1 (fr) * 2016-12-30 2018-07-05 Facebook, Inc. Systems and methods to transition between media content items
US10499090B2 (en) 2016-12-30 2019-12-03 Facebook, Inc. Systems and methods to transition between media content items
US10706265B2 (en) 2017-07-28 2020-07-07 Qualcomm Incorporated Scanning a real-time media stream to detect one or more faces that are prevalent among a set of media files stored on a user equipment

Similar Documents

Publication Publication Date Title
US11778028B2 (en) Automatic image sharing with designated users over a communication network
US9906576B2 (en) System and method for creating and managing geofeeds
US10438094B1 (en) Automatic suggestion to share images
Highfield et al. A methodology for mapping Instagram hashtags
US10762126B2 (en) System and method for reducing similar photos for display and product design
US9280565B1 (en) Systems, methods, and computer program products for displaying images
EP3063730B1 (fr) Automated image cropping and sharing
US8577872B2 (en) Selection of photos based on tagging history
US20160328096A1 (en) Systems and methods for generating and presenting publishable collections of related media content items
US10699454B2 (en) Systems and methods for providing textual social remarks overlaid on media content
US10276213B2 (en) Automatic and intelligent video sorting
JP6396897B2 (ja) Searching for events by attendees
US10013639B1 (en) Analyzing digital images based on criteria
US20170192965A1 (en) Method and apparatus for smart album generation
CA3018542A1 (fr) Systems and methods for identifying matching content
US9081801B2 (en) Metadata supersets for matching images
US20160328868A1 (en) Systems and methods for generating and presenting publishable collections of related media content items
WO2015022689A1 (fr) Media object selection
US9781228B2 (en) Computer-implemented system and method for providing contextual media tagging for selective media exposure
US12131363B2 (en) System and method for reducing similar photos from display and product design
US10169849B2 (en) Contextual personalized focus for variable depth of field photographs on social networks
US12056929B2 (en) Automatic generation of events using a machine-learning model
KR20130020419A (ko) Integrated online content management system
JP7292349B2 (ja) Method and system for processing images
CN112055847B (zh) Method and system for processing images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14836575

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14836575

Country of ref document: EP

Kind code of ref document: A1