WO2015162027A2 - Method, device, user equipment and computer program for extracting objects from multimedia content

Info

Publication number
WO2015162027A2
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
points
interest
interest point
contour
Prior art date
Application number
PCT/EP2015/057968
Other languages
English (en)
Other versions
WO2015162027A3 (fr)
Inventor
Tommy Arngren
Tim Kornhammar
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget L M Ericsson (Publ) filed Critical Telefonaktiebolaget L M Ericsson (Publ)
Publication of WO2015162027A2
Publication of WO2015162027A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/181 Segmentation; Edge detection involving edge growing; involving edge linking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5854 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using shape and object relationship
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation

Definitions

  • This disclosure relates to a method, a device, a user equipment and a computer program for object extraction from multimedia content.
  • The World Wide Web grows larger every day. Many users of the Web access it multiple times a day, every day of the week, using a variety of communication devices, such as personal computers (PC), phones, tablets, cameras and Internet Protocol Television (IP-TV) devices. Advances in mobile technologies have made it easier for a user to capture multimedia content (e.g. audio content, video content, image content), and different social network and video sharing Web sites make it possible for the user to share such content on the Web.
  • Search engines are being deployed not only for Internet search, but also for proprietary, personal, or special-purpose databases, such as personal multimedia archives, user generated content sites, proprietary data stores, workplace databases, and others.
  • Personal computers may host search engines to find content on the entire computer or in special-purpose archives (e.g., a personal music or video collection).
  • User generated content sites, which host multimedia and/or other types of content, may provide custom search functionality tailored to that type of content.
  • Multimedia content typically contains several layers of information, the so-called modalities, e.g. image, sound, spoken language, or text.
  • Methods for object recognition typically focus on matching existing objects, images or shapes to each tested image in a data set or a database.
  • The problem with these methods is that they are computationally very heavy.
  • Image registration methods analyze parts of the image by selecting fixed points in the image, which are then compared with other images. This typically requires high similarity between the images and is therefore much more sensitive to differences in quality, translation, scaling, rotation or color between the images.
  • Image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels).
  • The goal of segmentation is to separate different parts of an image by colors, edges or patterns.
  • An advantage of this method is the ability to analyze each part separately.
  • Edge detection is one example of image segmentation. This method uses difference in colors to identify edges. Edge detection is a noise sensitive approach but at the same time it is more computationally efficient. The result of edge detection is contours that can be extracted from the image.
  • An aspect of the embodiments defines a method for extracting objects of a multimedia content.
  • the multimedia content comprises multiple image segments. Each segment comprises multiple pixels.
  • the method comprises calculation of a force reflecting an edge strength for at least two pixels in an image segment of the multiple image segments.
  • the method further comprises identification of at least two interest points among the pixels in the image segment based on the calculated forces for the at least two pixels. This means that the identification is based on the calculated forces of the pixels.
  • Each interest point is associated with a directionality and exit points.
  • the directionality gives an angle indicative of a strength of the force, i.e. an angle in which the force is strongest.
  • the exit points are used for connecting one interest point to another interest point.
  • the method further comprises creation of at least one contour by connecting the at least two interest points by using at least one predefined pattern, selected from a number of predefined patterns.
  • Expressed differently, the method comprises creation of at least one contour from the identified interest points, where a contour is created by connecting the interest points by some predefined patterns.
  • the method further comprises extraction of at least one created contour, e.g. obtained by connecting the interest points.
  • the method further comprises extraction of at least one object from the at least one extracted contour.
  • Another aspect of the embodiments defines a device for extracting objects of a multimedia content, wherein the multimedia content comprises multiple image segments, wherein each image segment comprises multiple pixels.
  • the device is configured to calculate a force reflecting an edge strength for at least two pixels in an image segment of the multiple image segments. Furthermore, the device is configured to identify at least two interest points among the pixels in the image segment based on the calculated forces for the at least two pixels, wherein each interest point is associated with a directionality and exit points, wherein the directionality gives an angle indicative of a strength of the force, and wherein the exit points are used for connecting one interest point to another interest point.
  • the device is configured to create at least one contour by connecting the at least two interest points by using at least one predefined pattern, selected from a number of predefined patterns. Additionally, the device is configured to extract the at least one created contour, and to extract at least one object from the at least one extracted contour.
  • Another aspect of the embodiments defines a device for extracting objects of a multimedia content, having the following characteristics.
  • the device comprises a force calculator for calculating a force reflecting an edge strength for the at least two pixels in an image segment of the multiple image segments.
  • The device further comprises an identifier, also referred to as an identifier module, for identifying at least two interest points among the pixels in the image segment based on the calculated forces for the at least two pixels.
  • Each interest point is associated with a directionality and exit points.
  • the directionality gives an angle indicative of a strength of the force.
  • the exit points are used for connecting one interest point to another interest point.
  • the device further comprises a contour creator for creating at least one contour by connecting the at least two interest points by using at least one predefined pattern.
  • the at least one contour may be created from the identified interest points.
  • the device further comprises a contour extractor for extracting the at least one created contour, e.g. obtained by connecting the interest points.
  • the device further comprises an object extractor for extracting the at least one object from the at least one extracted contour.
  • the device for extracting objects of a multimedia content comprising multiple image segments may also comprise a processor and a memory, said memory containing instructions executable by said processor whereby said device is operative to: calculate a force for at least two pixels in the image segment, identify at least two interest points among the pixels in the image segment based on the calculated forces of pixels, create at least one contour from the identified interest points, extract at least one created contour obtained by connecting the interest points and extract at least one object from the at least one extracted contour.
  • a further aspect of the embodiments defines a computer program for extracting objects of a multimedia content.
  • the multimedia content comprises multiple image segments.
  • the computer program comprises a force calculator module for calculating a force reflecting an edge strength for at least two pixels in an image segment of the multiple image segments.
  • the computer program further comprises a module for identifying at least two interest points among the pixels in the image segment based on the calculated forces for the at least two pixels.
  • the computer program further comprises a contour creator module for creating at least one contour from the identified interest points, where a contour is created by connecting the at least two interest points by using at least one predefined pattern, selected from a number of predefined patterns.
  • the computer program further comprises a contour extractor module for extracting at least one created contour, e.g. obtained by connecting the interest points.
  • the computer program further comprises an object extractor module for extracting at least one object from the at least one extracted contour.
  • FIG. 1 illustrates functional components of a system wherein the embodiments can be implemented.
  • FIG. 2 illustrates a flowchart representing a sequence of actions from the moment a new multimedia content is accessed to the moment when the objects are extracted from the new content, according to an embodiment.
  • FIG. 3 illustrates an example of an image segment and its calculated force, according to an embodiment.
  • FIG. 4 illustrates how the interest points are identified, according to an embodiment.
  • FIG. 5 shows examples of the patterns that can be used for connecting the identified interest points, according to an embodiment.
  • FIG. 6 depicts a rule according to which the interest points whose difference in forces is above the threshold are not connected by any pattern, according to an embodiment.
  • FIG. 7 shows how the remaining unconnected pixels can be connected by following the directions of the strongest force, according to an embodiment.
  • FIG. 8 summarizes the steps of forming of contours from the identified interest points, according to an embodiment.
  • FIG. 8a shows an example of an image segment for which the contours are to be found, according to an embodiment.
  • FIG. 8b illustrates the straight lines that are extracted from the image segment in FIG. 8a, according to an embodiment.
  • FIG. 8c shows the extracted patterns after a search for semi-straight lines is performed, according to an embodiment.
  • FIG. 8d shows the extracted patterns when the arc-like structures are included as well, according to an embodiment.
  • FIG. 8e illustrates the extracted contours after the attempt to connect the remaining unconnected interest points by following the direction of the strongest force, according to an embodiment.
  • FIG. 9 illustrates an example of morphological operations of dilation and erosion for a rectangular structuring element, according to an embodiment.
  • FIG. 10a illustrates an extracted contour from an image segment, according to an embodiment.
  • FIG. 10b illustrates a result of flood filling performed on the contour in FIG. 10a, according to an embodiment.
  • FIG. 10c shows the difference between dilation and erosion of the flood-filled image segment from FIG. 10b, according to an embodiment.
  • FIG. 11 is a schematic block diagram illustrating a device for object extraction from multimedia content according to an embodiment.
  • FIG. 12 is a schematic block diagram further illustrating a device for object extraction from multimedia content according to an embodiment.
  • FIG. 13 is a schematic block diagram illustrating a computer comprising a computer program product with a computer program for object extraction from multimedia content according to an embodiment.
  • FIG. 1 illustrates functional components of a system in which the embodiments may be implemented.
  • A user submits search queries to a search and indexing server 101.
  • A search query 102 can contain a shape or a keyword, for example.
  • The search and indexing server 101 may be deployed for Internet search, e.g. online content 103 as depicted in FIG. 1, or for proprietary, personal, or special-purpose databases, such as personal multimedia archives, user generated content sites, proprietary data stores, workplace databases, and others (all represented by the "content database" 104 in FIG. 1).
  • Multimedia content typically contains several layers of information, the so-called modalities, e.g. objects, sound, spoken language, or text. Recently, objects have been recognized as an important modality in multimedia.
  • Object extraction is performed by the search and indexing server 101 and may be initiated as soon as a new multimedia content is uploaded or when the server 101 detects consumption of a new content.
  • The extraction of objects is usually done on the level of images (frames) or image segments of a multimedia content.
  • Object extraction is a step performed before identification, see below.
  • Object extraction deals with how to separate what can possibly be identified as objects from an image segment, in order to allow comparison of objects.
  • The extracted objects can, for example, be represented by a number of frequencies, including one or more frequencies, and further indexed and stored in the index table 105. This makes the objects searchable within an image segment or a frame of a multimedia content.
  • each extracted object may be associated with a time stamp.
  • FIG. 2 illustrates a flowchart representing a sequence of actions performed for object extraction, in accordance with embodiments herein.
  • Multimedia content is usually segmented into smaller parts that can be handled separately, i.e. image segments. For example, it is common to split a video into separate frames that are further processed independently. Alternatively, a video can be segmented into groups containing a number of adjacent frames with high temporal correlation. It is also possible to perform segmentation on the image level. In this application, the term image will be used to represent both image and image segment.
  • the object extraction is performed by the search and indexing server 101.
  • the method herein may be performed by other entities, such as a device, a computer etc., as described below.
  • Object extraction is performed on an image segment level, which results in a list of searchable items associated with a multimedia content.
  • For example, in the case of video, there may be a video_id, an extracted object and a time_stamp indicating when this object is detected in the video.
  • A time stamp may be a frame number as well.
  • In a first step, a force reflecting an edge strength is calculated for at least two pixels in an image segment of the multiple image segments.
  • To this end, the calculation of the rate of change, also referred to as force, of an image segment is performed. This may, for example, be done by gradient-based edge detection.
  • The rate of change F(θ) and the angle θ along which the image segment has this rate of change can be calculated as follows:

    F(θ) = sqrt( ( (Dx² + Dy²) + (Dx² − Dy²)·cos(2θ) + 2·Dxy·sin(2θ) ) / 2 )

    θ = (1/2) · arctan( 2·Dxy / (Dx² − Dy²) )

  • Dx², Dy² and Dxy are the tensor functions

    Dx² = (∂R/∂x)² + (∂G/∂x)² + (∂B/∂x)²
    Dy² = (∂R/∂y)² + (∂G/∂y)² + (∂B/∂y)²
    Dxy = (∂R/∂x)(∂R/∂y) + (∂G/∂x)(∂G/∂y) + (∂B/∂x)(∂B/∂y)

  • R, G and B are the red, green and blue image segment components respectively, and x and y are the image segment coordinates.
  • ∂R/∂x, ∂G/∂x, ∂B/∂x and ∂R/∂y, ∂G/∂y, ∂B/∂y are the gradients of the R, G and B image segment components in the horizontal (x) and vertical (y) directions, and r, g and b are the unitary vectors associated with the R, G and B components respectively:

    u = (∂R/∂x)·r + (∂G/∂x)·g + (∂B/∂x)·b
    v = (∂R/∂y)·r + (∂G/∂y)·g + (∂B/∂y)·b

    so that Dx² = u·u, Dy² = v·v and Dxy = u·v. The gradients are computed by convolving each image segment component with the matrices

    Gx = [ -3   0  +3 ]      Gy = [ -3 -10  -3 ]
         [ -10  0 +10 ]           [  0   0   0 ]
         [ -3   0  +3 ]           [ +3 +10  +3 ]
  • the matrices above define the so-called Scharr operator. However, this disclosure is by no means limited to the Scharr operator. Other examples of operators that could be used are: Sobel, Laplacian, Prewitt, Roberts etc.
  • The angle of the rate of change, θ, is calculated from the equations above and is subsequently used to calculate the actual rate of change (force).
  • A common approach is to store the force of an image segment in matrix form, referred to as a matrix of forces.
  • Each matrix element of the matrix of forces reflects the edge strength of the corresponding pixel.
  • FIG. 3 shows an example of an image segment and a calculated rate of change for all the pixels in that image segment. The left portion of FIG. 3 shows an image segment: a color photograph converted into black/white for proper reproducibility. The right portion of FIG. 3 shows the force calculated on the image segment to the left.
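As an illustration, the matrix-of-forces computation could be sketched as below with the 3x3 Scharr kernels on a single-channel image segment. The disclosure works on all three R, G and B components, and the function names here are assumptions, so this is a minimal sketch rather than the patented method itself.

```python
import math

# 3x3 Scharr kernels for the horizontal (x) and vertical (y) gradients
SCHARR_X = [[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]]
SCHARR_Y = [[-3, -10, -3], [0, 0, 0], [3, 10, 3]]

def apply_kernel(img, kernel):
    """Apply a 3x3 kernel to a 2D list of pixel values; borders stay 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
                            for ky in range(3) for kx in range(3))
    return out

def force_matrix(img):
    """Matrix of forces: per-pixel edge strength (gradient magnitude)."""
    gx = apply_kernel(img, SCHARR_X)
    gy = apply_kernel(img, SCHARR_Y)
    return [[math.hypot(gx[y][x], gy[y][x]) for x in range(len(img[0]))]
            for y in range(len(img))]
```

A flat image segment yields zero force everywhere, while a vertical step edge yields a strong force along the edge column; swapping in Sobel or Prewitt kernels only changes the two matrices.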
  • the search and indexing server 101 identifies at least two interest points among the pixels in the image segment based on the calculated forces for the at least two pixels, wherein each interest point is associated with a directionality and exit points, wherein the directionality gives an angle indicative of a strength of the force, and wherein the exit points are used for connecting one interest point to another interest point.
  • A number of interest points are identified from the matrix of forces described above.
  • A point is weaker in the direction of north if the forces of the three nearest points directly above the tested point are weaker than the forces of the three points consisting of the tested point and its two nearest neighbors to the left and right respectively.
  • The forces are compared point-wise, that is, the force of the tested point is compared to the force of the point on top of it, and so on.
  • The center point with a force value of 6 in FIG. 4 is weaker to the north in example (a), weaker to the west in example (c), weaker to both north and south in example (b) and weaker to both east and west in example (d).
  • A point that is weaker on two opposite sides is an interest point, where by opposite sides is meant north-south, east-west, northeast-southwest, northwest-southeast or, in mathematical terms, sides that are 180° apart.
  • Each interest point is characterized by a directionality, measured as the angle between the horizontal direction and the direction in which the interest point is weaker. For example, an interest point that is weaker in the directions of north and south has a directionality of 0°, an interest point that is weaker in the directions of north-west and south-east has a directionality of 45°, and an interest point that is weaker towards east and west has a directionality of 90°.
  • An interest point is further characterized by the so-called exit points.
  • An interest point with directionality of 0° has the exit points to the east and west.
  • An interest point with directionality of 45° has exit points to the north-east and south-west.
  • An interest point with 90° directionality has exits to the north and south.
  • The exit points are used for connecting one interest point to another interest point.
  • Each interest point is therefore described by its coordinates, force, directionality and exit points.
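The "weaker on two opposite sides" test can be sketched as follows for the 0° case (weaker to both north and south), following the point-wise comparison described for FIG. 4; the neighbourhood layout and function names are assumptions.

```python
def weaker_to_north(F, y, x):
    """Point-wise compare the three points directly above the tested
    point against the tested point and its left/right neighbours."""
    above = (F[y - 1][x - 1], F[y - 1][x], F[y - 1][x + 1])
    here = (F[y][x - 1], F[y][x], F[y][x + 1])
    return all(a < h for a, h in zip(above, here))

def weaker_to_south(F, y, x):
    below = (F[y + 1][x - 1], F[y + 1][x], F[y + 1][x + 1])
    here = (F[y][x - 1], F[y][x], F[y][x + 1])
    return all(b < h for b, h in zip(below, here))

def find_horizontal_interest_points(F):
    """Interest points with directionality 0° (weaker to north and
    south); their exit points would then be east and west."""
    pts = []
    for y in range(1, len(F) - 1):
        for x in range(1, len(F[0]) - 1):
            if weaker_to_north(F, y, x) and weaker_to_south(F, y, x):
                pts.append((y, x))
    return pts
```

Analogous checks for the other three opposite-side pairs would yield the 45°, 90° and 135° interest points.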
  • In a step S3, the search and indexing server 101 creates at least one contour by connecting the at least two interest points by using at least one predefined pattern, selected from a number of predefined patterns.
  • The interest points are connected in order to form a contour for an object to be extracted.
  • The interest points can be connected by using predefined patterns.
  • A predefined pattern is determined in advance of the execution of the method, i.e. it is not dynamically created when the method is performed. Some typical examples of patterns are depicted in FIG. 5. The patterns are marked with boxes. An alternative representation is to set the squares with boxes to a grey shade, which however is more difficult to reproduce.
  • The top left pattern in FIG. 5 (pattern 1) shows a straight horizontal line that consists of two interest points with a directionality of 0° and two non-interest points in between them.
  • A semi-straight line (pattern 2), depicted at the bottom left, consists of two interest points with directionality 0°, placed in two adjacent rows and separated by two pixels in the horizontal direction. These interest points are connected to two other points, one at the east exit of the first interest point and another at the west exit of the second interest point, to form a semi-straight line.
  • the same figure further shows various other examples of arc-like patterns and other common patterns. However, embodiments herein are by no means limited to these patterns only.
  • Pattern 1 from FIG. 5 can be rotated by 90° to obtain a vertical line.
  • Pattern 2 can be rotated and mirrored to produce a total of four semi-straight-line-like patterns.
  • Patterns 3, 5 and 6 each produce three more patterns of the same shape in different directions by rotation by 90°, whereas patterns 4 and 7 each produce additional seven patterns with different orientations. This gives a total of 34 pre-defined patterns to be tested when connecting the interest points, for this example.
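The rotation-and-mirroring bookkeeping can be sketched as below, with a pattern represented as a set of (row, col) cells inside a size x size box; this representation and the names are assumptions. Normalizing translates each variant to the origin so that duplicates are counted once.

```python
def normalize(pattern):
    """Translate a pattern so its bounding box starts at (0, 0)."""
    r0 = min(r for r, c in pattern)
    c0 = min(c for r, c in pattern)
    return frozenset((r - r0, c - c0) for r, c in pattern)

def rotate90(pattern, size):
    return frozenset((c, size - 1 - r) for r, c in pattern)

def mirror(pattern, size):
    return frozenset((r, size - 1 - c) for r, c in pattern)

def variants(pattern, size):
    """All distinct rotated and mirrored versions of a pattern."""
    seen, p = set(), frozenset(pattern)
    for _ in range(4):
        seen.add(normalize(p))
        seen.add(normalize(mirror(p, size)))
        p = rotate90(p, size)
    return seen

# Pattern 1 (straight line): 2 variants (horizontal, vertical).
line = {(0, 0), (0, 1), (0, 2), (0, 3)}
# Pattern 2 (semi-straight line): 4 variants, matching the text.
semi = {(0, 0), (0, 1), (1, 2), (1, 3)}
```

Summing the variant counts quoted in the text (2 + 4 + 3x4 + 2x8 + 2x4) reproduces the 34 pre-defined patterns of the example.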
  • The interest points are now connected by the pre-defined patterns. Some of these patterns may be considered more important than others, depending on the application.
  • The search for patterns can be performed for every interest point, in a priority order that can be set.
  • The priority order may correspond to the number of each pattern depicted in FIG. 5, where for each pattern we test its rotated and mirrored versions as well.
  • First, each interest point is tested against pattern 1 (the horizontal straight line), provided of course that the interest point has the proper directionality of 0°. The same steps are then repeated for each interest point with the vertical line, and so on.
  • Some additional rules may be imposed for connecting the interest points.
  • The connections may only start at an exit of an interest point and end at the exit of another interest point.
  • An interest point may not be connected to another interest point if the difference of their forces is above a certain threshold. This is to prevent connecting interest points that, for example, belong to different objects.
  • FIG. 6 illustrates this scenario.
  • Points A and B, each having a force of 50, with forces of 40 for the in-between pixels, are connected by a straight line (pattern 1).
  • The difference in forces between points C and D, as well as between point C and the in-between points, is considered too large for the points to be connected, despite the fact that these points could be connected by a straight line (pattern 1).
  • The information for every connected interest point is updated with the formed connections.
  • The updated information can be, for example, that the interest point with coordinates (i,j) is connected to the interest point with coordinates (m,n) by a semi-straight line (pattern 2).
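A minimal sketch of the force-difference rule illustrated in FIG. 6; only the A-B forces (50 and 40) come from the text, while the threshold value and the C-D forces are assumptions.

```python
def can_connect(forces_along_line, threshold=15):
    """True if a pattern may connect two interest points: the forces of
    the two interest points and the in-between pixels must not differ
    by more than the threshold (an assumed value here)."""
    return max(forces_along_line) - min(forces_along_line) <= threshold
```

With this rule, A-B (forces 50, 40, 40, 50, difference 10) is connected, whereas a hypothetical C-D line with a large force drop in between is not.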
  • The remaining unconnected interest points may then be connected by following the directions of the strongest force. An example illustrating this step is depicted in FIG. 7.
  • Points A and B are initially unconnected. Point A has exits to the east and west, whereas point B has exits to the north-east and south-west. By following the directions of the strongest force for these two points, one can find the path marked with framing as shown in FIG. 7.
  • For each non-interest point, we may set a counter to check how many times it is used in creating connections. If it is used more than once, we may set it as an interest point.
  • FIG. 8 summarizes the step S3 described above.
  • FIG. 8(a) is the original image segment for which the contours are to be found.
  • the original color photograph has been converted to a black/white image segment for proper reproducibility.
  • FIG. 8(b) shows the straight lines (pattern 1) that are extracted first, whereas FIG. 8(c) shows the extracted patterns after a search for semi-straight lines is performed.
  • FIG. 8(d) shows the extracted patterns when the arc-like structures are included as well.
  • FIG. 8(e) shows the extracted contours after the attempt to connect the remaining unconnected interest points by following the direction of the strongest force.
  • The contour(s) found following the steps described above may be filtered to remove noise. This can be done by, but is not limited to, morphological filtering. This step can be performed even prior to connecting the remaining unconnected points by following the strongest force.
  • In a step S4, the search and indexing server 101 extracts the at least one created contour.
  • The found contour(s) are finally extracted from the image segment.
  • This is initiated by a search for the connected interest points according to some scanning order (for example, a line-by-line search starting from the top left corner of the image segment).
  • The first encountered connected interest point is assigned to the first contour to be extracted. Further on, all the points having a connection to this interest point are added to this contour, then all of their connected points are added, and so on.
  • The search then continues to find the next non-extracted interest point, until all of the interest points are exhausted.
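The extraction in step S4 amounts to grouping connected interest points into contours; a sketch using a breadth-first traversal, where the connection map and its layout are assumptions, with sorted keys standing in for the line-by-line scan order:

```python
from collections import deque

def extract_contours(connections):
    """Group connected interest points into contours.

    connections maps each interest point (row, col) to the points it
    was connected to during the pattern search."""
    seen = set()
    contours = []
    for start in sorted(connections):   # scan order, top-left first
        if start in seen:
            continue
        contour, queue = [], deque([start])
        seen.add(start)
        while queue:
            p = queue.popleft()
            contour.append(p)
            for q in connections.get(p, ()):
                if q not in seen:
                    seen.add(q)
                    queue.append(q)
        contours.append(contour)
    return contours
```

Each call returns one list of points per connected group, i.e. one per extracted contour.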
  • In a final step, the search and indexing server 101 extracts at least one object from the at least one extracted contour.
  • An object is extracted from the extracted contours.
  • Each contour can be considered as a black and white image segment where the white pixels are the contour pixels and the black pixels are the non-contour pixels.
  • Each extracted object should have a shape defined by the outer contour of the object. This shape is obtained with, for example, morphological filtering, which is usually performed on binary image segments, i.e. image segments containing only two colors, black and white. However, extensions to gray-scale image segments are available as well.
  • In morphological filtering, the two most basic methods are dilation and erosion. Both methods use a structuring object to define their operation.
  • The structuring object can have different effects depending on its shape.
  • One such object, a rectangle of size 2x1 pixels, is depicted in FIG. 9.
  • In dilation, this object offsets each white pixel outwards (widening), with the same shape as the structuring object.
  • In erosion, the same operation is done inwards (thinning) instead; see FIG. 9.
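Binary dilation and erosion can be sketched as below, with the structuring object given as (row, col) offsets; the 2x1 rectangle mirrors the example of FIG. 9, and the names are assumptions.

```python
def dilate(img, se):
    """Set every pixel covered by the structuring element placed on
    each white pixel (widening)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                for dy, dx in se:
                    if 0 <= y + dy < h and 0 <= x + dx < w:
                        out[y + dy][x + dx] = 1
    return out

def erode(img, se):
    """Keep a pixel white only if the whole structuring element fits
    inside the white region at that position (thinning)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                   for dy, dx in se):
                out[y][x] = 1
    return out

SE_2x1 = [(0, 0), (1, 0)]  # a 2x1 rectangular structuring element
```

A single white pixel grows to two pixels under dilation with this element, and disappears entirely under erosion, which is the widening/thinning contrast described above.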
  • A useful technique for morphological operations is the so-called flood-fill. If the contour (white pixels) is closed, then from a given start point the technique fills all the black pixels that are connected into a third set (for example, a gray shade). At this point, all the black pixels that remain in the image segment are the inner parts of the closed contour. By converting the black pixels to white pixels and then the grey pixels back to black, the contour is filled without any holes.
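The flood-fill step just described can be sketched as follows on a binary image segment (0 = black, 1 = white contour); the start point is assumed to lie outside the contour, and the value 2 plays the role of the grey third set:

```python
from collections import deque

def fill_closed_contour(img, start=(0, 0)):
    """Fill the interior of a closed contour via flood-fill from an
    outside start point."""
    h, w = len(img), len(img[0])
    grid = [row[:] for row in img]
    queue = deque([start])
    while queue:
        y, x = queue.popleft()
        if 0 <= y < h and 0 <= x < w and grid[y][x] == 0:
            grid[y][x] = 2          # grey: reachable, i.e. outside
            queue.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    # remaining black pixels are the interior: black -> white, grey -> black
    return [[1 if v in (0, 1) else 0 for v in row] for row in grid]
```

After the pass, the contour and its interior are white and everything outside is black, matching the description of FIG. 10(b).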
  • An example of this operation can be seen in FIG. 10, where (a) shows the original shape, (b) the flood-filled shape of (a), and finally (c) the difference between dilation and erosion of the flood-filled image (b).
  • The extracted objects can be stored as image segments.
  • The image segment size of each object is object_height x object_width.
  • Each object may be stored and associated with the video_id and a time_stamp/frame. This may be realized as a hyperlink to a certain position in the video where the extracted object occurs.
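A sketch of how such a stored object record could look; the class and field names, and the hyperlink format, are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ExtractedObject:
    video_id: str
    frame: int      # a time stamp may also be a frame number
    pixels: list    # object image segment, object_height x object_width

    def hyperlink(self) -> str:
        """A link to the position in the video where the object occurs
        (hypothetical URI scheme)."""
        return f"video://{self.video_id}?frame={self.frame}"
```

Indexing such records by video_id and frame is what makes the extracted objects searchable within a multimedia content.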
  • FIG. 11 is a schematic block diagram of a device 100 for object extraction according to an embodiment.
  • the device 100 is configured to calculate a force reflecting an edge strength for at least two pixels in an image segment of the multiple image segments, identify at least two interest points among the pixels in the image segment based on the calculated forces for the at least two pixels, wherein each interest point is associated with a directionality and exit points, wherein the directionality gives an angle indicative of a strength of the force, and wherein the exit points are used for connecting one interest point to another interest point.
  • the device 100 is configured to create at least one contour by connecting the at least two interest points by using at least one predefined pattern, selected from a number of predefined patterns, to extract the at least one created contour, and to extract at least one object from the at least one extracted contour.
  • The device may comprise one or more of the following modules, implemented as hardware or software modules.
  • The device 100 comprises a force calculator 110, e.g. a force calculating module, configured to calculate a force reflecting an edge strength for the at least two pixels in an image segment of the multiple image segments. This means that the force calculator 110 calculates forces for pixels in an image or image segment.
  • An identifier 120 of interest points, e.g. an identifying module, is configured to identify at least two interest points among the pixels in the image segment based on the calculated forces for the at least two pixels. This means that the identifier 120 is configured to select, or identify, the interest points based on the calculated forces.
  • A contour creator 130, e.g. a contour creating module, is configured to create at least one contour by connecting the at least two interest points by using at least one predefined pattern. This means that the contour creator 130 performs forming, or creation, of contours by connecting the identified interest points by some predefined patterns.
  • A contour extractor 140, e.g. a contour extracting module, of the device 100 is configured to extract the at least one created contour, e.g. obtained by connecting the interest points.
  • An object extractor 150, e.g. an object extracting module, is configured to extract the at least one object from the at least one extracted contour. Expressed somewhat differently, the object extractor 150 is configured to extract the objects from the extracted contours.
  • FIG. 12 is a schematic block diagram of a device 100 for object extraction according to another embodiment of the device 100.
  • the device 100 comprises a processor 160 and a memory 170, said memory containing instructions 180 executable by said processor whereby said device is operative to: calculate a force for at least two pixels in the image segment, identify at least two interest points among the pixels in the image segment based on the calculated forces of pixels, create at least one contour from the identified interest points, extract at least one created contour obtained by connecting the interest points and extract at least one object from the at least one extracted contour.
  • the device 100 may be implemented in hardware, in software or a combination of hardware and software.
  • the device may be implemented in, e.g. comprised in, user equipment, such as a mobile telephone, tablet, desktop, netbook, multimedia player, video streaming server, set-top box or computer.
  • FIG. 13 schematically illustrates an embodiment of a computer 200 having a processing unit 201, such as a DSP (Digital Signal Processor) or CPU (Central Processing Unit).
  • the processing unit 201 may be a single unit or a plurality of units for performing different steps of the method described herein.
  • the computer also comprises an input/output (I/O) unit 202 for receiving the image segments and for outputting the extracted objects from the input image segments.
  • I/O unit 202 has been illustrated as a single unit in FIG. 13 but can likewise be in the form of a separate input unit and a separate output unit.
  • the computer comprises at least one computer program product 203 in the form of a non-volatile memory, for instance an EEPROM (Electrically Erasable Programmable Read-Only Memory), a flash memory or a disk drive.
  • the computer program product 203 comprises a computer program 204, which comprises code means which, when run on the computer, such as by the processing unit 201, causes the computer 200 to perform the steps of the method described in the foregoing.
  • the code means in the computer program 204 comprises a force calculator module 210 for calculating forces in an image segment, an identifier of interest points module 220 for identifying the interest points based on the calculated forces, a contour creator module 230 for creating contours by connecting the identified interest points using predefined patterns, a contour extractor module 240 for extracting the contours obtained by connecting the interest points, and an object extractor module 250 for extracting the objects from the extracted contours.
  • These modules essentially perform the steps of the flow diagram in FIG. 2 when run on, or executed by, the processing unit.
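The module chain described above (force calculator 210, interest-point identifier 220, contour creator 230, contour extractor 240, object extractor 250) can be sketched in Python. The sketch below is illustrative only: the publication excerpt here does not disclose the exact force formula or the predefined exit-point patterns, so the gradient-magnitude force, the fixed threshold, and the nearest-neighbour chaining are stand-in assumptions, not the patented method.

```python
import math

def calculate_forces(segment):
    """Force calculator (cf. module 210): approximate each pixel's
    'force' as its gradient magnitude (an edge-strength measure) and
    its 'directionality' as the gradient angle, via central differences."""
    h, w = len(segment), len(segment[0])
    forces = {}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = segment[y][x + 1] - segment[y][x - 1]
            gy = segment[y + 1][x] - segment[y - 1][x]
            forces[(x, y)] = (math.hypot(gx, gy), math.atan2(gy, gx))
    return forces

def identify_interest_points(forces, threshold):
    """Interest-point identifier (cf. module 220): keep pixels whose
    force exceeds a threshold (the threshold value is an assumption)."""
    return [p for p, (magnitude, _) in forces.items() if magnitude > threshold]

def create_contour(points):
    """Contour creator (cf. module 230): chain each interest point to
    its nearest unvisited neighbour, as a stand-in for the undisclosed
    predefined exit-point patterns."""
    if not points:
        return []
    contour, remaining = [points[0]], points[1:]
    while remaining:
        last = contour[-1]
        nearest = min(remaining,
                      key=lambda p: (p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2)
        contour.append(nearest)
        remaining.remove(nearest)
    return contour

def extract_object(contour):
    """Contour/object extractor (cf. modules 240 and 250): here the
    bounding box of the extracted contour stands in for the object."""
    xs = [x for x, _ in contour]
    ys = [y for _, y in contour]
    return (min(xs), min(ys), max(xs), max(ys))
```

Running the chain on a segment containing a bright square on a dark background yields interest points along the square's edges and a bounding box around it.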

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A method and a device (100) for extracting objects from multimedia content are disclosed. The multimedia content comprises multiple image segments, and each image segment comprises multiple pixels. The device calculates (S1) a force reflecting an edge strength for at least two pixels in an image segment of the multiple image segments. The device (100) identifies (S2) at least two interest points among the pixels in the image segment based on the calculated forces for the at least two pixels. Each interest point is associated with a directionality and with exit points. The directionality gives an angle indicative of a strength of the force. The exit points are used for connecting one interest point to another interest point. The device (100) creates (S3) at least one contour by connecting the at least two interest points using at least one predefined pattern, selected from a number of predefined patterns. The device (100) extracts (S4) the at least one created contour. Furthermore, the device (100) extracts (S5) at least one object from the at least one extracted contour. A corresponding computer program is also disclosed.
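The directionality and exit points associated with each interest point are not given a concrete construction in this publication text. One plausible reading, shown below purely as an assumption, takes the directionality to be the gradient angle at the interest point, and the exit points to be the two 8-connected neighbour pixels lying along the edge (perpendicular to the gradient), through which a contour leaves the point toward the next interest point.

```python
import math

def exit_points(x, y, directionality):
    """Hypothetical exit points for an interest point at (x, y).

    'directionality' is taken to be the gradient angle in radians.
    The edge itself runs perpendicular to the gradient, so the contour
    is assumed to leave through the two 8-connected neighbours along
    that perpendicular. This construction is an assumption, not the
    patented one.
    """
    edge_angle = directionality + math.pi / 2
    # Quantize the edge direction to one of the eight neighbour offsets.
    step = round(edge_angle / (math.pi / 4)) % 8
    offsets = [(1, 0), (1, 1), (0, 1), (-1, 1),
               (-1, 0), (-1, -1), (0, -1), (1, -1)]
    dx, dy = offsets[step]
    # The contour may exit on either side of the interest point.
    return [(x + dx, y + dy), (x - dx, y - dy)]
```

For a vertical edge (horizontal gradient, directionality 0), the exit points under this assumption are the two vertically adjacent pixels, through which the contour continues along the edge.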
PCT/EP2015/057968 2014-04-25 2015-04-13 Method, device, user equipment and computer program for extracting objects from multimedia content WO2015162027A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461984155P 2014-04-25 2014-04-25
US61/984,155 2014-04-25

Publications (2)

Publication Number Publication Date
WO2015162027A2 true WO2015162027A2 (fr) 2015-10-29
WO2015162027A3 WO2015162027A3 (fr) 2016-10-06

Family

ID=53016585

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/EP2015/057968 2014-04-25 2015-04-13 Method, device, user equipment and computer program for extracting objects from multimedia content WO2015162027A2 (fr)
PCT/EP2015/057970 2014-04-25 2015-04-13 Method, device, user equipment and computer program for object extraction from multimedia content WO2015162028A2 (fr)

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/057970 2014-04-25 2015-04-13 Method, device, user equipment and computer program for object extraction from multimedia content WO2015162028A2 (fr)

Country Status (1)

Country Link
WO (2) WO2015162027A2 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112100442B (zh) * 2020-11-13 2021-02-26 User tendency identification method, apparatus, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None

Also Published As

Publication number Publication date
WO2015162028A2 (fr) 2015-10-29
WO2015162027A3 (fr) 2016-10-06

Similar Documents

Publication Publication Date Title
US9865063B2 (en) Method and system for image feature extraction
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
JP7490141B2 (ja) 画像検出方法、モデルトレーニング方法、画像検出装置、トレーニング装置、機器及びプログラム
US20200356818A1 (en) Logo detection
TWI395145B (zh) 手勢辨識系統及其方法
WO2019071976A1 (fr) Procédé de détection de relief dans une image panoramique, reposant sur une fusion de régions et sur un modèle de mouvement des yeux
CN111008935B (zh) 一种人脸图像增强方法、装置、系统及存储介质
CN111626163B (zh) 一种人脸活体检测方法、装置及计算机设备
CN103353881B (zh) 一种应用程序搜索方法及装置
CN110852311A (zh) 一种三维人手关键点定位方法及装置
Kalia et al. An analysis of the effect of different image preprocessing techniques on the performance of SURF: Speeded Up Robust Features
CN109033935B (zh) 抬头纹检测方法及装置
CN112101386B (zh) 文本检测方法、装置、计算机设备和存储介质
CN112651953A (zh) 图片相似度计算方法、装置、计算机设备及存储介质
Zeeshan et al. A newly developed ground truth dataset for visual saliency in videos
CN103955713B (zh) 一种图标识别方法和装置
CN108647605B (zh) 一种结合全局颜色与局部结构特征的人眼凝视点提取方法
CN113570615A (zh) 一种基于深度学习的图像处理方法、电子设备及存储介质
CN112348008A (zh) 证件信息的识别方法、装置、终端设备及存储介质
CN106997580B (zh) 图片处理方法和装置
CN109857897B (zh) 一种商标图像检索方法、装置、计算机设备及存储介质
CN115018886B (zh) 运动轨迹识别方法、装置、设备及介质
CN113228105A (zh) 一种图像处理方法、装置和电子设备
WO2015162027A2 (fr) Procédé, dispositif, équipement utilisateur et programme informatique permettant d'extraire des objets d'un contenu multimédia
KR102444172B1 (ko) 영상 빅 데이터의 지능적 마이닝 방법과 처리 시스템

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15719422

Country of ref document: EP

Kind code of ref document: A2

122 Ep: pct application non-entry in european phase

Ref document number: 15719422

Country of ref document: EP

Kind code of ref document: A2