WO2020174447A1 - Multi-label placement for augmented and virtual reality and video annotations - Google Patents

Multi-label placement for augmented and virtual reality and video annotations

Info

Publication number
WO2020174447A1
Authority
WO
WIPO (PCT)
Prior art keywords
label
video
placement
optimum location
pairs
Application number
PCT/IB2020/051706
Other languages
French (fr)
Inventor
Ramya Sugnana Murthy Hebbalaguppe
Srinidhi Hegde
Jitender Kumar Maurya
Original Assignee
Tata Consultancy Services Limited
Application filed by Tata Consultancy Services Limited
Publication of WO2020174447A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation


Abstract

Typically, when labels are randomly fused into a video, they occlude the main subjects in the video frames. Further, random placement of labels corresponding to multiple objects in a frame may confuse the user, who may struggle to identify the label corresponding to each object. Disclosed herein are a method and system for identifying an optimum location for label placement in a video. For a given video, the system generates a plurality of object-label pairs and a saliency map. The object-label pairs and the saliency map are processed by the system to identify the optimum location for placing each label such that, at the optimum location, conditions related to occlusion, closeness to the object, intersection between connectors, and the diagonal heuristic and central bias are satisfied.

Description

MULTI-LABEL PLACEMENT FOR AUGMENTED AND VIRTUAL REALITY AND VIDEO ANNOTATIONS
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
[001] The present PCT application claims priority to Indian Patent Application No. 201921007962, filed before the Indian Patent Office on February 28, 2019. The entire contents of the aforementioned application are incorporated herein by reference.
TECHNICAL FIELD
[002] The disclosure herein generally relates to video processing, and, more particularly, to a method and system for finding optimum location for label placement in a video.
BACKGROUND
[003] In various applications, such as but not limited to augmented/virtual reality, objects in a video are labelled for the benefit of users. This helps the users understand what or who each object is, along with any additional information. In augmented reality based applications, the fusion of contextual synthetic data with the visual data (video) enriches the perception and efficiency of a user who is performing a task. Contextual data (for example, labels, coordinates, and so on) inserted into the video are called overlays.
[004] The size and shape of such overlays may vary from one overlay to another. When such overlays are fused with the visual data, they may occlude actual objects in the video. In addition, consider a scenario in which multiple objects are present in a particular frame of the video. If the labels corresponding to all the objects are placed randomly, the user may find it difficult to match each label with its object.
SUMMARY
[005] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a processor implemented method for label placement in a video is provided. In this method, the video is collected as input by one or more hardware processors. Further, a plurality of object-label pairs are generated for the video by the one or more hardware processors. Then a saliency map is generated for the video by the one or more hardware processors, wherein the saliency map indicates saliency of a plurality of regions in each frame of the video. Further, an optimum location for placing the label of each of the object-label pairs is detected. In the process of detecting the optimum location, a Label Occlusion over Saliency (LOS) score for each of the objects is calculated, and then each of the plurality of object-label pairs is ranked based on the corresponding LOS score. Further, the optimum location is detected such that at the optimum location (i) occlusion caused by placement of the label is minimum in comparison with the occlusion caused by the label when placed at any other location, (ii) the label is closer to the corresponding object, (iii) there is no intersection between connector lines of the plurality of object-label pairs, and (iv) the label placement satisfies conditions set in terms of diagonal heuristic and central bias.
[006] In another aspect, a system for label placement in a video is provided. The system includes a memory module storing a plurality of instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory module via the one or more communication interfaces. The one or more hardware processors are caused by the plurality of instructions to collect the video as input. Further, a plurality of object-label pairs are generated for the video. Then a saliency map is generated for the video, wherein the saliency map indicates saliency of a plurality of regions in each frame of the video. Further, an optimum location for placing the label of each of the object-label pairs is detected. In the process of detecting the optimum location, a Label Occlusion over Saliency (LOS) score for each of the objects is calculated, and then each of the plurality of object-label pairs is ranked based on the corresponding LOS score. Further, the optimum location is detected such that at the optimum location (i) occlusion caused by placement of the label is minimum in comparison with the occlusion caused by the label when placed at any other location, (ii) the label is closer to the corresponding object, (iii) there is no intersection between connector lines of the plurality of object-label pairs, and (iv) the label placement satisfies conditions set in terms of diagonal heuristic and central bias.
[007] In yet another aspect, a non-transitory computer readable medium for label placement in a video is provided. The non-transitory computer readable medium executes the following method to identify an optimum location for label placement. In this method, the video is collected as input by one or more hardware processors. Further, a plurality of object-label pairs are generated for the video by the one or more hardware processors. Then a saliency map is generated for the video by the one or more hardware processors, wherein the saliency map indicates saliency of a plurality of regions in each frame of the video. Further, an optimum location for placing the label of each of the object-label pairs is detected. In the process of detecting the optimum location, a Label Occlusion over Saliency (LOS) score for each of the objects is calculated, and then each of the plurality of object-label pairs is ranked based on the corresponding LOS score. Further, the optimum location is detected such that at the optimum location (i) occlusion caused by placement of the label is minimum in comparison with the occlusion caused by the label when placed at any other location, (ii) the label is closer to the corresponding object, (iii) there is no intersection between connector lines of the plurality of object-label pairs, and (iv) the label placement satisfies conditions set in terms of diagonal heuristic and central bias.
[008] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[009] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[010] FIG. 1 illustrates an exemplary block diagram of a system for determining optimum location for label placement, according to some embodiments of the present disclosure.
[011] FIG. 2 is a flow diagram depicting steps involved in the process of determining optimum location for label placement, using the system of FIG. 1, according to some embodiments of the present disclosure.
[012] FIG. 3 is an example diagram depicting data and data flow in the process of determining optimum location for label placement being performed using the system of FIG. 1, according to some embodiments of the present disclosure.
[013] FIG. 4 (a through e) are example diagrams depicting different properties considered for diagonal heuristic and the central bias, according to some embodiments of the present disclosure.
[014] FIG. 5 (a through e) are example diagrams depicting steps involved in the process of determining optimum location for label placement, using the system of FIG. 1, according to some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
[015] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
[016] Referring now to the drawings, and more particularly to FIG. 1 through FIG. 5, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[017] FIG. 1 illustrates an exemplary block diagram of a system for determining optimum location for label placement, according to some embodiments of the present disclosure. The system 100 includes at least one memory module 101, at least one hardware processor 102, and at least one communication interface 103.
[018] The one or more hardware processors 102 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, graphics controllers, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the hardware processor(s) 102 are configured to fetch and execute computer-readable instructions stored in the memory module 101, which causes the hardware processor(s) 102 to perform actions depicted in FIG. 2 for the purpose of identifying the optimum location for label placement. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
[019] The communication interface(s) 103 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the communication interface(s) 103 can include one or more ports for connecting a number of devices to one another or to another server.
[020] The memory module(s) 101 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, one or more modules (not shown) of the system 100 can be stored in the memory module 101. The memory module(s) 101 stores a plurality of instructions which, when executed, cause the one or more hardware processors 102 to perform one or more actions corresponding to the identification of the optimum location for label placement handled by the system 100.
[021] The system 100 collects a video for processing. In an embodiment, the system 100 may collect and process more than one video at a time. The video may be an RGB video V = <f1, f2, ..., fn> with a frame sequence of length n, each frame having dimensions Fw × Fh.
[022] The system 100 then uses/executes any suitable mechanism/technique to process each frame of the video, one frame at a time or multiple frames at a time, to identify one or more objects in each frame. In an embodiment, all the identified objects are labelled by the system 100. In another embodiment, out of a plurality of objects identified, at least one object is selected as an Object of Interest and then only the selected at least one object of interest is labelled. The system 100 uses appropriate mechanism(s) to generate at least one label for each object. For example, YOLOv2 mechanism may be used by the system 100 for identifying and labeling the objects. The objects and the corresponding labels are used by the system 100 to generate a plurality of object-label pairs corresponding to the video being processed.
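The disclosure does not fix a particular detection interface, so the following is a minimal sketch of the object-label pair generation step. The `detector.detect(frame)` call yielding `(bbox, class_name, score)` tuples and the confidence threshold are illustrative assumptions; a YOLOv2-style detector wrapper could be adapted to this shape.

```python
# Sketch of object-label pair generation; the detector interface and
# threshold below are assumptions, not details from the disclosure.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObjectLabelPair:
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) bounding box of the object
    label: str                       # label text to be overlaid

def generate_object_label_pairs(frame, detector, conf_threshold=0.5) -> List[ObjectLabelPair]:
    """Pair each confident detection in a frame with its class label."""
    pairs = []
    for bbox, class_name, score in detector.detect(frame):  # hypothetical interface
        if score >= conf_threshold:  # keep confident detections only
            pairs.append(ObjectLabelPair(bbox=bbox, label=class_name))
    return pairs
```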
[023] The system 100 then uses a Saliency Attention Model (SAM) for generating a saliency map corresponding to the video being processed. SAM predicts saliency of regions in each frame being processed, and this information is captured in the saliency map. In addition to the saliency information, the saliency map may also include data pertaining to identified eye fixation points of the user on each frame of the video.
[024] The object-label pairs and the saliency map(s) are then processed further by the system 100 to identify the optimum location (represented in terms of coordinates of the location) for placing each of the generated labels. At this stage, the system 100 considers each object sequentially, in decreasing order of saliency occlusion (as indicated in the saliency map). Every time an overlay (which may be the label or any other type of overlay) is placed, the corresponding region (i.e., the region occupied by the overlay) is marked as a highly salient region, which in turn indicates that this region is not suitable for placing another overlay.
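A minimal sketch of the region-marking rule just described, assuming the saliency map is a NumPy array normalized to [0, 1] (indexed row-major) and bounding boxes are `(x, y, w, h)` tuples; both conventions are assumptions for illustration:

```python
import numpy as np

def mark_region_salient(saliency_map: np.ndarray, bbox) -> None:
    """Mark the region occupied by a placed overlay as maximally salient,
    so that subsequent overlays treat it as unavailable."""
    x, y, w, h = bbox
    saliency_map[y:y + h, x:x + w] = 1.0  # 1.0 = highest saliency
```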
[025] The system 100 then calculates a Label Occlusion over Saliency (LOS) score for the bounding box of each object being considered, wherein the LOS score of an object represents the saliency occlusion caused by the object and its corresponding label. The LOS score is calculated as:

$$LOS = \frac{1}{|N|} \sum_{(x, y) \in N} G(x, y)$$

where N is the set of pixels (x, y) occluded by the overlay and G is a ground truth saliency map. The LOS score ranges from 0 to 1, where a score of 0 represents no occlusion of any salient region and a score of 1 represents complete overlap with a highly salient region.
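Under the same assumed conventions (G normalized to [0, 1], `(x, y, w, h)` boxes), the LOS score is the mean ground-truth saliency over the occluded pixel set N; a sketch:

```python
import numpy as np

def los_score(G: np.ndarray, bbox) -> float:
    """LOS score: mean ground-truth saliency over the pixel set N occluded
    by the overlay; 0 means no salient pixel is occluded, 1 means complete
    overlap with a maximally salient region."""
    x, y, w, h = bbox
    region = G[y:y + h, x:x + w]  # the occluded pixel set N
    return float(region.mean()) if region.size else 0.0
```

Object-label pairs can then be ranked by sorting on this score in decreasing order, matching the sequential placement order described above.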
[026] The system 100 places labels such that it avoids placing them on the objects and on previously placed labels. In addition to minimizing occlusion, the system 100 requires the optimum location (and the coordinates) to satisfy three other conditions, namely: 1. closeness of the label to the corresponding object; 2. no/minimal intersection between connector lines of the plurality of object-label pairs; and 3. conditions set in terms of the diagonal heuristic and central bias.
[027] The system 100 checks and verifies the conditions in terms of the closeness of the labels to objects and the intersection between the connector lines using Voronoi partitioning of each frame being processed. The system 100 performs Voronoi partitioning of each of the frames, keeping the centroids of the bounding boxes of the objects as seed points. The Voronoi partitioning divides each frame into a plurality of regions such that each object in the frame is encompassed in a corresponding region. By keeping the centroid of each bounding box as the seed point for the corresponding region, the system 100 is able to ensure that the top left corner of a label is placed close to the corresponding object.
[028] The system 100 further uses the Voronoi partitioning to ensure minimal/no intersection between the connector lines. Connector lines are the lead lines that connect an object to its corresponding label. As the system 100 uses the Voronoi partitioning data, the start and end points of each connector line may be selected such that they remain within the region of that object. As a result of this approach, the Euclidean distance between the top left corner of the label and the centroid of the bounding box of the corresponding object is minimized. As each object lies within a separate region of the Voronoi partitioning, this approach ensures that the connector lines do not intersect, which in turn improves user experience. Given below is a proof that intersection between the connector lines is eliminated by this approach:
[029] Let r1 and r2 be the object bounding-box centroids, which are also the seed points of the respective Voronoi partitions V1 and V2. Consider two distinct connectors, C(r1, r'1) between endpoints r1 and r'1, and C(r2, r'2) between endpoints r2 and r'2. Voronoi partitions are convex polygons; from the definition of convexity, every point on a line segment whose endpoints lie in a Voronoi region also lies in that region, i.e., if r lies on the line segment C(r1, r'1), then it also lies within V1. Assume that C(r1, r'1) and C(r2, r'2) intersect at a point x, which implies that x ∈ V1 ∩ V2. For a strict Voronoi partition, V1 ∩ V2 = ∅ unless V1 and V2 coincide, in which case the two connectors would be the same. This contradicts the assumption that C(r1, r'1) and C(r2, r'2) are distinct, and thus C(r1, r'1) and C(r2, r'2) never intersect.
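The cell-membership test that this argument relies on is simply nearest-seed assignment, so it can be checked without constructing the Voronoi polygons explicitly. A sketch, assuming SciPy is available and that seeds are the bounding-box centroids (function names are illustrative):

```python
import numpy as np
from scipy.spatial import distance

def nearest_seed(points: np.ndarray, seeds: np.ndarray) -> np.ndarray:
    """Voronoi-cell membership: each point belongs to the cell of its
    nearest seed (here, an object's bounding-box centroid)."""
    return np.argmin(distance.cdist(points, seeds), axis=1)

def label_in_own_cell(label_top_left, obj_idx: int, seeds) -> bool:
    """Check that a candidate label's top-left corner lies in the same
    Voronoi cell as its own object's centroid, which by the convexity
    argument above keeps its connector line inside that cell."""
    pts = np.asarray([label_top_left], dtype=float)
    return int(nearest_seed(pts, np.asarray(seeds, dtype=float))[0]) == obj_idx
```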
[030] The system 100 further ensures that the optimum location satisfies the conditions set in terms of the diagonal heuristic and the central bias. Studies have indicated that placing labels on diagonal angle bisectors improves user experience, and that eye-fixation points tend to cluster towards the centre of the screen, a property of human vision termed the 'central bias'. These properties are depicted in FIG. 4.
[031] The system 100 outputs the optimum location (and corresponding coordinates in the frames), such that label placement at these coordinates satisfies the aforementioned conditions and in turn improves user experience. The system 100 follows the aforementioned approach to place multiple labels (multi-label placement) within a video, as part of annotating the video (or the objects in the video), in applications such as but not limited to augmented reality/virtual reality.
[032] FIG. 2 is a flow diagram depicting steps involved in the process of determining optimum location for label placement, using the system of FIG. 1, according to some embodiments of the present disclosure. The system 100 collects a video for processing and, by processing the collected video, generates (202) a plurality of object-label pairs corresponding to each frame in the video. The system 100 then generates (204) at least one saliency map for the video, wherein the saliency map indicates saliency of regions in each frame being considered.
[033] The system 100 then calculates (206) the LOS score for each object being considered, and each object is then ranked (208) based on the corresponding LOS score. The system 100 then determines (210) the optimum location for placing each label (corresponding to each object) such that, at the optimum location, (i) occlusion caused by placement of the label is minimum in comparison with the occlusion caused by the label when placed at any other location, (ii) the label is closer to the corresponding object, (iii) there is no intersection between connector lines of the plurality of object-label pairs, and (iv) the label placement satisfies conditions set in terms of diagonal heuristic and central bias.
[034] The optimum location(s) thus identified and the corresponding coordinates are then provided as output by the system 100. Data flow in this mechanism is depicted in FIG. 3 as well. Further, the different steps involved in the process of identifying the optimum location for label placement are schematically represented in FIG. 5.
Experimental Results:
[035] Deep learning models for object detection and SAM were trained in PyTorch. For object detection and label generation, YOLOv2, pre-trained on the COCO dataset with 80 classes, was used. Input video frames were resized to 608×608 resolution before being fed to YOLOv2. The SAM used for computing saliency maps had been pre-trained on the SALICON dataset, which contains eye-fixation ground truth for images.
1) Saliency map computation:
[036] During the experiment conducted, the accuracy of the saliency prediction carried out by the system 100 was compared with multiple baselines using standard saliency metrics such as NSS, CC, AUC (Judd), sAUC, and KL. Saliency evaluation was carried out on the SALICON dataset, and the results are shown in Table 1.
Table 1: Saliency evaluation results on the SALICON dataset (the table values are reproduced as an image in the original publication).
[037] The mean of three Gaussian priors Ɲ(μ1, σ1), Ɲ(μ2, σ2), Ɲ(μ3, σ3) was used for modelling the central bias. Here μ1 = μ2 = μ3 = (0.5 · Fw, 0.5 · Fh), σ1 = (0.5 · min(Fw, Fh), 0.5 · min(Fw, Fh)), σ2 = (0.75 · min(Fw, Fh), 0.25 · min(Fw, Fh)), and σ3 = (0.25 · min(Fw, Fh), 0.75 · min(Fw, Fh)). A weighted average of the saliency map, the central bias, and the diagonal heuristic was computed, with more weight (0.7) given to the predicted saliency map and less weight (0.3) to the mask. It was observed that using the predicted saliency map with the diagonal heuristic gave a better LOS score than adding the central-bias component to the saliency map. This is evident from Table 2.
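A sketch of the central-bias modelling and the weighted combination described above. The filing reports only the Gaussian parameters and the weights (0.7 and 0.3); the prior normalization and the omission of the diagonal-heuristic mask here are simplifying assumptions:

```python
import numpy as np

def gaussian_prior(fw: int, fh: int, sx: float, sy: float) -> np.ndarray:
    """Separable 2-D Gaussian centred on the frame, modelling central bias."""
    xs = np.arange(fw) - 0.5 * fw
    ys = np.arange(fh) - 0.5 * fh
    return np.outer(np.exp(-ys**2 / (2 * sy**2)),
                    np.exp(-xs**2 / (2 * sx**2)))  # shape (fh, fw)

def combined_map(saliency: np.ndarray, fw: int, fh: int,
                 w_sal: float = 0.7, w_mask: float = 0.3) -> np.ndarray:
    """Weighted average of the predicted saliency map (weight 0.7) and the
    mean of the three central-bias priors (weight 0.3)."""
    m = min(fw, fh)
    priors = [gaussian_prior(fw, fh, 0.50 * m, 0.50 * m),
              gaussian_prior(fw, fh, 0.75 * m, 0.25 * m),
              gaussian_prior(fw, fh, 0.25 * m, 0.75 * m)]
    return w_sal * saliency + w_mask * np.mean(priors, axis=0)
```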
Table 2: Comparison of performances with linear and exponential decay biases for including the diagonal heuristic and central bias (the table values are reproduced as an image in the original publication).
2) Overlay location prediction:
[038] During overlay location prediction, in order to improve temporal consistency of label placement, label locations were recomputed after skipping k frames of the video. Experiments showed that setting k to 20 for a 30 fps video gave the best rating for temporal coherence.
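A minimal sketch of this k-frame skipping scheme; `place_labels` stands in for the full placement pipeline and is a hypothetical callable, not an interface from the disclosure:

```python
def place_with_skipping(frames, place_labels, k=20):
    """Recompute label locations only every k-th frame (k = 20 was rated
    best for temporal coherence on 30 fps video) and reuse them in between."""
    locations = None
    for i, frame in enumerate(frames):
        if i % k == 0:                       # refresh placements every k frames
            locations = place_labels(frame)  # full placement pipeline
        yield frame, locations
```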
User Evaluation:
[039] User evaluation was carried out to understand whether users found the claimed overlay placement mechanism useful. A total of 21 subjects were selected, of which 9 belonged to the age group 20-25, 4 to the age group 26-30, 5 to the age group 31-35, and 3 to the age group above 35. Of the 21 subjects, 13 were male and 8 were female.
[040] The subjects viewed 20 recorded videos of different resolutions from the DIEM dataset, containing labels placed using the proposed mechanism. This dataset consists of a variety of videos from different genres (advertisements, trailers, television series), with scenes varying from nature to animated cartoons. Along with eye movements, the dataset provides detailed eye-fixation saliency annotations. The users were asked to rate the following label placement objectives for each video on a scale from 1 to 5, 5 being the highest rating. The label placement objectives, which also serve as the subjective metrics, are as follows: (1) Occlusion Avoidance: Does the label cover or overlap the regions of interest? A rating of 5 means no occlusion of the salient regions of the video. (2) Proximity: Is the label placed close to the corresponding object? A rating of 5 corresponds to the label being very close to the object of interest. (3) Temporal Coherence: Are the labels jittery or jumpy? A rating of 5 means seamless transitions of labels in videos. (4) Readability: Is the label readable in every frame? A rating of 5 corresponds to the highest ease of reading, especially with respect to the color of the overlay box and text. (5) Color Scheme: Does the label font color stand out against the background? A rating of 5 means the contrast between label and background is high. (6) Clarity: Do the connectors or leader lines intersect? Answers could be Yes/No only.
[041] These metrics evaluate (a) user experience and (b) placement of overlays. In all the experiments, label dimensions of D/K were used, where D is the image dimension and K ∈ {4, 8, 12, 32}; this could be customized per the users' needs. The videos were shown on a desktop and a laptop, and the mean opinion ratings for each of the six metrics were then captured.
[042] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[043] The embodiments of the present disclosure herein address the unresolved problem of label placement in a video. The embodiments thus provide a mechanism for identifying an optimum location for placing a label in a video being processed.
[044] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed, including, e.g., any kind of computer, such as a server or a personal computer, or any combination thereof. The device may also include means which could be, e.g., hardware means, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[045] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include, but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[046] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words "comprising," "having," "containing," and "including," and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise.
[047] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term "computer-readable medium" should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[048] It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.

Claims

1. A processor implemented method for label placement in a video, comprising:
collecting the video as input, by one or more hardware processors;
generating a plurality of object-label pairs for the video, by the one or more hardware processors;
generating a saliency map for the video, by the one or more hardware processors, wherein the saliency map indicates saliency of a plurality of regions in each frame of the video; and
detecting an optimum location for placing the label of each of the object-label pairs, comprising:
calculating a Label Occlusion over Saliency (LOS) score for each of the objects;
ranking each of the plurality of object-label pairs based on corresponding LOS score; and
detecting the optimum location such that at the optimum location (i) occlusion caused by placement of the label is minimum in comparison with the occlusion caused by the label when placed at any other location, (ii) the label is closer to the corresponding object, (iii) no intersection between connector lines of the plurality of object-label pairs, and (iv) the label placement satisfies conditions set in terms of diagonal heuristic and central bias.
2. The method as claimed in claim 1, wherein the occlusion caused by placement of the label for an object-label pair is determined based on the LOS score of the object in the object-label pair.
3. The method as claimed in claim 1, wherein closeness of the label to the corresponding object is determined based on Voronoi partitioning, such that at the optimum location the top left corner of the label is close to the corresponding object.
4. The method as claimed in claim 1, wherein the intersection between the connector lines of the plurality of object-label pairs is avoided based on Voronoi partitioning, such that at the optimum location (i) Euclidean distance between top left corner of the label and the centroid of a bounding box of the corresponding object is minimum, and (ii) the connector line of the object stays within same Voronoi partition as that of the centroid of the bounding box of the object.
5. A system for label placement in a video, comprising:
a memory module (101) storing a plurality of instructions;
one or more communication interfaces (103); and
one or more hardware processors (102) coupled to the memory module (101) via the one or more communication interfaces (103), wherein the one or more hardware processors are caused by the plurality of instructions to:
collect the video as input;
generate a plurality of object-label pairs for the video;
generate a saliency map for the video, wherein the saliency map indicates saliency of a plurality of regions in each frame of the video; and
detect an optimum location for placing the label of each of the object-label pairs, by:
calculating a Label Occlusion over Saliency (LOS) score for each of the objects;
ranking each of the plurality of object-label pairs based on corresponding LOS score; and
detecting the optimum location such that at the optimum location (i) occlusion caused by placement of the label is minimum in comparison with the occlusion caused by the label when placed at any other location, (ii) the label is closer to the corresponding object, (iii) no intersection between connector lines of the plurality of object-label pairs, and (iv) the label placement satisfies conditions set in terms of diagonal heuristic and central bias.
6. The system as claimed in claim 5, wherein the occlusion caused by placement of the label for an object-label pair is determined based on the LOS score of the object in the object-label pair.
7. The system as claimed in claim 5, wherein the system determines the closeness of the label to the corresponding object based on Voronoi partitioning, such that at the optimum location the top left corner of the label is close to the corresponding object.
8. The system as claimed in claim 5, wherein the system avoids intersection between the connector lines of the plurality of object-label pairs based on Voronoi partitioning, such that at the optimum location (i) Euclidean distance between top left corner of the label and the centroid of a bounding box of the corresponding object is minimum, and (ii) the connector line of the object stays within same Voronoi partition as that of the centroid of the bounding box of the object.
9. A non-transitory computer readable medium for label placement in a video, the non-transitory computer readable medium performs the label placement in the video by:
collecting the video as input, by one or more hardware processors;
generating a plurality of object-label pairs for the video, by the one or more hardware processors;
generating a saliency map for the video, by the one or more hardware processors, wherein the saliency map indicates saliency of a plurality of regions in each frame of the video; and
detecting an optimum location for placing the label of each of the object-label pairs, comprising:
calculating a Label Occlusion over Saliency (LOS) score for each of the objects;
ranking each of the plurality of object-label pairs based on corresponding LOS score; and
detecting the optimum location such that at the optimum location (i) occlusion caused by placement of the label is minimum in comparison with the occlusion caused by the label when placed at any other location, (ii) the label is closer to the corresponding object, (iii) no intersection between connector lines of the plurality of object-label pairs, and (iv) the label placement satisfies conditions set in terms of diagonal heuristic and central bias.
10. The non-transitory computer readable medium as claimed in claim 9, wherein the occlusion caused by placement of the label for an object-label pair is determined based on the LOS score of the object in the object-label pair.
11. The non-transitory computer readable medium as claimed in claim 9, wherein closeness of the label to the corresponding object is determined based on Voronoi partitioning, such that at the optimum location top left corner of the label is close to the corresponding object.
12. The non-transitory computer readable medium as claimed in claim 9, wherein the intersection between the connector lines of the plurality of object-label pairs is avoided based on Voronoi partitioning, such that at the optimum location (i) Euclidean distance between top left corner of the label and the centroid of a bounding box of the corresponding object is minimum, and (ii) the connector line of the object stays within same Voronoi partition as that of the centroid of the bounding box of the object.
PCT/IB2020/051706 2019-02-28 2020-02-28 Multi-label placement for augmented and virtual reality and video annotations WO2020174447A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201921007962 2019-02-28
IN201921007962 2019-02-28

Publications (1)

Publication Number Publication Date
WO2020174447A1 true WO2020174447A1 (en) 2020-09-03

Family

ID=72239368

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2020/051706 WO2020174447A1 (en) 2019-02-28 2020-02-28 Multi-label placement for augmented and virtual reality and video annotations

Country Status (1)

Country Link
WO (1) WO2020174447A1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7131060B1 (en) * 2000-09-29 2006-10-31 Raytheon Company System and method for automatic placement of labels for interactive graphics applications
US20030234782A1 (en) * 2002-06-21 2003-12-25 Igor Terentyev System and method for adaptively labeling multi-dimensional images
US20080123945A1 (en) * 2004-12-21 2008-05-29 Canon Kabushiki Kaisha Segmenting Digital Image And Producing Compact Representation
US20140359656A1 (en) * 2013-05-31 2014-12-04 Adobe Systems Incorporated Placing unobtrusive overlays in video content

Similar Documents

Publication Publication Date Title
US10936905B2 (en) Method and system for automatic object annotation using deep network
US10646999B2 (en) Systems and methods for detecting grasp poses for handling target objects
US9865063B2 (en) Method and system for image feature extraction
US11270158B2 (en) Instance segmentation methods and apparatuses, electronic devices, programs, and media
US20120075433A1 (en) Efficient information presentation for augmented reality
US8442327B2 (en) Application of classifiers to sub-sampled integral images for detecting faces in images
US20130188869A1 (en) Image segmentation method using higher-order clustering, system for processing the same and recording medium for storing the same
US10636176B2 (en) Real time overlay placement in videos for augmented reality applications
US10049459B2 (en) Static image segmentation
US11544348B2 (en) Neural network based position estimation of target object of interest in video frames
RU2697649C1 (en) Methods and systems of document segmentation
US11450008B1 (en) Segmentation using attention-weighted loss and discriminative feature learning
CN111553923B (en) Image processing method, electronic equipment and computer readable storage medium
US20210209782A1 (en) Disparity estimation
CN110019912A (en) Graphic searching based on shape
CN112329762A (en) Image processing method, model training method, device, computer device and medium
CN114627173A (en) Data enhancement for object detection by differential neural rendering
CN113657518B (en) Training method, target image detection method, device, electronic device, and medium
US20160224859A1 (en) Fast color-brightness-based methods for image segmentation
WO2020174447A1 (en) Multi-label placement for augmented and virtual reality and video annotations
EP3709666A1 (en) Method for fitting target object in video frame, system, and device
CN111191580B (en) Synthetic rendering method, apparatus, electronic device and medium
US20220365963A1 (en) Method and system for feature based image retrieval
Shankar et al. A novel semantics and feature preserving perspective for content aware image retargeting
Pan et al. Accuracy improvement of deep learning 3D point cloud instance segmentation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20762646

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20762646

Country of ref document: EP

Kind code of ref document: A1