CA3018739A1 - System and method for bounding box tool - Google Patents

System and method for bounding box tool

Info

Publication number
CA3018739A1
Authority
CA
Canada
Prior art keywords
bounding box
input
location
movement
input device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CA3018739A
Other languages
French (fr)
Inventor
Joseph Marinier
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ServiceNow Canada Inc
Original Assignee
Element AI Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Element AI Inc filed Critical Element AI Inc
Priority to CA3018739A priority Critical patent/CA3018739A1/en
Priority to PCT/CA2019/051378 priority patent/WO2020061702A1/en
Publication of CA3018739A1 publication Critical patent/CA3018739A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Abstract

Systems and methods for an improved bounding box tool. The system can have processors configured to display a user interface on a display of a device, the user interface displaying image data. The processor can activate a virtual tool to define a bounding box. The processor can receive a first input data point at a first location relative to the image data. This can be defined by actuation of the input device. The processor can receive a movement input in a direction relative to the first location. The movement input can be defined by movement commands from the input device. The processor can receive a second input data point at a second location relative to the image data. The second input data point can be triggered by a second actuation of the input device. The processor can display, at the user interface, a graphical object representing the bounding box as an overlay of the image data. The bounding box can have corners defined by the first location and the second location.
Edges (and angles thereof) of the bounding box can be defined by the direction of the movement input.

Description

TITLE: SYSTEM AND METHOD FOR BOUNDING BOX TOOL
FIELD
[0001] The present disclosure generally relates to the field of artificial intelligence, computer vision, image processing, and interfaces.
INTRODUCTION
[0002] Embodiments described herein relate to training artificial intelligence systems for tasks using interface tools to define bounding boxes or volumes around objects or subjects of images.
In order to train an artificial intelligence system for a particular task, such as a vision task, regions of images can be labelled by drawing bounding boxes (BB) around objects or subjects of images. In some cases, an oriented bounding box (OBB) is necessary to define an orientation for the object, but it can add complexities to the labeling task.
SUMMARY
[0003] In accordance with an aspect, there is provided a system for a bounding box tool. The system has a server having non-transitory computer readable storage medium with executable instructions for causing one or more processors to: display a user interface on a display of a device, the user interface displaying image data; activate a virtual tool to define a bounding box, the virtual tool controlled by commands received from an input device;
display, at the user interface, an indicator for the virtual tool relative to the image data;
receive a first input data point at a first location relative to the image data, the first input data point triggered by a first actuation of the input device; receive a movement input in a direction relative to the first location, the movement input defined by movement commands from the input device during the actuation of the input device and a release of the first actuation of the input device; receive a second input data point at a second location relative to the image data, the second input data point triggered by a second actuation of the input device; compute the bounding box in a bounding box format using the first input data point, the movement input, and the second input data point, wherein the bounding box format defines a first corner at the first location, an opposite corner at the second location, and an adjacent corner connected to the first corner by an edge at an angle defined by the movement input; automatically generate a new data file for storing, at a data store, the bounding box in the bounding box format;
automatically generate a graphical object representing the bounding box using the new data file;
display, at the user interface, the graphical object representing the bounding box as an overlay of the image data, the bounding box having corners defined by the first location and the second location, the bounding box having edges connecting the corners, the edges defined by the direction of the movement input.
[0004] In accordance with an aspect, there is provided a system for a bounding box tool. The system can have a server having non-transitory computer readable storage medium with executable instructions for causing one or more processors to: display a user interface on a display of a device, the user interface displaying image data; activate a virtual tool to define a bounding box, the virtual tool controlled by commands received from an input device; display, at the user interface, an indicator for the virtual tool relative to the image data; receive a first input data point at a first location relative to the image data, the first input data point triggered by a first actuation of the input device; receive a movement input in a direction relative to the first location, the movement input defined by movement commands from the input device during the actuation of the input device and a release of the first actuation of the input device; receive a second input data point at a second location relative to the image data, the second input data point triggered by a second actuation of the input device; display, at the user interface, a graphical object representing the bounding box as an overlay of the image data, the bounding box having corners defined by the first location and the second location, the bounding box having edges connecting the corners, the edges defined by the direction of the movement input.
[0005] In various further aspects, the disclosure provides corresponding systems and devices, and logic structures such as machine-executable coded instruction sets for implementing such systems, devices, and methods.
[0006] In this respect, before explaining at least one embodiment in detail, it is to be understood that the embodiments are not limited in application to the details of construction and to the arrangements of the components set forth in the following description or illustrated in the drawings. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
[0007] Many further features and combinations thereof concerning embodiments described herein will appear to those skilled in the art following a reading of the instant disclosure.

DESCRIPTION OF THE FIGURES
[0008] Figure 1 is a diagram of an example bounding box displayed at an interface;
[0009] Figure 2 is a diagram of an example bounding box displayed at an interface;
[0010] Figure 3 is a diagram of an example tool and bounding box according to embodiments described herein;
[0011] Figure 4 is a diagram of an example tool and bounding box according to embodiments described herein;
[0012] Figure 5 is a diagram of an example tool and bounding box according to embodiments described herein;
[0013] Figure 6 is a diagram of an example tool and bounding box according to embodiments described herein; Figure 7 is a diagram of an example bounding box; and
[0014] Figure 8 is a diagram of an example system for a bounding box tool according to embodiments described herein.
DETAILED DESCRIPTION
[0015] Embodiments of methods, systems, and apparatus are described through reference to the drawings.
[0016] Embodiments described herein relate to training artificial intelligence systems for tasks (e.g. vision tasks) using interface tools to define bounding boxes or volumes around objects or subjects of images. Training systems for vision tasks can involve labelling regions of images by drawing bounding boxes (BB) around objects or subjects of images. A BB can be a cuboid (in three dimensions) or a rectangle (in two dimensions) containing the object.
That is, a BB is a volume that bounds or encapsulates one or more objects. An example vision task is object collision detection. Intersection of two BBs can indicate a collision of two objects, for example.
The BB can be tight around the objects or subjects.
[0017] A BB can be aligned with the axes of the coordinate system (for the interface) and it can then be referred to as an axis-aligned bounding box (AABB). The AABB can be a rectangular 4-sided box in two dimensions or a 6-sided box in three dimensions, characterized by having its faces oriented along the coordinate system. An AABB can be a rectangular box whose face normals are parallel to the axes of the coordinate system.
[0018] An oriented bounding box (OBB) is an arbitrarily oriented rectangular bound, i.e. one expressed in an object's local coordinate system. In some cases, an OBB is necessary to define an orientation for the object, such as if the object is tilted or rotated. For example, Figure 1 shows an interface with an image of a tilted TEXT object 100 to label. An OBB can add complexities to the labeling and vision task. For example, to detect collisions of objects the BBs for the objects can be expressed within a common coordinate system. The objects may not have a common orientation and the OBBs may need to be converted for a particular vision task, for example. OBBs can enable models that are rotation-invariant, or models that do not need to learn to detect objects oriented at multiple angles. For instance, OCR models can be simplified when they can assume that characters are laid out on an imaginary horizontal line in an image.
[0019] An example workflow to draw an OBB using an interface tool can be similar to drawing a rectangle, for example. An interface tool might not be well adapted to the labeling task, making it inefficient. When labeling tasks consist of drawing thousands of boxes, time efficiency becomes important. An example workflow can involve two steps. First, the user can actuate an input device (e.g. press down the mouse button) to trigger the interface tool to define one corner of the BB 102. Next, the user can drag the input device and release the actuation to define the opposite corner of the BB 102 to define an AABB. Second, the user can actuate the input device to rotate the BB 102 to try to align it with the TEXT object 100 orientation, such as using a handle 104 of the interface (Figure 2). The rotated BB 102 can be referred to as an OBB as it is oriented to the object's local coordinate system.
[0020] In order to end up with the correct BB 102 (e.g. correctly aligned to the orientation of the TEXT object 100 and with the correct horizontal and vertical dimensions to contain the TEXT object 100), the user would have to guess at the first step the desired horizontal and vertical dimensions, as well as its horizontal and vertical center position, without any landmark. If BB 102 is not correct (which is common given the visual estimation required by this method) then the user must subsequently adjust each side, one by one. This can be inefficient.
[0021] Embodiments described herein provide an improved interface tool for drawing or defining a BB. For example, the improved method can enable drawing or defining an OBB using a "one-shot" user actuation sequence of an input tool. The improved method can result in few or no subsequent adjustments.
[0022] Figure 3 shows an interface with a tilted TEXT object 100 to label using an improved virtual tool configured to define an OBB 102 according to some embodiments.
The virtual tool can be controlled (e.g. actuated, moved) by an input device. The virtual tool can be used to capture input data for defining the OBB 102. The input device can be a computer mouse, for example. The input device can be integrated with a touch sensitive display or surface, as another example. The user can interact with the input device using various gestures or movements to actuate the device. For example, the user can press down a mouse button (click) and drag or move the mouse to a location on the interface or image displayed thereon. As another example, the user can touch a touch sensitive display or surface and swipe his or her finger across the display or surface.
[0023] The workflow can involve two well-integrated steps. First, the user can actuate an input device (e.g. press down the mouse button) to trigger the interface tool to define one corner of the BB 102. Then the user can drag the input tool and release the actuation of the input device to define the angle of one edge of the BB 102. Second, the user can actuate the input device (e.g. click the mouse button) to define the opposite corner of the BB 102 to produce an OBB. The following example gestures by the user can define the input data: click, drag, click; or touch, swipe, touch. The input device can have different hardware form factors to enable different gestures for the input data. For example, the input device can be integrated with a touch sensitive display device or surface. As another example, the input device can be a computer mouse. The input device can have actuating buttons or gesture response. The input device can be dynamically adapted to different input tools to receive input data points. As another example, the input device can have natural motion gesture-based sensors for receiving input data. This can be in three dimensions, such as a remote gesture device, virtual reality device, depth camera, camera, stereo camera, and so on.
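As a rough sketch only (the event handler names, class structure, and reset behaviour below are assumptions, not part of the described embodiments), the click-drag-click sequence can be modelled as a small state machine that collects the three inputs before the bounding box is computed:

```python
from enum import Enum, auto


class ToolState(Enum):
    IDLE = auto()        # waiting for the first actuation
    DRAGGING = auto()    # first actuation held; defining the edge angle
    ANGLE_SET = auto()   # released; waiting for the second corner


class BoundingBoxTool:
    """Minimal state machine for the click-drag-click (or
    touch-swipe-touch) workflow."""

    def __init__(self):
        self.state = ToolState.IDLE
        self.p1_start = self.p1_end = None

    def on_press(self, x, y):
        """Returns (p1_start, p1_end, p2) once the gesture completes."""
        if self.state is ToolState.IDLE:
            self.p1_start = (x, y)                      # first corner
            self.state = ToolState.DRAGGING
            return None
        if self.state is ToolState.ANGLE_SET:
            self.state = ToolState.IDLE                 # gesture complete
            return self.p1_start, self.p1_end, (x, y)   # second corner
        return None

    def on_release(self, x, y):
        if self.state is ToolState.DRAGGING:
            # No drag (p1_end == p1_start) means a null default angle,
            # which produces an AABB as described for Figure 4.
            self.p1_end = (x, y)
            self.state = ToolState.ANGLE_SET
```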
[0024] The example BB 102 around text 100 can be defined by a first input data point (for a first corner location), a movement input 302 of the virtual tool (for the edge angle), and a second input data point 304 (for the second corner location).
[0025] The interface can provide a virtual (drafting or bounding box) tool. The interface can receive a command for activation of the virtual tool to define the BB 102 and display the BB 102 as a graphical object of the interface. The interface can display an image (or more generally image data) and the virtual tool can move to different locations within the image to define a bounding box as an overlay of the image. That is, the bounding box can be defined using the virtual tool and displayed at the interface as a graphical object overlayed on the image. The location of the virtual tool within the interface can be controlled by the input device. The input device can also transmit commands to control, actuate, or manipulate the virtual tool. Movement of the input device can trigger corresponding movement of the virtual tool.
Actuation of the input device can trigger a command for the virtual tool. The first actuation of the input device can be referred to as a first input data point at a first location. The drag or movement of the input device can be referred to as movement or direction input which can be defined relative to the first input data point. The second actuation of the input device can be referred to as a second input data point at a second location. The first input data point at the first location can define the first corner of the BB 102. The movement input from the first location can define the edge angle of the BB 102. The second input data point at the second location can define another (opposite) corner of the BB 102. The other corner is opposite the first corner of the BB
102. If the user selected the adjacent corner with the second input (click) by accident, this can result in a BB 102 (e.g. OBB) that would collapse to a line. Since this is unlikely to be usable, embodiments described herein can disallow the second click if the box height is not greater than a threshold (e.g. a few pixels and/or a percentage of the width).
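A minimal sketch of such a guard follows; the 3-pixel and 2%-of-width thresholds are illustrative assumptions, since the description only calls for "a few pixels and/or a percentage of the width":

```python
def second_click_allowed(height_px: float, width_px: float,
                         min_px: float = 3.0, min_ratio: float = 0.02) -> bool:
    """Disallow a second click that would collapse the box to a line.

    The threshold values are illustrative, not prescribed.
    """
    return height_px > max(min_px, min_ratio * width_px)
```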
[0026] The method can also enable drawing or defining an AABB using the same technique.
Figure 4 shows an interface with a tilted TEXT object 100 to label using the improved method to define an AABB. At 402, the user actuates the input device (e.g. press down the mouse button) to trigger the interface (or virtual) tool to define one corner of the BB 102. In this example, the user does not drag the input device and the default angle is computed as null. At 404, the user can actuate the input device (e.g. click the mouse button) to define the opposite corner of the BB 102 to produce an AABB.
[0027] During the first step, while the user moves (e.g. drags, swipes) the input device to define the first edge, the interface can dynamically update to display a line between the corner and the location of the virtual tool (e.g. which can be displayed as a cursor), soon to become an edge of the BB. Then, until the second actuation of the input device (e.g. to define the second input data point), the suggested box can be displayed. The suggested box can be defined based on a current location of the virtual tool (e.g. the current position of a cursor before the actuation). The displayed BB can be set or frozen when the user actuates the input device to capture the second input data point and define the (second) corner.
[0028] The user can have the freedom to start at any of the four corners, and then to drag the input device toward either of the two possible adjacent edges. In practice, the angle of the first edge drawn can be wrapped between −45° and 45°. The width and height can be exchanged if the angle has been wrapped by an odd number of quadrants, as in the sketch below.
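A hedged sketch of the wrapping and the quadrant-parity test (function names are illustrative assumptions):

```python
def wrap_angle_deg(angle_deg: float) -> float:
    """Wrap an edge angle into [-45, 45) degrees.

    Python's % operator returns a non-negative result for a positive
    divisor, so this matches theta := ((angle + 45) mod 90) - 45 for
    negative drag angles as well.
    """
    return ((angle_deg + 45.0) % 90.0) - 45.0


def width_height_swapped(angle_deg: float) -> bool:
    """True when wrapping removed an odd number of 90-degree quadrants,
    in which case the box width and height are exchanged."""
    quadrants = round((angle_deg - wrap_angle_deg(angle_deg)) / 90.0)
    return quadrants % 2 != 0
```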
[0029] Embodiments described herein provide an improved virtual tool for an interface to improve the workflow for a user to draw an OBB around a region of an image.
The tool can also be used to define an AABB using the improved workflow, to provide flexibility and avoid requiring the user to use different workflows for OBBs and AABBs. The OBB can be used for subsequent vision tasks (or other tasks). The OBB can be saved in different data formats. For example, an example format for an OBB is "angle-center-extent", consisting of the angle, the center coordinates, and the extent (half-width and half-height). To extract the region of interest from an image, one can rotate the image around the OBB's center by the opposite of the OBB's angle, before cropping the image from pixel center − extent to pixel center + extent.
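A minimal sketch of this extraction using OpenCV, assuming image coordinates with the y axis pointing down (the sign passed to getRotationMatrix2D may need to be negated depending on the angle convention used when the box was drawn):

```python
import numpy as np
import cv2


def extract_obb_region(image: np.ndarray, center, extent,
                       angle_deg: float) -> np.ndarray:
    """Rotate the image around the OBB center by the opposite of the
    OBB angle, then crop from (center - extent) to (center + extent)."""
    cx, cy = center
    ex, ey = extent
    h, w = image.shape[:2]
    # cv2.getRotationMatrix2D(center, angle, scale) rotates content
    # counter-clockwise for positive angles around the given center.
    rot = cv2.getRotationMatrix2D((cx, cy), angle_deg, 1.0)
    upright = cv2.warpAffine(image, rot, (w, h))
    x0, x1 = int(round(cx - ex)), int(round(cx + ex))
    y0, y1 = int(round(cy - ey)), int(round(cy + ey))
    return upright[max(y0, 0):y1, max(x0, 0):x1]
```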
[0030] Figure 8 is a diagram of a system 200 for generating an interface with a virtual tool for BBs and an example physical environment.
[0031] The system 200 can include an I/O unit 202, a processor 204, a communication interface 206, and data storage 210. The processor 204 can execute instructions in memory 208 to implement aspects of processes described herein. The processor 204 can execute instructions in memory 208 to configure an interface controller 220 for generating and managing the interface for displaying image data 222, a virtual tool 224 for defining BBs and generating bounding box data 226, and other functions described herein. The system 200 may be software (e.g., code segments compiled into machine code), hardware, embedded firmware, or a combination of software and hardware, according to various embodiments.
[0032] In some embodiments, the system 200 can implement one or more vision tasks 228 using the image data 222 and bounding box data 226. In some embodiments, the system 200 can connect to one or more vision applications 230 that can use the bounding box data 226 to define regions with the image data 222 for various tasks. In some embodiments, the vision task 228 can be integrated with the vision application 230 to exchange data and control commands.
In some embodiments, the system 200 can connect to one or more entities 250 that can implement different image related processes, that can receive image data 222 and bounding box data 226, and/or that can display the interface with virtual tool 224, for example. The system 200 can connect to data sources to receive image data 222, for example.
[0033] The I/O unit 202 can enable the system 200 to interconnect with one or more input devices, such as a keyboard, mouse, camera, touch screen and a microphone, and/or with one or more output devices such as a display screen and a speaker. An input device can be used to control the virtual tool 224 at the interface and define bounding box data 226 relative to image data 222 displayed at the interface. The input device can be used to generate touch input data and movement data, for example.
[0034] The interface controller 220 can trigger the display of a user interface on a display device. The user interface can display an image (or a portion thereof) from image data 222. The user interface can enable selection of the image (or a portion thereof) from image data 222. The interface controller 220 can activate the virtual tool 224 at the interface to define a BB. The virtual tool 224 can be controlled by commands received from interaction between the input device and the interface. The interface controller 220 can trigger the display of an indicator for the virtual tool relative to the image data. The virtual tool 224 can be displayed as a graphical object such as a pointer, marker, and the like. The interface controller 220 can trigger the display of a graphical representation of a source image and a graphical representation of the indicator for the virtual tool relative to the image data.
[0035] The interface controller 220 can receive, from the interface, a first input data point at a first location relative to the image (or a portion thereof). The capture of the first input data point data can be triggered by actuation of the input device. This can be a click, selection, or a touch of the input device, for example. The first input data point can define a corner location for the BB. The input device can be integrated with a touch display, for example, and the first input data point can be referred to as a first touch input.
[0036] The interface controller 220 can receive, from the interface, movement input in a direction relative to the first location. The movement input can be defined by movement commands from the input device during the actuation of the input device and a release of the actuation of the input device. This can be a drag or swipe from the first corner location, for example. The movement input can define an edge angle from (and relative to) the corner location. The movement input can be in the direction of the adjacent corner for the BB, for example.
[0037] The interface controller 220 can receive, from the interface, a second input data point at a second location relative to the image. The capture of the second input data point can be triggered by actuation of the input device. This can be a click, selection, or a touch of the input device, for example. The second input data point can define another corner location for the BB.
The second input data point can define the location of the opposite corner to the first corner of the BB, for example.
[0038] The interface controller 220 can trigger display, at the interface, of a graphical object representing the BB as an overlay of the image data. The BB corners are defined by the first location and the second location. In particular, the first location indicates a corner of the BB and the second location can indicate the opposite corner of the BB. The BB has edges connecting adjacent corners. The edges can be defined by the direction of the movement input, and in particular, the angle of the edges relative to adjacent corners can be based on the movement input.
[0039] The interface controller 220 can receive the input data from the interface and transform the input data into a BB data format. The interface controller 220 can compile the input into code representing the BB data format. The transformed input data can define a BB
record. The BB record can be linked with the image. For example, the BB record can include metadata that includes an image identifier. The interface can store the BB record as bounding box data 226. Different metadata can be stored in association with the BB. For example, the metadata can be the actual text written in the box (if it is assumed to be text), the class of object (e.g. car, truck, human), information about the BB itself (e.g. partial, occluded), and so on.
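For illustration, a stored record might look like the following sketch (all field names are hypothetical; the description does not prescribe a schema):

```python
# Hypothetical bounding box record; field names are illustrative only.
bb_record = {
    "image_id": "img_0001",           # identifier linking to image data 222
    "format": "angle-center-extent",
    "angle_deg": 36.9,
    "center": [100.0, 100.0],
    "extent": [100.0, 30.0],          # half-width, half-height
    "metadata": {
        "text": "TEXT",               # transcription, if the object is text
        "class": "text",              # object class, e.g. car, truck, human
        "partial": False,             # information about the BB itself
        "occluded": False,
    },
}
```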
[0040] The BB data format can be an OBB data format. The BB data format can be an AABB data format. The OBB format is a superset of the AABB format that contains an orientation component, which can be expressed in multiple ways (e.g. angle, distance, vector); the format can also simply be the three locations and thus have the orientation implicitly encoded. For example, the following are the computations that can be implemented by interface controller 220 to convert user inputs into an "angle-center-extent" OBB data format.
[0041] The interface can receive a command from the input device indicating actuation of virtual tool 224 (e.g. first input data point) at a first location and movement of the virtual tool 224 from the first location to another location (movement input) while the virtual tool 224 is continuously actuated (e.g. click, then hold the click while dragging the input device to a new location). For example, the input data can be defined as a first click of the input device and a drag of the virtual tool 224 from $P_{1,\mathrm{start}}$ to $P_{1,\mathrm{end}}$. The interface can receive a command indicating that the virtual tool 224 is released (e.g. no longer actuated). The interface can receive another command indicating another actuation of the virtual tool 224 (e.g. second input data point) at a second location. For example, the input data can be defined as a second click at $P_2$. The interface can provide the input data to the interface controller 220 to compute the BB.
[0042] The interface controller 220 can use the input data to compute the BB angle $\theta$ (e.g. the angle of the edge between adjacent corners). The angle of the first drag can be computed as follows:

$$\Delta := P_{1,\mathrm{end}} - P_{1,\mathrm{start}}, \qquad \angle\Delta := \operatorname{atan2}(\Delta_y, \Delta_x)$$

[0043] The interface controller 220 can wrap the angle between $-45^\circ$ and $45^\circ$:

$$\theta := ((\angle\Delta + 45^\circ) \bmod 90^\circ) - 45^\circ$$

[0044] The interface controller 220 can compute the BB center coordinates $C$:

$$C := \frac{P_{1,\mathrm{start}} + P_2}{2}$$

[0045] The interface controller 220 can compute the BB extent $E$ (half-width, half-height). For example, the interface controller 220 can compute the BB extent $E$ by computing the diagonal from the center to $P_2$, or alternatively half the full diagonal:

$$E := P_2 - C = \frac{P_2 - P_{1,\mathrm{start}}}{2}$$

[0046] If $\theta$ is different from 0, the interface controller 220 can rotate $E$ by the opposite angle $-\theta$. This can be a change of basis towards a basis defined by the first input data point (click) and movement input (drag). This can be performed by multiplying by a rotation matrix:

$$E := \begin{bmatrix} \cos(-\theta) & -\sin(-\theta) \\ \sin(-\theta) & \cos(-\theta) \end{bmatrix} E = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} E$$

[0047] The interface controller 220 can take the absolute value of each coordinate:

$$E := \begin{bmatrix} |E_x| \\ |E_y| \end{bmatrix}$$
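The computations of paragraphs [0042] to [0047] can be collected into a single routine. The following is a minimal sketch (the function name and return convention are assumptions, not taken from the description):

```python
import math


def compute_obb(p1_start, p1_end, p2):
    """Convert click-drag-click input data into an angle-center-extent OBB.

    Returns (angle in degrees, center (cx, cy), extent (ex, ey)).
    """
    # Angle of the first drag: Delta := P1,end - P1,start
    dx = p1_end[0] - p1_start[0]
    dy = p1_end[1] - p1_start[1]
    raw_angle = math.degrees(math.atan2(dy, dx))
    # Wrap into [-45, 45): theta := ((angle + 45) mod 90) - 45
    theta = ((raw_angle + 45.0) % 90.0) - 45.0
    # Center: C := (P1,start + P2) / 2
    cx = (p1_start[0] + p2[0]) / 2.0
    cy = (p1_start[1] + p2[1]) / 2.0
    # Extent: diagonal from center to P2, rotated by -theta...
    ex, ey = p2[0] - cx, p2[1] - cy
    c, s = math.cos(math.radians(theta)), math.sin(math.radians(theta))
    ex, ey = c * ex + s * ey, -s * ex + c * ey
    # ...then the absolute value of each coordinate (half-width, half-height).
    return theta, (cx, cy), (abs(ex), abs(ey))
```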
[0048] Considering an example with the origin (e.g. first input data point) at the bottom-left corner, the following indicates the computations for the OBB example depicted in Figure 3.

[0049] The interface controller 220 can receive the following input data from actuation and movement of the virtual tool 224. The first input data point and the movement data can be:

$$P_{1,\mathrm{start}} = \begin{bmatrix} 2 \\ 64 \end{bmatrix} \quad \text{to} \quad P_{1,\mathrm{end}} = \begin{bmatrix} 122 \\ 154 \end{bmatrix}$$

[0050] The second input data point can be:

$$P_2 = \begin{bmatrix} 198 \\ 136 \end{bmatrix}$$

[0051] The interface controller 220 can compute the BB angle $\theta$ by computing the angle of the first drag (movement of the virtual tool 224 within the interface):

$$\Delta := \begin{bmatrix} 122 \\ 154 \end{bmatrix} - \begin{bmatrix} 2 \\ 64 \end{bmatrix} = \begin{bmatrix} 120 \\ 90 \end{bmatrix}, \qquad \angle\Delta := \operatorname{atan2}(90, 120) \approx 36.9^\circ$$

[0052] As shown, wrapping the angle between $-45^\circ$ and $45^\circ$ does not have an effect:

$$\theta := ((36.9^\circ + 45^\circ) \bmod 90^\circ) - 45^\circ = (81.9^\circ \bmod 90^\circ) - 45^\circ = 81.9^\circ - 45^\circ = 36.9^\circ$$

[0053] The interface controller 220 can compute the BB center coordinates $C$:

$$C := \frac{1}{2}\left(\begin{bmatrix} 2 \\ 64 \end{bmatrix} + \begin{bmatrix} 198 \\ 136 \end{bmatrix}\right) = \begin{bmatrix} 100 \\ 100 \end{bmatrix}$$

[0054] The interface controller 220 can compute the BB extent $E$ (half-width, half-height). For example, the interface controller 220 can compute the diagonal from the center to $P_2$:

$$E := \begin{bmatrix} 198 \\ 136 \end{bmatrix} - \begin{bmatrix} 100 \\ 100 \end{bmatrix} = \begin{bmatrix} 98 \\ 36 \end{bmatrix}$$

[0055] The interface controller 220 can rotate $E$ by the opposite angle $-\theta$:

$$E := \begin{bmatrix} 0.8 & 0.6 \\ -0.6 & 0.8 \end{bmatrix} \begin{bmatrix} 98 \\ 36 \end{bmatrix} = \begin{bmatrix} 100 \\ -30 \end{bmatrix}$$

[0056] The interface controller 220 can take the absolute value of each coordinate:

$$E := \begin{bmatrix} |E_x| \\ |E_y| \end{bmatrix} = \begin{bmatrix} 100 \\ 30 \end{bmatrix}$$
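Running the compute_obb sketch above on these inputs reproduces the worked example (coordinates as reconstructed from the figures, so treat them as illustrative):

```python
theta, center, extent = compute_obb((2, 64), (122, 154), (198, 136))
# theta ~= 36.87 degrees, center == (100.0, 100.0), extent ~= (100.0, 30.0)
```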
[0057] The virtual tool 224 may draw or define the same OBB by specifying the same two corners while the virtual tool 224 is dragged to define the other edge. That is, different orders of input data points can be used to define the same OBB. Figure 5 illustrates an example BB 102 around text 100 that can be defined by a first input data point 502 (for a first corner location), a movement input 504 of the virtual tool 224 (for the edge angle), and a second input data point 506 (for the second corner location). The difference in computation is limited to the steps up until the angle is wrapped between $-45^\circ$ and $45^\circ$.

[0058] From the control commands generated at the interface using the virtual tool 224, the interface controller 220 can receive the following example input data. The interface controller 220 can receive the first input data point (click) and movement input (drag) of the virtual tool 224 from

$$P_{1,\mathrm{start}} = \begin{bmatrix} 2 \\ 64 \end{bmatrix} \quad \text{to} \quad P_{1,\mathrm{end}} = \begin{bmatrix} 47 \\ 4 \end{bmatrix}$$

[0059] The interface controller 220 can receive the second input data point (click) at

$$P_2 = \begin{bmatrix} 198 \\ 136 \end{bmatrix}$$

[0060] The interface controller 220 can compute the BB angle $\theta$. The interface controller 220 can compute the angle of the first movement input (drag):

$$\Delta := \begin{bmatrix} 47 \\ 4 \end{bmatrix} - \begin{bmatrix} 2 \\ 64 \end{bmatrix} = \begin{bmatrix} 45 \\ -60 \end{bmatrix}, \qquad \angle\Delta := \operatorname{atan2}(-60, 45) \approx -53.1^\circ$$

[0061] Wrapping the angle between $-45^\circ$ and $45^\circ$ computes the same angle as the example described in relation to Figure 3:

$$\theta := ((-53.1^\circ + 45^\circ) \bmod 90^\circ) - 45^\circ = (-8.1^\circ \bmod 90^\circ) - 45^\circ = 81.9^\circ - 45^\circ = 36.9^\circ$$
[0062] The virtual tool 224 may draw or define the same OBB by starting at a completely different corner (e.g. the first input data point can be at a different corner location). The difference in computation carries up until the absolute value is finally performed on the extent. Figure 6 illustrates an example BB 102 around text 100 that can be defined by a first input data point 602 (for a first corner location), a movement input 604 of the virtual tool 224 (for the edge angle), and a second input data point 606 (for the second corner location).

[0063] From the control commands generated at the interface using the virtual tool 224, the interface controller 220 can receive the following example input data. The interface controller 220 can receive the first input data point (click) and movement input (drag) of the virtual tool 224 from

$$P_{1,\mathrm{start}} = \begin{bmatrix} 198 \\ 136 \end{bmatrix} \quad \text{to} \quad P_{1,\mathrm{end}} = \begin{bmatrix} 54 \\ 28 \end{bmatrix}$$

[0064] The interface controller 220 can receive the second input data point (click) at

$$P_2 = \begin{bmatrix} 2 \\ 64 \end{bmatrix}$$

[0065] The interface controller 220 can compute the BB angle $\theta$. The interface controller 220 can compute the angle of the first movement input (drag):

$$\Delta := \begin{bmatrix} 54 \\ 28 \end{bmatrix} - \begin{bmatrix} 198 \\ 136 \end{bmatrix} = \begin{bmatrix} -144 \\ -108 \end{bmatrix}, \qquad \angle\Delta := \operatorname{atan2}(-108, -144) \approx -143.1^\circ$$

[0066] Wrapping the angle between $-45^\circ$ and $45^\circ$ returns the same angle:

$$\theta := ((-143.1^\circ + 45^\circ) \bmod 90^\circ) - 45^\circ = (-98.1^\circ \bmod 90^\circ) - 45^\circ = 81.9^\circ - 45^\circ = 36.9^\circ$$

[0067] The interface controller 220 can compute the BB center coordinates $C$:

$$C := \frac{1}{2}\left(\begin{bmatrix} 198 \\ 136 \end{bmatrix} + \begin{bmatrix} 2 \\ 64 \end{bmatrix}\right) = \begin{bmatrix} 100 \\ 100 \end{bmatrix}$$

[0068] The interface controller 220 can compute the BB extent $E$ (half-width, half-height). The interface controller 220 can compute the diagonal from the center to $P_2$, which yields coordinates with signs that are different, but consistent:

$$E := \begin{bmatrix} 2 \\ 64 \end{bmatrix} - \begin{bmatrix} 100 \\ 100 \end{bmatrix} = \begin{bmatrix} -98 \\ -36 \end{bmatrix}$$

[0069] The interface controller 220 can rotate $E$ by the opposite angle $-\theta$:

$$E := \begin{bmatrix} 0.8 & 0.6 \\ -0.6 & 0.8 \end{bmatrix} \begin{bmatrix} -98 \\ -36 \end{bmatrix} = \begin{bmatrix} -100 \\ 30 \end{bmatrix}$$

[0070] The interface controller 220 can compute the absolute value of each coordinate, which returns the same extents:

$$E := \begin{bmatrix} |E_x| \\ |E_y| \end{bmatrix} = \begin{bmatrix} 100 \\ 30 \end{bmatrix}$$
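As a check on this order-independence, the compute_obb sketch above returns the same OBB for all three input orderings of Figures 3, 5, and 6 (coordinates as reconstructed from the worked examples):

```python
for inputs in [((2, 64), (122, 154), (198, 136)),   # Figure 3
               ((2, 64), (47, 4), (198, 136)),      # Figure 5
               ((198, 136), (54, 28), (2, 64))]:    # Figure 6
    theta, center, extent = compute_obb(*inputs)
    print(round(theta, 1), center, tuple(round(e, 1) for e in extent))
# Each iteration prints: 36.9 (100.0, 100.0) (100.0, 30.0)
```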
[0071] Accordingly, the virtual tool 224 provides a flexible tool to define a BB based on a set of input data that can define different corners and edge angles for the BB. The input data can be efficiently provided by the user to the system 200.
[0072] The processor 204 can save a bounding box record (as part of the bounding box data 226) for the bounding box, with metadata including an identifier for the image data that was displayed at the interface when the bounding box was defined using the virtual tool 224.
[0073] The processor 204 can compute the bounding box in a bounding box format using the first input data point, the movement input, and the second input data point.
For example, the format can define a first corner at the first location, an opposite corner at the second location, and an adjacent corner connected to the first corner by an edge at an angle defined by the movement input. As another example, the bounding box format is an angle-center-extent format defined by an angle of an edge between a corner and an adjacent corner, centre coordinates of the bounding box, and an extent of the bounding box.
[0074] The movement input can be from the first location towards an adjacent corner; that is, the direction of the movement input is towards an adjacent corner of the bounding box. The second location can indicate the opposite corner.
[0075] The processor 204 can dynamically update the interface to display a line between the first location and a current location of the virtual tool when defining the movement input. The processor 204 can dynamically update the interface to display a suggested bounding box defined by the first location, the movement input and a current location of the virtual tool when defining the second input data point and prior to the second actuation of the input device.
[0076] The processor 204 can extract a region of interest from the image data defined by the bounding box, and save the extracted region of interest in data storage 210 as part of the image data 222. The processor 204 can transmit the extracted region to vision task 228, vision application 230, or entity 250, for example.
[0077] The processor 204 can be, for example, any type of general-purpose microprocessor or microcontroller, a digital signal processing (DSP) processor, an integrated circuit, a field programmable gate array (FPGA), a reconfigurable processor, or any combination thereof.
[0078] Memory 208 may include a suitable combination of any type of computer memory that is located either internally or externally such as, for example, random-access memory (RAM), read-only memory (ROM), compact disc read-only memory (CDROM), electro-optical memory, magneto-optical memory, erasable programmable read-only memory (EPROM), electrically-erasable programmable read-only memory (EEPROM), Ferroelectric RAM (FRAM) or the like. Data storage devices 210 can include memory 208, databases 212 (e.g. graph database), and persistent storage 214.
[0079] The communication interface 206 can enable the system 200 to communicate with other components, to exchange data with other components, to access and connect to network resources, to serve applications, and perform other computing applications by connecting to a network 240 (or multiple networks) capable of carrying data including the Internet, Ethernet, plain old telephone service (POTS) line, public switched telephone network (PSTN), integrated services digital network (ISDN), digital subscriber line (DSL), coaxial cable, fiber optics, satellite, mobile, wireless (e.g. Wi-Fi, WiMAX), SS7 signaling network, fixed line, local area network, wide area network, and others, including any combination of these.
[0080] The system 200 can be operable to register and authenticate users (using a login, unique identifier, and password for example) prior to providing access to applications, a local network, network resources, other networks and network security devices. The system 200 can connect to different machines, entities 250, and/or data sources 260 (linked to databases 270).

[0081] The data storage 210 may be configured to store information associated with or created by the system 200, such as for example image data 222 and bounding box data 226.
The data storage 210 may be a distributed storage system, for example. The data storage 210 can implement databases, for example. Storage 210 and/or persistent storage 214 may be provided using various types of storage technologies, such as solid state drives, hard disk drives, flash memory, and may be stored in various formats, such as relational databases, non-relational databases, flat files, spreadsheets, extended markup files, and so on.
[0082] Figure 7 is a diagram of another example OBB that can be defined using a three-step workflow. Embodiments described herein provide an improved workflow, and this three-step workflow is described for contrast. The three-step workflow involves two clicks to define two adjacent corners (e.g. corners 702, 704) and a third click to define the perpendicular dimension 706 of the BB. This three-step workflow requires more actuations of the input device than the improved workflow of embodiments described herein. Further, the three-step workflow does not easily allow for an AABB, necessitating a different tool or workflow. In contrast, embodiments described herein can be used to define different types of BB such as an OBB and an AABB, avoiding the need for different tools.
[0083] The discussion provides many example embodiments of the inventive subject matter.
Although each embodiment represents a single combination of inventive elements, the inventive subject matter is considered to include all possible combinations of the disclosed elements.
Thus if one embodiment comprises elements A, B, and C, and a second embodiment comprises elements B and D, then the inventive subject matter is also considered to include other remaining combinations of A, B, C, or D, even if not explicitly disclosed.
[0084] The embodiments of the devices, systems and methods described herein may be implemented in a combination of both hardware and software. These embodiments may be implemented on programmable computers, each computer including at least one processor, a data storage system (including volatile memory or non-volatile memory or other data storage elements or a combination thereof), and at least one communication interface.
[0085] Program code is applied to input data to perform the functions described herein and to generate output information. The output information is applied to one or more output devices. In some embodiments, the communication interface may be a network communication interface. In embodiments in which elements may be combined, the communication interface may be a software communication interface, such as those for inter-process communication. In still other embodiments, there may be a combination of communication interfaces implemented as hardware, software, and combination thereof.
[0086] Throughout the foregoing discussion, numerous references will be made regarding servers, services, interfaces, portals, platforms, or other systems formed from computing devices. It should be appreciated that the use of such terms is deemed to represent one or more computing devices having at least one processor configured to execute software instructions stored on a computer readable tangible, non-transitory medium.
For example, a server can include one or more computers operating as a web server, database server, or other type of computer server in a manner to fulfill described roles, responsibilities, or functions.
[0087] The technical solution of embodiments may be in the form of a software product. The software product may be stored in a non-volatile or non-transitory storage medium, which can be a compact disk read-only memory (CD-ROM), a USB flash disk, or a removable hard disk.
The software product includes a number of instructions that enable a computer device (personal computer, server, or network device) to execute the methods provided by the embodiments.
[0088] The embodiments described herein are implemented by physical computer hardware, including computing devices, servers, receivers, transmitters, processors, memory, displays, and networks. The embodiments described herein provide useful physical machines and particularly configured computer hardware arrangements.
[0089] Although the embodiments have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein.
[0090] Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification.
[0091] As can be understood, the examples described above and illustrated are intended to be exemplary only.

Claims (22)

WHAT IS CLAIMED IS:

Any and all features of novelty or inventive step described, suggested, referred to, exemplified, or shown herein, including but not limited to processes, systems, devices, and computer-readable and -executable programming and/or other instruction sets suitable for use in implementing such features.
1. A system for a bounding box tool comprising:
a server having non-transitory computer readable storage medium with executable instructions for causing one or more processors to:
display a user interface on a display of a device, the user interface displaying image data;
activate a virtual tool to define a bounding box, the virtual tool controlled by commands received from an input device;
display, at the user interface, an indicator for the virtual tool relative to the image data;
receive a first input data point at a first location relative to the image data, the first input data point triggered by a first actuation of the input device;
receive a movement input in a direction relative to the first location, the movement input defined by movement commands from the input device during the actuation of the input device and a release of the first actuation of the input device;
receive a second input data point at a second location relative to the image data, the second input data point triggered by a second actuation of the input device;
compute the bounding box in a bounding box format using the first input data point, the movement input, and the second input data point, the bounding box having corners connected by edges, the corners defined by the first input data point, the movement input, and the second input data point, the edges defined by the direction of the movement input, wherein the bounding box format defines a first corner at the first location, an opposite corner at the second location, and an adjacent corner connected to the first corner by an edge at an angle defined by the movement input;
automatically generate a new data file for storing, at a data store, the bounding box in the bounding box format;
automatically generate a graphical object representing the bounding box using the new data file; and render display, at the user interface, of the graphical object representing the bounding box as an overlay of the image data, the bounding box having corners defined by the first location and the second location.
2. The system of claim 1 wherein the movement input is from the first corner of the bounding box at the first location, the direction of the movement input being towards an adjacent corner of the bounding box.
3. The system of claim 1 wherein the bounding box is an oriented bounding box.
4. The system of claim 1 wherein the processor is configured to dynamically update the interface to display a line between the first location and a current location of the virtual tool when defining the movement input.
5. The system of claim 1 wherein the processor is configured to dynamically update the interface to display a suggested bounding box defined by the first location, the movement input and a current location of the virtual tool when defining the second input data point and prior to the second actuation of the input device.
6. The system of claim 1 wherein the bounding box format is an angle-center-extent format defined by an angle of an edge between a corner and an adjacent corner, centre coordinates of the bounding box, and an extent of the bounding box.
7. The system of claim 1 wherein the processor is configured to extract a region of interest from the image data defined by the bounding box, and save the extracted region of interest in data storage.
8. The system of claim 1 wherein the processor is configured to save a bounding box record for the bounding box with metadata with an identifier for the image data.
9. The system of claim 1 wherein the movement data can be in the direction of an adjacent edge to a corner at the first location.
10. The system of claim 1, wherein the processor is configured to combine the new data file for the bounding box with the image file for an image task profile.
11. A system for a bounding box tool comprising:
a server having non-transitory computer readable storage medium with executable instructions for causing one or more processors to:
display a user interface on a display of a device, the user interface displaying image data;
activate a virtual tool to define a bounding box, the virtual tool controlled by commands received from an input device;
display, at the user interface, an indicator for the virtual tool relative to the image data;
receive a first input data point at a first location relative to the image data, the first input data point triggered by a first actuation of the input device;
receive a movement input in a direction relative to the first location, the movement input defined by movement commands from the input device during the actuation of the input device and a release of the first actuation of the input device;
receive a second input data point at a second location relative to the image data, the second input data point triggered by a second actuation of the input device;
and display, at the user interface, a graphical object representing the bounding box as an overlay of the image data, the bounding box having corners defined by the first location and the second location, the bounding box having edges connecting the corners, the edges defined by the direction of the movement input.
12. The system of claim 11 wherein the processor is configured to compute the bounding box in a bounding box format using the first input data point, the movement input, and the second input data point.
13. The system of claim 11 wherein the processor is configured to compute the bounding box in a bounding box format to define a first corner at the first location, an opposite corner at the second location, and an adjacent corner connected to the first corner by an edge at an angle defined by the movement input.
14. The system of claim 11 wherein the movement input is from the first location, the first location defining a first corner of the bounding box, the direction of the movement input being towards an adjacent corner of the bounding box.
15. The system of claim 11 wherein the bounding box is an oriented bounding box.
16. The system of claim 11 wherein the processor is configured to dynamically update the interface to display a line between the first location and a current location of the virtual tool when defining the movement input.
17. The system of claim 11 wherein the processor is configured to dynamically update the interface to display a suggested bounding box defined by the first location, the movement input and a current location of the virtual tool when defining the second input data point and prior to the second actuation of the input device.
18. The system of claim 12 wherein the bounding box format is an angle-center-extent format defined by an angle of an edge between a corner and an adjacent corner, centre coordinates of the bounding box, and an extent of the bounding box.
19. The system of claim 11 wherein the processor is configured to extract a region of interest from the image data defined by the bounding box, and save the extracted region of interest in data storage.
20. The system of claim 11 wherein the processor is configured to save a bounding box record for the bounding box with metadata with an identifier for the image data.
21. The system of claim 11 wherein the movement data can be in the direction of an adjacent edge to a corner at the first location.
22. A non-transitory computer readable storage medium with executable instructions for causing one or more processors to:
display a user interface on a display of a device, the user interface displaying image data;
activate a virtual tool to define a bounding box, the virtual tool controlled by commands received from an input device;
display, at the user interface, an indicator for the virtual tool relative to the image data;
receive a first input data point at a first location relative to the image data, the first input data point triggered by actuation of the input device;
receive a movement input in a direction relative to the first location, the movement input defined by movement commands from the input device during the actuation of the input device and a release of the actuation of the input device;
receive a second input data point at a second location relative to the image data, the second input data point triggered by actuation of the input device; and display, at the user interface, a graphical object representing the bounding box as an overlay of the image data, the bounding box having corners defined by the first location and the second location, the bounding box having edges connecting the corners, the edges defined by the direction of the movement input.
CA3018739A 2018-09-26 2018-09-26 System and method for bounding box tool Pending CA3018739A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CA3018739A CA3018739A1 (en) 2018-09-26 2018-09-26 System and method for bounding box tool
PCT/CA2019/051378 WO2020061702A1 (en) 2018-09-26 2019-09-26 System and method for bounding box tool

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA3018739A CA3018739A1 (en) 2018-09-26 2018-09-26 System and method for bounding box tool

Publications (1)

Publication Number Publication Date
CA3018739A1 (en)

Family

ID=69948318

Family Applications (1)

Application Number Title Priority Date Filing Date
CA3018739A Pending CA3018739A1 (en) 2018-09-26 2018-09-26 System and method for bounding box tool

Country Status (1)

Country Link
CA (1) CA3018739A1 (en)


Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20230926