US20100295850A1 - Apparatus and method for finding visible points in a cloud point - Google Patents

Apparatus and method for finding visible points in a cloud point

Info

Publication number
US20100295850A1
US20100295850A1 (Application US12/471,381)
Authority
US
United States
Prior art keywords
point
points
visible
determining
viewpoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/471,381
Other versions
US8531457B2 (en)
US20110267345A9 (en)
Inventor
Sagi Katz
Ayellet Tal
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Technion Research and Development Foundation Ltd
Original Assignee
Technion Research and Development Foundation Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/IL2007/001472 external-priority patent/WO2008065661A2/en
Application filed by Technion Research and Development Foundation Ltd filed Critical Technion Research and Development Foundation Ltd
Priority to US12/471,381 priority Critical patent/US8531457B2/en
Assigned to TECHNION RESEARCH AND DEVELOPMENT FOUNDATION LTD reassignment TECHNION RESEARCH AND DEVELOPMENT FOUNDATION LTD ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KATZ, SAGI, TAL, AYELLET
Publication of US20100295850A1 publication Critical patent/US20100295850A1/en
Assigned to OXFORD FINANCE CORPORATION reassignment OXFORD FINANCE CORPORATION SECURITY AGREEMENT Assignors: SUPERDIMENSION LTD.
Publication of US20110267345A9 publication Critical patent/US20110267345A9/en
Priority to US13/960,852 priority patent/US8896602B2/en
Application granted granted Critical
Publication of US8531457B2 publication Critical patent/US8531457B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects
    • G06T15/40 - Hidden part removal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects
    • G06T15/20 - Perspective computation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/50 - Lighting effects
    • G06T15/60 - Shadow generation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 - Indexing scheme for image generation or computer graphics
    • G06T2210/12 - Bounding box

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The subject matter discloses a method of determining whether a point in a computerized image is visible from a viewpoint, where said image is represented as a point cloud. The method comprises: performing inversion on the vicinity of the point, thus creating a computerized inversed object, where each point in the vicinity of the point is related to a parallel point in the computerized inversed object; and obtaining a convex hull of the inversed object. The point is likely to be visible from the viewpoint in case its parallel point belongs to the point set composing the convex hull. The method is also useful for shadow casting and for determining the location of an image-capturing device within a volume.

Description

    RELATED APPLICATIONS
  • This application claims priority from provisional application No. 60/867,725 filed Nov. 27, 2006, which is hereby incorporated by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to computational geometry in general and to determining visibility of points from a predefined point of view in particular.
  • 2. Discussion of the Related Art
  • Computer graphics applications employ processing units to determine the visibility of points. Visibility determination is the process of determining whether a particular point in space is visible from a specific point of view. Determining the visibility of a surface of an object quickly and efficiently has long been a fundamental problem in computer graphics.
  • Determining visible points is useful for indicating the field of view from a specific point, for defining shadow casting for the gaming industry, for the security industry, and the like. For example, finding visible points may result in a more clearly visible object, making it easier to see the particular features, curves and shapes of the object. In an exemplary case, a computerized image depicting the face of a person can be processed such that the human face contours are visibly shown after the processing. Such processing may change the color of points that are not visible, hence distinguishing visible points from non-visible points in the image. After changing the color of the points that are visible from a specific point of view, a person watching the image can determine whether the human face or the back of the human head is shown.
  • A point cloud is a set of three-dimensional points describing the outlines or surface features of an object, as may be produced by a 3D digitizer. Alternatively, a point cloud can also represent properties in an N-dimensional space. Evidently, points cannot occlude one another unless they are collinear with the viewpoint; as a result, the points in a point cloud are not, by themselves, considered hidden. However, once the surface from which the points are sampled is reconstructed (in 2D or 3D), it is possible to define which of the points are visible to a viewer having a predetermined point of view. A point cloud thus inherently contains information from which it is possible to extract the visibility of the points to such a viewer.
  • Current solutions determine point visibility by constructing a surface from the points in the point cloud and using the surface to determine which of the points are visible. Reconstruction of the surface from the points requires considerable time and computation resources.
  • In addition, the visibility of point clouds has been addressed in the context of rendering images or representations of objects, for example, by rendering visibility maps that indicate whether one point can be viewed from another before the data is actually required. This is done by a matrix having values representing the level of visibility from a viewpoint. This way, a runtime of O(1) is required to receive an answer concerning visibility, but a runtime of at least O(n²) is required to prepare the matrix. No solutions having a runtime of O(n log n) are suggested in the art. Moreover, camera rotation or a change in the field of view requires time-consuming visibility recalculation.
  • Therefore, it is desirable to provide a method and apparatus for efficiently determining the visible points without constructing a surface of points. Further, such method and apparatus are desired to be implemented using less memory, low complexity (e.g. O(n log n)) and be independent of camera rotation.
  • SUMMARY OF THE PRESENT INVENTION
  • The subject matter discloses a method for determining points that are visible from a specific point of view. The method comprises a step of inverting at least a portion of the image, thus generating an inversed object. Each point in the object has a parallel point in the inversed object. The next step is determining the convex hull of the inversed object. Each point on the convex hull has a parallel point in the original object that is likely to be visible from the point of view. In some cases, additional conditions are applied to the points on the convex hull, for example, a condition on the size of the angle formed at the examined point by its two neighboring points on the convex hull.
  • Determining visible points is also useful for determining shadow casting, since the viewpoint may also function as a light source; hence, visible points are those illuminated by a light source located at the viewpoint. Computing the convex hull requires a runtime of O(n log n), while other methods for determining visible points require a runtime of O(n²). An image-detecting device such as a camera preferably captures the image in case it is a 2D representation. A processing unit within the camera, or within a computing device to which the image is sent or copied, preferably performs the process of determining the visible points.
  • The subject matter discloses a method of determining whether a specific point in a computerized image is visible from a viewpoint, where said image is represented as a point cloud, the method comprising: performing inversion on points located in the vicinity of the specific point, thus creating a computerized inversed object, where each point in the vicinity of the specific point is translated to a parallel point in the computerized inversed object; defining a convex hull of the inversed object; and determining whether the specific point is visible from the viewpoint according to the position of its parallel point on the convex hull relative to its neighboring points.
  • The method may further comprise a step of applying at least one condition to the parallel point of the specific point before determining that the specific point is visible.
  • One such condition compares the angle between the parallel point of the specific point and two neighboring points in the point set composing the convex hull to a predetermined value; a line between the point and the viewpoint divides the angle. The method may further comprise a step of coloring the specific point in case it is determined to be visible from the viewpoint.
  • The method may further comprise a step of removing the specific point from the computerized image in case said point is not determined to be visible from the viewpoint.
  • The method may further comprise determining shadow casting of the specific point by determining that the point is visible from a viewpoint representing a light source. In some embodiments, the inversion is a spherical inversion.
  • It is another aspect of the subject matter to disclose a method for determining an optimal location for positioning an image capturing device within a volume, the method comprising: obtaining a plurality of points to be visible from the image capturing device; performing inversion on points located in the vicinity of the plurality of points, thus creating a computerized inversed object, where each point in the vicinity of the plurality of points is translated to a parallel point in the computerized inversed object; defining a convex hull of the inversed object; determining whether a point of the plurality of points is visible from the viewpoint according to the position of its parallel point on the convex hull relative to its neighboring points; repeating said determining for multiple locations within the volume, determining whether a predetermined set of points is visible from each location; and selecting the optimal location of the image capturing device based on the results of said repeated determining.
  • The method comprises determining visibility of the plurality of points by indicating that the number or percentage of visible points of the plurality of points is higher than a threshold.
  • The number of points of the plurality of points that are visible from the determined location is higher than the number of points in the plurality of points that are visible from other locations.
  • The subject matter discloses a method for determining the amount of light falling on at least one point using the method of claim 1, the method comprising: determining direct illumination by determining the visibility of the at least one point from a first set of points acting as a light source; locating a second set of points that are determined to be visible from the first set of points; determining indirect illumination by determining the visibility of the at least one point from the second set of points acting as a source of reflected light; and setting the amount of light falling on the at least one point based on it being directly illuminated or indirectly illuminated.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Exemplary non-limiting embodiments of the disclosed subject matter will be described with reference to the following description of the embodiments, in conjunction with the figures. The figures are generally not shown to scale and any sizes are only meant to be exemplary and not necessarily limiting. Corresponding or like elements are optionally designated by the same numerals or letters.
  • FIG. 1 shows computational elements implementing a method for determining visible points from a viewpoint, in accordance with an exemplary embodiment of the disclosed subject matter;
  • FIG. 2 illustrates a flow chart of a method for determining visibility of points from a viewpoint, in accordance with an exemplary embodiment of the disclosed subject matter;
  • FIG. 3 is an illustration of a spherical inversion of a cat, in accordance with an exemplary embodiment of the disclosed subject matter;
  • FIG. 4 shows a convex hull of the palm of a hand as disclosed in the subject matter, in accordance with an exemplary embodiment of the disclosed subject matter;
  • FIGS. 5A and 5B show a convex hull performed on two different shapes performed with the same inversion, in accordance with an exemplary embodiment of the disclosed subject matter;
  • FIGS. 6A and 6B illustrate the visibility of points from different viewpoints, in accordance with an exemplary embodiment of the disclosed subject matter; and,
  • FIG. 7 illustrates shadow casting of an image performed after visibility determination, in accordance with an exemplary embodiment of the disclosed subject matter.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • The disclosed subject matter describes a novel and unobvious method for determining visible points in a point cloud representing an object.
  • In a computerized image containing an object, it is complicated to determine whether a specific point or portion of the image is visible from a predetermined point of view. A captured image may be insufficiently clear, so that a person watching the image, or a computer processing it, cannot extract important data related to the image: for example, whether a portion of the image is visible from a specific point of view, whether a point in the image may be illuminated by a light source located at another point, or whether an obstacle, such as another object, occludes it.
  • The technical solution to the above-discussed problem is a two-step algorithm. The first step is inverting an object in the image, a portion of the image, or the vicinity of a specific point. After an object is inverted, each point in the object has a parallel point in the inversion. Then, having an inversed shape, the next step is obtaining the convex hull of the inversed shape. The points in the original shape that have parallel points on the convex hull are likely to be visible.
  • FIG. 1 illustrates a computerized environment 100 implementing methods for determining the visibility of points in a computerized image represented by a point cloud, according to an exemplary embodiment of the subject matter. The point cloud can also be acquired from other sources, such as 3D scanners, stereo or range cameras, databases, and the like. Assuming each point in a point cloud can be distinguished by its coordinates or by its location in the point cloud, such data is preferably stored prior to the process of finding the visible points. Computerized environment 100 comprises an I/O device 110 for receiving a captured image 117 captured by an imaging device 115. The captured image 117 is transmitted to a memory 120, where a processing unit 130 handles the image. Processing unit 130 performs an inversion, for example a spherical inversion, on at least a portion of the captured image. Multiple inversions, and the rules concerning the resolution and other characteristics related to the inversion, may be stored in storage 140. Each point in the original object from captured image 117 has a parallel point in the inversed object. Next, processing unit 130 determines a convex hull of the inversed object. A point in captured image 117 is likely to be visible in case its parallel point is part of the point set composing the convex hull of the inversed image. After determining the visible points, some of the pixels in the image may be colored or otherwise processed to better distinguish the visible points from the non-visible points. In an exemplary embodiment of the subject matter, the non-visible points are removed from captured image 117 to generate a processed image 145. In other embodiments, processed image 145 may modify data related to visible points, such as enlarging visible points, highlighting visible points, and the like. Processed image 145 may be displayed on monitor 150. The results may be used by further computational units or for other applications such as navigation.
  • One example of an inversion is spherical flipping, as described in Katz, S., Leifman, G., and Tal, A., "Mesh segmentation using feature point and core extraction", The Visual Computer 21, 8-10 (2005), 865-875, which is hereby incorporated by reference. Another example of an inversion is a simple inversion, in which the radius r is inversed into 1/r. Such a simple inversion is performed using the equation

    $f(x, y) = \frac{1}{\sqrt{x^2 + y^2}}(\cos\theta, \sin\theta)$, where $(x, y) = r(\cos\theta, \sin\theta)$,

    that can be modified into the equation

    $f(x, y) = \frac{1}{x^2 + y^2}(x, y)$.

    Both forms map a point at radius r from the viewpoint to the point at radius 1/r along the same ray.
  • Typically, when performing an inversion, the computational entity generates an approximately elliptical or spherical shape with the viewpoint at the center or at one of the foci of the shape.
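  • To make the two-step procedure concrete, the following is a minimal sketch (not part of the patent text) in Python. It assumes spherical flipping as the inversion, using the mirror p' = p + 2(R - ||p||)·p/||p|| about a sphere of radius R centered at the viewpoint, and uses scipy's convex hull routine; the function name, the choice of R, and the assumption that no point coincides with the viewpoint are all illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def visible_points(points, viewpoint, radius_factor=100.0):
    """Sketch of the two-step procedure: (1) invert the cloud by
    spherical flipping about a sphere centered at the viewpoint,
    (2) take the convex hull of the flipped points plus the
    viewpoint. Points whose flipped (parallel) images are hull
    vertices are reported as likely visible."""
    p = np.asarray(points, dtype=float) - np.asarray(viewpoint, dtype=float)
    norms = np.linalg.norm(p, axis=1, keepdims=True)  # assumed nonzero
    # The sphere must enclose the whole translated cloud.
    R = radius_factor * norms.max()
    # Spherical flipping: each point is mirrored about the sphere,
    # so points near the viewpoint map far away and vice versa.
    flipped = p + 2.0 * (R - norms) * (p / norms)
    # The viewpoint (the origin after translation) joins the flipped set.
    hull_input = np.vstack([flipped, np.zeros((1, p.shape[1]))])
    hull = ConvexHull(hull_input)
    # Keep hull vertices that correspond to original points (drop the
    # appended viewpoint, whose index is len(p)).
    return sorted(int(i) for i in hull.vertices if i < len(p))
```

    For example, calling visible_points(cloud, np.array([0.0, 0.0, 2.0])) on points sampled from a unit sphere returns, roughly, the hemisphere facing the viewpoint. The sketch works unchanged in 2D or 3D, matching the flow of FIG. 2 below.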
  • FIG. 2 is a flow chart of a method of determining the visibility of points on an object from a viewpoint, according to an exemplary embodiment of the subject matter. In step 205, data related to the object and the points is stored in storage 140. Such data may be the coordinates of the points in the point cloud, the number of points in the point cloud, the coordinates of the viewpoint, and the like. Next, in step 210, the application handling the process of determining visibility of points performs inversion on the object. As a result of the inversion, a new set of points is generated. In an exemplary embodiment, the number of points in the new set is equal to the number of points in the point cloud representing the object. Preferably, each point in the point cloud representing the object has a parallel point in the new set of points related to the inversed object. In step 220, a convex hull is defined from the inversed object. In one embodiment of the subject matter, all the points that have parallel points on the convex hull of the inversed object are visible from the viewpoint. In other embodiments, as shown in step 230, other conditions are applied to the points that have parallel points on the convex hull before determining visibility. One example of a condition applied to a point is determining whether the angle between the parallel point on the convex hull and the two neighboring points on the hull is lower or higher than a threshold; a line between the parallel point and the viewpoint divides the angle. Once the condition is satisfied, in step 240 the application determines whether the point is visible from the specific viewpoint. In step 250, the visible points are colored when shadow casting is performed. In some exemplary embodiments, the level of visibility may be determined as a function of the portion of the point occluded by other points in the point cloud. Hence, visible points may be colored in a color different from non-visible points.
  • FIG. 3 illustrates a spherical inversion according to an exemplary embodiment of the subject matter, in which a cat-shaped circumference surrounds a viewer located at a viewpoint at the center 310 of approximate sphere 320. The circumference is inverted to the outside of approximate sphere 320 having center 310. The result of this specific inversion is that each point composing the cat-shaped circumference has a parallel point outside approximate sphere 320. In FIG. 3, the article surrounding the object is approximate sphere 320; in some inversions, the parallel points may reside within the article surrounding the object. In a spherical inversion, a parallel point resides on the same line leading from center 310 through the point in the original object that was inverted. Further, the distance between an original point on the cat-like circumference and approximate sphere 320 and the distance between the parallel point of the same original point and approximate sphere 320 are equal or have a constant ratio. Each point on the cat is inside approximate sphere 320, while the parallel points are outside the sphere; for example, the point indicating a portion of the cat's head 332 has a parallel point 334 outside approximate sphere 320, satisfying the conditions described above. In this exemplary embodiment, the distance between point 332 and sphere 320 is equal to the distance between parallel point 334 and sphere 320. It is noted that point 332 is in relative proximity to approximate sphere 320; hence, parallel point 334 is closer to approximate sphere 320 than the parallel points in its vicinity. Similarly, since point 342 is relatively far from approximate sphere 320 and close to center 310, parallel point 344 resides in the proximity of the cat's inversion, relative to approximate sphere 320. Approximate sphere 320 may be a circle, an ellipse, or a combination of polygonal and elliptical shapes.
  • FIG. 4 shows a convex hull of a palm of a hand 400. The term convex hull according to the disclosed subject matter refers to the intersection of all convex sets which contain a given set of points. Another definition may be the set of points that reside on lines generated by stretching a band over an object. An alternative definition may be that the convex hull of a shape S is the unique convex polygon which contains S and whose vertices are points of S. A convex hull can also be depicted as points creating a polygon outside an elliptical or semi-elliptical shape or volume, in two or three dimensions.
  • The result of the convex hull computation is a set of points. In the exemplary embodiment shown in FIG. 4, points 410, 420, 430, 440, 450, 460 and 470 are at least a subset of the points contained within the set of points assembling the convex hull of palm 400. The lines connecting the points of the convex hull belonging to the set of points are outside palm 400. For example, line 415 connects point 410 and point 420. The lines connecting the points are useful in determining which of the points in the point set has a visible parallel point in the object. Hence, data related to the lines, such as directions, coordinates, angles toward a specific point or line, offsets and the like, is stored in storage and preferably utilized when determining point visibility.
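  • As an illustration (assumed, not taken from the patent text), both the point set of a convex hull and the lines connecting its points are directly available from standard computational-geometry libraries:

```python
import numpy as np
from scipy.spatial import ConvexHull

# A small 2D point set; only some of the points lie on the convex hull.
pts = np.array([[0, 0], [2, 0], [2, 2], [0, 2], [1, 1], [0.5, 1.2]])
hull = ConvexHull(pts)

print(hull.vertices)   # indices of the points composing the hull
print(hull.simplices)  # index pairs: the lines connecting hull points
```

    Storing hull.simplices alongside the point coordinates corresponds to keeping the line data (directions, angles, offsets) described above.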
  • FIGS. 5A and 5B show convex hulls 507, 557, respectively, surrounding two different shapes and produced with the same inversion. The figures exemplify the difference in determining visible points, in contrast to non-visible points, from a similar viewpoint 510, 550 using similar inversion methods. FIG. 5A depicts a heart-shaped object 503 having center 510. Center 510 is a viewpoint of the points in the point cloud that compose object 503. Parallel point 522 is the point generated by inverting point 520. Parallel point 522 lies on convex hull 507 and, as a result, point 520 is likely to be visible from center 510. Similarly, points 530 and 540 are likely to be visible from center 510 since parallel points 532 and 542 reside on convex hull 507. A point is determined to be visible if the angle formed at its parallel point by the two neighboring hull points, toward the viewpoint, is smaller than a threshold value. In this case, the angle is defined by summing βk and βj. The center of the angle is parallel point 532, so the point to be determined visible or not visible is point 530. Since the angle is smaller than 180 degrees, which is the threshold in the exemplary embodiment, point 530 is determined visible from center 510. Another way of defining the angle between the two neighboring points and the point that is specifically determined as visible or non-visible is to determine whether the angle points at the viewpoint or not. For example, when determining whether point 530 is visible, the angle is centered at point 532, the parallel point of point 530.
  • FIG. 5B discloses an object 555 viewed from a center 550. The points belonging to object 555 have parallel points in an inversed object 557. For example, point 560 has a parallel point 565 that resides on convex hull 557, the inversion of object 555. Similarly, points 570 and 580 of object 555 have parallel points 575 and 585, respectively. In order to determine the visibility of point 570 from center 550, the angle between the two neighboring points of parallel point 575 is calculated and compared to a predetermined threshold. In this case, the threshold is 180 degrees and the angle is bigger than the threshold; hence, point 570 is determined to be non-visible from center 550. It is shown that line 590, which passes between neighboring parallel points 565 and 585, resides fully within inversed object 557. Determining the number or percentage of points on the line between the neighboring points that reside in the inversed object, and comparing the result to a predetermined threshold, is an alternative test for determining whether a point on the convex hull is visible.
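  • The angle test of FIGS. 5A and 5B can be sketched as follows (a 2D illustration under assumed names: the two hull neighbors of the parallel point are passed in explicitly, and the 180-degree threshold of the exemplary embodiment is the default):

```python
import numpy as np

def passes_angle_test(parallel_pt, neighbor_a, neighbor_b, viewpoint,
                      threshold_deg=180.0):
    """Return True if the angle at `parallel_pt` between its two
    hull neighbors, split by the line toward the viewpoint, is
    smaller than the threshold (the FIG. 5A case)."""
    c = np.asarray(parallel_pt, dtype=float)
    v = np.asarray(viewpoint, dtype=float) - c
    a = np.asarray(neighbor_a, dtype=float) - c
    b = np.asarray(neighbor_b, dtype=float) - c

    def angle_deg(u, w):
        cos = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # beta_k + beta_j: the two half-angles on either side of the
    # dividing line from the parallel point toward the viewpoint.
    return angle_deg(a, v) + angle_deg(b, v) < threshold_deg
```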
  • The steps described above, mainly inversing the object, determining a convex hull, and determining the visibility of points that have parallel points on the convex hull, are preferably performed by a computerized application. The image processing applications comprise software components written in any programming language, such as C, C#, C++, Matlab, Java, VB, VB.Net, or the like, and developed under any development environment, such as Visual Studio.Net, J2EE or the like. It will be appreciated that the applications can alternatively be implemented as firmware ported for a specific processor, such as a digital signal processor (DSP) or a microcontroller, or can be implemented as hardware or configurable hardware such as a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC). The methods can also be adapted to be executed on any type of computing platform that is provisioned with a memory device 120, a CPU 130, and several I/O ports 110 as noted above.
  • Referring now to FIGS. 6A and 6B, which illustrate the results of the methods described above. Both figures show the visibility of points from different viewpoints. Visible points are shown in gray, while non-visible points are shown in white. FIG. 6A shows viewpoint 610 and a plurality of points near viewpoint 610. Some points, such as point 612, are visible from viewpoint 610. Obstacles, such as obstacle 614, prevent visibility of other points near viewpoint 610. One example of a non-visible point is point 616, hidden behind an obstacle. FIG. 6B shows the same obstacles as FIG. 6A and a different viewpoint 620. Hence, the visibility of points differs, both in the number of visible points and in their locations. Visible point 622 is not visible from the viewpoint of FIG. 6A, while non-visible point 626 is visible from the viewpoint of FIG. 6A.
  • FIG. 7 illustrates shadow casting of an image, performed after visibility determination as described above. Shadow casting is provided according to the visibility determination, since visibility can be transformed into an amount of light emitted to a point. For example, in case a point in a point cloud is visible, it may indicate that the sun or another light source lights the point or points in its vicinity. In other implementations of visibility determination, a visible point may be colored to better distinguish it from other points. FIG. 7 shows a dinosaur 700 viewed from viewpoint 710. Visible points, such as point 720, are colored white, while non-visible points, such as point 730, are colored in a dark color, such as black. Coloring pixels or points in an object as a function of the points' visibility is parallel to determining shadow casting, since visibility from one viewpoint has a similar effect to light emitted from the same viewpoint onto an object. The points that are visible from a viewpoint correspond to the points that are illuminated in case a light source resides at the viewpoint.
  • One technical effect of the subject matter is the ability to determine visibility for both dense point clouds and sparse point clouds without creating a new image or creating a three-dimensional surface.
  • Another technical effect is that the algorithm disclosed above can be applied to multi-dimensional representations. In such cases, the complexity of generating a convex hull is higher than O(n log n). In either case, known methods such as reconstruction or image rendering are generally difficult and time-consuming.
  • Another technical effect is that the methods described above are independent from change in the rotation or field of view of a camera or another image processing device used for capturing image data. Hence, the method and system of the disclosed subject matter do not require visibility recalculation. The viewpoint can be positioned either within or outside the point cloud.
  • One aspect of the invention is that it adaptively defines the shape of a region between a point and a viewpoint, which indicates the amount of visibility; in other words, "how much" of a point is visible.
  • The methods described above are computationally less complex than the known methods. The first stage of the algorithm, inversion, requires a runtime of O(n). The second stage, convex hull computation, requires a runtime of O(n log n). Therefore, the asymptotic complexity of the algorithm is O(n log n).
  • Another technical effect of the subject matter is the ability to distinguish between two poses that produce very similar projections, namely an object facing towards the camera and an object facing away from it. This ability is achieved by determining which points of a 3D object are visible and which are hidden. By removing the hidden points, or the data related to them, from the image, the only pixels displayed are those seen from the specific viewpoint. Hence, when a face is shown from a specific point of view, it can be indicated, either automatically or by a human, whether the object faces the viewpoint or not.
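  Concretely, under the same assumed helper, hidden point removal reduces to index filtering; the camera position here is hypothetical:

    camera = np.array([0.0, 0.0, 10.0])                   # hypothetical camera position
    front_facing = cloud[visible_points(cloud, camera)]   # keep only the seen points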
  • Another technical effect of the subject matter is the ability to determine desired locations for cameras. This is achieved by determining point visibility from multiple candidate locations using the above described method and comparing the number or percentage of visible points at each location. For example, a location with 22 visible points is preferred over another location with 18 visible points.
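  As a sketch under the same assumptions, candidate locations can be scored by their visible-point counts and the best one selected; the candidate coordinates below are hypothetical:

    candidates = [np.array([0.0, 0.0, 5.0]),
                  np.array([5.0, 5.0, 5.0]),
                  np.array([-5.0, 0.0, 5.0])]
    counts = [len(visible_points(cloud, c)) for c in candidates]
    best = candidates[int(np.argmax(counts))]   # e.g., 22 visible points beats 18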
  • The invention can be extended to 3D or to any other number of dimensions. In 3D, instead of two neighboring points, several neighboring points define the surface enclosing the empty volume; the algorithm of the disclosed subject matter, however, uses the same convex hull construction.
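  Under the assumptions of the earlier sketch, the same call covers both cases, since scipy's ConvexHull accepts points of any dimension:

    pts2d = np.random.rand(500, 2)               # planar point cloud
    pts3d = np.random.rand(500, 3)               # volumetric point cloud
    vis2d = visible_points(pts2d, np.array([2.0, 2.0]))
    vis3d = visible_points(pts3d, np.array([2.0, 2.0, 2.0]))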
  • Another technical effect of the disclosed subject matter is the ability to determine the amount of light falling on a surface, taking into account not only the light falling directly from a light source, as is done when determining direct illumination, but also light that has undergone reflection from other surfaces in the scene, as is done when determining indirect illumination. Direct illumination can be obtained by using the point visibility methods to determine the visibility between a point light source and the illuminated surface. Indirect illumination can be obtained by using the same methods to determine the visibility between a first surface, acting as a source of reflected light, and a second surface being illuminated. The global illumination of a surface can then be determined as a function of the direct illumination and the indirect illumination so determined.
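  The rough sketch below combines the two terms using the visible_points helper above; the single-bounce model, the reflector sampling, and the weights w_direct and w_indirect are illustrative assumptions rather than a prescribed implementation:

    def global_illumination(cloud, light, n_reflectors=8,
                            w_direct=0.8, w_indirect=0.2):
        # Direct term: points visible from the light source are lit.
        direct = np.zeros(len(cloud))
        vis = visible_points(cloud, light)
        direct[vis] = 1.0
        # Indirect term: points visible from the light act as reflecting
        # sources; sample a few of them and accumulate visibility from each.
        indirect = np.zeros(len(cloud))
        samples = cloud[vis[:n_reflectors]]
        for s in samples:
            indirect[visible_points(cloud, s)] += 1.0
        if len(samples):
            indirect /= len(samples)
        # Global illumination as a weighted function of both terms.
        return w_direct * direct + w_indirect * indirect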
  • While the disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings without departing from the essential scope thereof. Therefore, it is intended that the disclosed subject matter not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but only by the claims that follow.

Claims (13)

1. A method of determining whether a specific point in a computerized image is visible from a viewpoint, said image being represented as a point cloud, the method comprising:
a. performing inversion on points located in the vicinity of the specific point, thus creating a computerized inversed object, wherein each point in the vicinity of the specific point is translated to a parallel point in the computerized inversed object;
b. defining a convex hull of the inversed object;
c. determining if the specific point is visible from the viewpoint according to the position of its parallel point on the convex hull.
2. The method according to claim 1, wherein determining if the specific point is visible from the viewpoint is performed relative to the parallel point's neighboring points.
3. The method according to claim 1, further comprising a step of applying at least one condition to the parallel point of the specific point before determining that the specific point is visible.
4. The method according to claim 3, wherein the condition is comparing the angle between the parallel point of the specific point and two neighboring points in the point set composing the convex hull to a predetermined value, wherein a line between the point and the viewpoint divides the angle.
5. The method according to claim 1, further comprising a step of coloring the specific point in case it is determined visible from the viewpoint.
6. The method according to claim 1, further comprising a step of removing the specific point from the computerized image in case said point is not determined visible from the viewpoint.
7. The method according to claim 1, further comprising determining shadow casting of the specific point by determining whether the point is visible from a viewpoint representing a light source.
8. The method according to claim 1, wherein the inversion is a spherical inversion.
9. The method according to claim 1, wherein the method is applied to a three dimensional point cloud representation of the image.
10. A method for determining an optimal location for positioning an image capturing device within a volume, the method comprising:
a. obtaining a plurality of points to be visible from the image capturing device;
b. performing inversion on points located in the vicinity of the plurality of points, thus creating a computerized inversed object, wherein each point in the vicinity of the plurality of points is translated to a parallel point in the computerized inversed object;
c. defining a convex hull of the inversed object;
d. determining if a point of the plurality of points is visible from the viewpoint according to the position of its parallel point on the convex hull relative to its neighboring points;
e. repeating said determining for multiple locations within the volume, determining whether a predetermined set of points is visible from each location;
f. selecting the optimal location of the image capturing device based on the results of said repeated determining.
11. The method according to claim 10, wherein determining visibility of the plurality of points comprises indicating whether the number or percentage of visible points of the plurality of points is higher than a threshold.
12. The method according to claim 10, wherein the number of points of the plurality of points that are visible from the determined location is higher than the number of points in the plurality of points that are visible from other locations.
13. A method for determining the amount of light falling on an at least one point using the method of claim 1, the method comprising:
a. determining direct illumination by determining visibility of the at least one point from a first set of points acting as a light source;
b. locating a second set of points that are determined to be visible from the first set of points;
c. determining indirect illumination by determining visibility of the at least one point from the second set of points acting as a source of reflecting light;
d. setting the amount of light falling on the at least one point based on it being directly illuminated or indirectly illuminated.
US12/471,381 2006-11-29 2009-05-24 Apparatus and method for finding visible points in a cloud point Active 2029-08-13 US8531457B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/471,381 US8531457B2 (en) 2006-11-29 2009-05-24 Apparatus and method for finding visible points in a cloud point
US13/960,852 US8896602B2 (en) 2006-11-29 2013-08-07 Apparatus and method for finding visible points in a point cloud

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US86772506P 2006-11-29 2006-11-29
PCT/IL2007/001472 WO2008065661A2 (en) 2006-11-29 2007-11-29 Apparatus and method for finding visible points in a point cloud
US12/471,381 US8531457B2 (en) 2006-11-29 2009-05-24 Apparatus and method for finding visible points in a cloud point

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2007/001472 Continuation WO2008065661A2 (en) 2006-11-29 2007-11-29 Apparatus and method for finding visible points in a point cloud

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/960,852 Division US8896602B2 (en) 2006-11-29 2013-08-07 Apparatus and method for finding visible points in a point cloud

Publications (3)

Publication Number Publication Date
US20100295850A1 true US20100295850A1 (en) 2010-11-25
US20110267345A9 US20110267345A9 (en) 2011-11-03
US8531457B2 US8531457B2 (en) 2013-09-10

Family

ID=43124297

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/471,381 Active 2029-08-13 US8531457B2 (en) 2006-11-29 2009-05-24 Apparatus and method for finding visible points in a cloud point
US13/960,852 Active US8896602B2 (en) 2006-11-29 2013-08-07 Apparatus and method for finding visible points in a point cloud

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/960,852 Active US8896602B2 (en) 2006-11-29 2013-08-07 Apparatus and method for finding visible points in a point cloud

Country Status (1)

Country Link
US (2) US8531457B2 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150046862A1 (en) * 2013-08-11 2015-02-12 Silicon Graphics International Corp. Modifying binning operations
US20150199420A1 (en) * 2014-01-10 2015-07-16 Silicon Graphics International, Corp. Visually approximating parallel coordinates data
US11144184B2 (en) 2014-01-23 2021-10-12 Mineset, Inc. Selection thresholds in a visualization interface
CN107967710B (en) * 2016-10-20 2021-05-25 Ricoh Co., Ltd. Three-dimensional object description method and device
JP2019128641A (en) * 2018-01-22 2019-08-01 Canon Inc. Image processing device, image processing method and program

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5546515A (en) * 1992-07-08 1996-08-13 Matsushita Electric Industrial Co., Ltd. Image processing apparatus
US6266064B1 (en) * 1998-05-29 2001-07-24 Microsoft Corporation Coherent visibility sorting and occlusion cycle detection for dynamic aggregate geometry
US20020164067A1 (en) * 2001-05-02 2002-11-07 Synapix Nearest neighbor edge selection from feature tracking
US20070078636A1 (en) * 2005-10-04 2007-04-05 Rdv Systems Ltd. Method and Apparatus for Virtual Reality Presentation of Civil Engineering, Land Planning and Infrastructure
US20070103460A1 (en) * 2005-11-09 2007-05-10 Tong Zhang Determining camera motion
US20070247460A1 (en) * 2006-04-19 2007-10-25 Pixar Systems and methods for light pruning

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10025798B2 (en) 2013-01-14 2018-07-17 Bar Ilan University Location-based image retrieval
WO2014108907A1 (en) * 2013-01-14 2014-07-17 Bar-Ilan University Location-based image retrieval
US20160133026A1 (en) * 2014-11-06 2016-05-12 Symbol Technologies, Inc. Non-parametric method of and system for estimating dimensions of objects of arbitrary shape
US9600892B2 (en) * 2014-11-06 2017-03-21 Symbol Technologies, Llc Non-parametric method of and system for estimating dimensions of objects of arbitrary shape
US10140725B2 (en) 2014-12-05 2018-11-27 Symbol Technologies, Llc Apparatus for and method of estimating dimensions of an object associated with a code in automatic response to reading the code
US10352689B2 (en) 2016-01-28 2019-07-16 Symbol Technologies, Llc Methods and systems for high precision locationing with depth values
US10145955B2 (en) 2016-02-04 2018-12-04 Symbol Technologies, Llc Methods and systems for processing point-cloud data with a line scanner
US10721451B2 (en) 2016-03-23 2020-07-21 Symbol Technologies, Llc Arrangement for, and method of, loading freight into a shipping container
US9805240B1 (en) 2016-04-18 2017-10-31 Symbol Technologies, Llc Barcode scanning and dimensioning
US10776661B2 (en) 2016-08-19 2020-09-15 Symbol Technologies, Llc Methods, systems and apparatus for segmenting and dimensioning objects
US11042161B2 (en) 2016-11-16 2021-06-22 Symbol Technologies, Llc Navigation control method and apparatus in a mobile automation system
US10451405B2 (en) 2016-11-22 2019-10-22 Symbol Technologies, Llc Dimensioning system for, and method of, dimensioning freight in motion along an unconstrained path in a venue
US10354411B2 (en) 2016-12-20 2019-07-16 Symbol Technologies, Llc Methods, systems and apparatus for segmenting objects
US11978011B2 (en) 2017-05-01 2024-05-07 Symbol Technologies, Llc Method and apparatus for object status detection
US10949798B2 (en) 2017-05-01 2021-03-16 Symbol Technologies, Llc Multimodal localization and mapping for a mobile automation apparatus
US10726273B2 (en) 2017-05-01 2020-07-28 Symbol Technologies, Llc Method and apparatus for shelf feature and object placement detection from shelf images
US11367092B2 (en) 2017-05-01 2022-06-21 Symbol Technologies, Llc Method and apparatus for extracting and processing price text from an image set
US11449059B2 (en) 2017-05-01 2022-09-20 Symbol Technologies, Llc Obstacle detection for a mobile automation apparatus
US10591918B2 (en) 2017-05-01 2020-03-17 Symbol Technologies, Llc Fixed segmented lattice planning for a mobile automation apparatus
US11093896B2 (en) 2017-05-01 2021-08-17 Symbol Technologies, Llc Product status detection system
US10663590B2 (en) 2017-05-01 2020-05-26 Symbol Technologies, Llc Device and method for merging lidar data
US11600084B2 (en) 2017-05-05 2023-03-07 Symbol Technologies, Llc Method and apparatus for detecting and interpreting price label text
US10572763B2 (en) 2017-09-07 2020-02-25 Symbol Technologies, Llc Method and apparatus for support surface edge detection
US10521914B2 (en) 2017-09-07 2019-12-31 Symbol Technologies, Llc Multi-sensor object recognition system and method
US10832436B2 (en) 2018-04-05 2020-11-10 Symbol Technologies, Llc Method, system and apparatus for recovering label positions
US10823572B2 (en) 2018-04-05 2020-11-03 Symbol Technologies, Llc Method, system and apparatus for generating navigational data
US10809078B2 (en) 2018-04-05 2020-10-20 Symbol Technologies, Llc Method, system and apparatus for dynamic path generation
US10740911B2 (en) 2018-04-05 2020-08-11 Symbol Technologies, Llc Method, system and apparatus for correcting translucency artifacts in data representing a support structure
US11327504B2 (en) 2018-04-05 2022-05-10 Symbol Technologies, Llc Method, system and apparatus for mobile automation apparatus localization
US11010920B2 (en) 2018-10-05 2021-05-18 Zebra Technologies Corporation Method, system and apparatus for object detection in point clouds
US11506483B2 (en) 2018-10-05 2022-11-22 Zebra Technologies Corporation Method, system and apparatus for support structure depth determination
US11003188B2 (en) 2018-11-13 2021-05-11 Zebra Technologies Corporation Method, system and apparatus for obstacle handling in navigational path generation
US11090811B2 (en) 2018-11-13 2021-08-17 Zebra Technologies Corporation Method and apparatus for labeling of support structures
US11079240B2 (en) 2018-12-07 2021-08-03 Zebra Technologies Corporation Method, system and apparatus for adaptive particle filter localization
US11416000B2 (en) 2018-12-07 2022-08-16 Zebra Technologies Corporation Method and apparatus for navigational ray tracing
US11100303B2 (en) 2018-12-10 2021-08-24 Zebra Technologies Corporation Method, system and apparatus for auxiliary label detection and association
US11015938B2 (en) 2018-12-12 2021-05-25 Zebra Technologies Corporation Method, system and apparatus for navigational assistance
US10731970B2 (en) 2018-12-13 2020-08-04 Zebra Technologies Corporation Method, system and apparatus for support structure detection
US11592826B2 (en) 2018-12-28 2023-02-28 Zebra Technologies Corporation Method, system and apparatus for dynamic loop closure in mapping trajectories
US11151743B2 (en) 2019-06-03 2021-10-19 Zebra Technologies Corporation Method, system and apparatus for end of aisle detection
US11341663B2 (en) 2019-06-03 2022-05-24 Zebra Technologies Corporation Method, system and apparatus for detecting support structure obstructions
US11960286B2 (en) 2019-06-03 2024-04-16 Zebra Technologies Corporation Method, system and apparatus for dynamic task sequencing
US11402846B2 (en) 2019-06-03 2022-08-02 Zebra Technologies Corporation Method, system and apparatus for mitigating data capture light leakage
US11662739B2 (en) 2019-06-03 2023-05-30 Zebra Technologies Corporation Method, system and apparatus for adaptive ceiling-based localization
US11080566B2 (en) 2019-06-03 2021-08-03 Zebra Technologies Corporation Method, system and apparatus for gap detection in support structures with peg regions
US11200677B2 (en) 2019-06-03 2021-12-14 Zebra Technologies Corporation Method, system and apparatus for shelf edge detection
US11507103B2 (en) 2019-12-04 2022-11-22 Zebra Technologies Corporation Method, system and apparatus for localization-based historical obstacle handling
US11107238B2 (en) 2019-12-13 2021-08-31 Zebra Technologies Corporation Method, system and apparatus for detecting item facings
US11822333B2 (en) 2020-03-30 2023-11-21 Zebra Technologies Corporation Method, system and apparatus for data capture illumination control
US11450024B2 (en) 2020-07-17 2022-09-20 Zebra Technologies Corporation Mixed depth object detection
US11593915B2 (en) 2020-10-21 2023-02-28 Zebra Technologies Corporation Parallax-tolerant panoramic image generation
US11392891B2 (en) 2020-11-03 2022-07-19 Zebra Technologies Corporation Item placement detection and optimization in material handling systems
US11847832B2 (en) 2020-11-11 2023-12-19 Zebra Technologies Corporation Object classification for autonomous navigation systems
US11954882B2 (en) 2021-06-17 2024-04-09 Zebra Technologies Corporation Feature-based georegistration for mobile computing devices

Also Published As

Publication number Publication date
US20130321421A1 (en) 2013-12-05
US8531457B2 (en) 2013-09-10
US20110267345A9 (en) 2011-11-03
US8896602B2 (en) 2014-11-25

Similar Documents

Publication Publication Date Title
US8896602B2 (en) Apparatus and method for finding visible points in a point cloud
US6518963B1 (en) Method and apparatus for generating patches from a 3D mesh model
CN110728740B (en) virtual photogrammetry
Lawonn et al. A survey of surface‐based illustrative rendering for visualization
US8432435B2 (en) Ray image modeling for fast catadioptric light field rendering
CN111508052B (en) Rendering method and device of three-dimensional grid body
US8289318B1 (en) Determining three-dimensional shape characteristics in a two-dimensional image
US8743114B2 (en) Methods and systems to determine conservative view cell occlusion
EP1059611A1 (en) Image processing apparatus
US8294713B1 (en) Method and apparatus for illuminating objects in 3-D computer graphics
US9508191B2 (en) Optimal point density using camera proximity for point-based global illumination
US20120001911A1 (en) Method for generating shadows in an image
CN109979013B (en) Three-dimensional face mapping method and terminal equipment
US8698799B2 (en) Method and apparatus for rendering graphics using soft occlusion
CN112819941B (en) Method, apparatus, device and computer readable storage medium for rendering water surface
US20090309877A1 (en) Soft shadow rendering
US6791544B1 (en) Shadow rendering system and method
US20190378321A1 (en) Glyph Rendering in Three-Dimensional Space
US7952592B2 (en) System and method for view-dependent cutout geometry for importance-driven volume rendering
Kroes et al. Smooth probabilistic ambient occlusion for volume rendering
CN116664422A (en) Image highlight processing method and device, electronic equipment and readable storage medium
KR20100075351A (en) Method and system for rendering mobile computer graphic
WO2008065661A2 (en) Apparatus and method for finding visible points in a point cloud
US20110074777A1 (en) Method For Displaying Intersections And Expansions of Three Dimensional Volumes
US20240135645A1 (en) Appearance Capture

Legal Events

Date Code Title Description
AS Assignment

Owner name: TECHNION RESEARCH AND DEVELOPMENT FOUNDATION LTD,

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KATZ, SAGI;TAL, AYELLET;REEL/FRAME:022734/0749

Effective date: 20090514

AS Assignment

Owner name: OXFORD FINANCE CORPORATION, VIRGINIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:SUPERDIMENSION LTD.;REEL/FRAME:026572/0849

Effective date: 20100331

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 8