
Iterative analysis and processing of 3D representations of objects

Info

Publication number
WO2024052862A1
Authority
WO
WIPO (PCT)
Application number
PCT/IB2023/058888
Other languages
French (fr)
Inventor
Annie-Pier LAVALLEE
Asma Iben HOURIIA
Marie-Eve DESROCHERS
Bryan Martin
Laurent Juppe
Sherif Esmat Omar ABUELWAFA
Original Assignee
Applications Mobiles Overview Inc.
Overview Sas
Application filed by Applications Mobiles Overview Inc., Overview Sas
Publication of WO2024052862A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G06V2201/033 Recognition of patterns in medical or anatomical images of skeletal patterns

Definitions

  • the present technology relates to systems and methods for the iterative analysis and processing of a three-dimensional (3D) representation of an object.
  • Three-dimensional (“3D”) digital data may be produced by a variety of devices that involve three-dimensional scanning or sampling and/or numerical modeling.
  • 3D laser scanners generate 3D digital data.
  • a long-range laser scanner is fixed in one location and rotated to scan objects around it.
  • a short-range laser scanner is mounted on a device that moves around an object while scanning it.
  • the location of each point scanned is represented as a polar coordinate since the angle between the scanner and the object and distance from the scanner to the object are known.
  • the polar coordinates are then converted to 3D Cartesian coordinates and stored along with a corresponding intensity or color value for the data point collected by the scanner.
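As a minimal illustration of that polar-to-Cartesian conversion (a sketch only; the use of spherical coordinates, the angle names and the axis conventions are assumptions not stated in the source):

```python
import numpy as np

def polar_to_cartesian(azimuth, elevation, distance):
    """Convert a scanner measurement (angles in radians, range in the
    scanner's units) to 3D Cartesian coordinates.
    The axis conventions here are an assumption."""
    x = distance * np.cos(elevation) * np.cos(azimuth)
    y = distance * np.cos(elevation) * np.sin(azimuth)
    z = distance * np.sin(elevation)
    return np.array([x, y, z])

# A point 2 m away, 30 degrees to the left, 10 degrees up, stored with the
# intensity value returned by the scanner for that point.
record = {"xyz": polar_to_cartesian(np.radians(30.0), np.radians(10.0), 2.0),
          "intensity": 0.87}
```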
  • Other examples of devices that generate 3D digital data are depth cameras and 3D scanners, which collect a complete point set of (x, y, z) locations representing the shape of an object. Once collected, these point sets, also known as 3D point clouds, are sent to an image rendering system, which processes the point data to generate a 3D representation of the object.
  • Typical 3D point clouds obtained from acquisition devices or as the outputs of reconstruction algorithms and related processes comprise densely-packed, high resolution data containing a variety of associated distortions.
  • Such point clouds often require substantial processing, which is both time-consuming and computationally intensive, to correct the data extracted therefrom.
  • Embodiments of the present technology have been developed based on developers’ appreciation of at least one technical problem associated with the prior art solutions.
  • Characteristics of poor-quality point clouds can include, but are not limited to, noise, holes, missing or unidentified planar surfaces, a lack of or missing faces, irregular or missing topologies, poor alignment, sparse or inconsistent density of points, outliers, noise inherent to or caused by limitations of acquisition devices and sensors, artifacts due to lighting and the reflective nature of surfaces, artifacts in the scene, and reconstruction deformations.
  • In some cases, non-specialized hardware, such as a mobile device including a camera (e.g., an iPhone® mobile phone from Apple or a Galaxy® mobile phone or tablet from Samsung), may be used to acquire a 3D point cloud.
  • point clouds obtained from such non-specialized hardware may contain even more noise than point clouds obtained using specialized 3D scanning hardware, thereby requiring further noise removal processes.
  • a computer-implemented method for the iterative analysis and processing of a stored 3D representation includes accessing the stored 3D representation, applying at least a first operation to the accessed 3D representation to obtain a modified 3D representation, applying at least a second operation to the modified 3D representation to obtain at least one characteristic of the modified 3D representation, and applying the at least one characteristic obtained from the at least one modified 3D representation to the at least one 3D representation to obtain feature data of the object from the 3D representation.
  • the at least one 3D representation may include, but is not limited to, 3D point clouds, 3D templates, 3D models, 3D meshes, synthetic 3D models, non-synthetic 3D models, synthetic 3D objects, non-synthetic 3D objects, generated 3D models, 3D scans, voxels, continuous functions, and/or computer-aided design (CAD) files.
  • the at least one 3D representation may include, but is not limited to, partial 3D point clouds, 3D templates, 3D models, 3D meshes, synthetic 3D models, non-synthetic 3D models, synthetic 3D objects, non-synthetic 3D objects, generated 3D models, 3D scans, voxels, continuous functions, and/or computer-aided design (CAD) files.
  • operations are applied to the at least one 3D representation to obtain the at least one modified 3D representation.
  • the operations may include, but are not limited to, downsampling; segmentation; detecting planar surfaces; removing vertices; obtaining a bounding box; 2D plane projection; obtaining a 3D mesh; reposing; principal component analysis; closest-point registration of vertices; aligning to a template; scaling; slicing; clustering; region growing; filtering; obtaining a 3D skeleton; obtaining a 2D skeleton; obtaining a 3D spline; obtaining a 2D spline; geometric transformation; mathematical operations; constraining the planar surfaces; and/or statistical outlier removal (SOR).
  • the at least one modified 3D representation may include, but is not limited to, a down-sampled 3D point cloud, a sparse 3D point cloud, a segmented 3D point cloud, a plurality of segments of a 3D point cloud, a plurality of segments of a 3D point cloud with bounding boxes, a bounding box, a 3D mesh, a textured 3D mesh, a 2D projection, a 3D spline, a reposed 3D representation, a 3D template, a 3D representation aligned to a template, a scaled 3D representation, slices of a 3D representation, a 3D skeleton, a 2D skeleton, and/or a geometrically transformed 3D representation.
  • operations are applied to the at least one 3D representation to obtain the at least one characteristic from the at least one 3D representation.
  • the at least one operation is applied to a plurality of 3D representations to obtain the at least one modified 3D representation.
  • the at least one characteristic obtained from the at least one 3D representation may include, but are not limited to, feature data comprising information concerning color, depth, heat, two dimensions (2D), three dimensions (3D), a continuous function, 3D descriptors, spatial distribution, geometric attributes, scale invariant features, shape descriptors, motion, transformations, key points, instances and/or functions.
  • the at least one characteristic of the at least one modified 3D representation is accessed to obtain data from a second 3D representation.
  • the at least one characteristic of the at least one modified 3D representation is accessed to improve the quality of a second 3D representation.
  • the second 3D representation is a modified version of the first 3D representation.
  • the at least one characteristic obtained from the at least one modified 3D representation is utilized to obtain a plurality of characteristics from the at least one 3D representation.
  • a plurality of operations is applied to the at least one characteristic obtained from the at least one modified 3D representation to obtain a plurality of characteristics from the at least one 3D representation.
  • a plurality of operations is applied to the at least one characteristic obtained from the at least one modified 3D representation to obtain the at least one modified 3D representation from the at least one 3D representation.
  • the iterative analysis and processing of a 3D representation may comprise the down-sampling of a full or partial 3D point cloud, the detection of the at least one planar surface in the down-sampled 3D point cloud, and constraining the at least one planar surface in the original 3D point cloud with the at least one planar surface equation from the down-sampled 3D point cloud.
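A minimal sketch of this down-sample-then-constrain idea, assuming the Open3D library and illustrative file names, voxel size and thresholds (the source does not prescribe a particular library or parameters):

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")          # full-resolution cloud

# Down-sample so that plane detection is cheap.
down = pcd.voxel_down_sample(voxel_size=0.01)

# RANSAC plane segmentation yields the plane equation a*x + b*y + c*z + d = 0.
plane_model, _ = down.segment_plane(distance_threshold=0.005,
                                    ransac_n=3,
                                    num_iterations=1000)
a, b, c, d = plane_model

# Constrain the original cloud with that equation: flag every full-resolution
# vertex lying within a tolerance of the detected plane.
pts = np.asarray(pcd.points)
dist = np.abs(pts @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])
plane_idx = np.where(dist < 0.005)[0]

plane_part = pcd.select_by_index(plane_idx)                 # planar surface
object_part = pcd.select_by_index(plane_idx, invert=True)   # everything else
```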
  • the iterative analysis and processing of a 3D representation may comprise the removal of the at least one vertex from a 3D point cloud to generate a sparse 3D point cloud.
  • operations utilized in the removal of the at least one vertex may comprise, but are not limited to, Statistical Outlier Removal (SOR), filtering, region growing and/or clustering.
  • further operations may be applied on a sparse 3D point cloud to obtain the at least one specified characteristic that may be utilized to obtain data from the original 3D point cloud.
  • a bounding box may be aligned with the obtained sparse 3D point cloud. Subsequently, the aligned bounding box may be applied to the original 3D point cloud to enable the operations of cropping, isolation, and/or segmentation of an object-of-interest.
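A possible sketch of this sparse-cloud-to-bounding-box-crop sequence, again assuming Open3D; the outlier-removal settings and the choice of an oriented bounding box are illustrative assumptions:

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")                    # original cloud

# Statistical outlier removal leaves a sparser, cleaner point cloud.
sparse, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Fit a bounding box to the sparse cloud; an oriented box follows the
# dominant extent of the remaining points.
bbox = sparse.get_oriented_bounding_box()

# Apply the aligned bounding box back to the original cloud to crop /
# isolate the object-of-interest at full resolution.
object_of_interest = pcd.crop(bbox)
```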
  • the iterative analysis and processing of a 3D representation may comprise the projection of a 3D point cloud onto a 2D plane, the transformation of the 2D projection of the 3D point cloud into a 2D projected contour, obtaining the at least one characteristic from the 2D projected contour, and applying the at least one operation utilizing the obtained at least one characteristic on the original 3D point cloud to obtain a modified version of the original 3D point cloud.
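One way such a 2D projection and projected contour could be computed is sketched below with NumPy/SciPy; the use of a PCA plane for the projection and of a convex hull as the contour are illustrative assumptions, since the source does not name specific operations:

```python
import numpy as np
from scipy.spatial import ConvexHull

points = np.load("points.npy")                 # (N, 3) cloud, illustrative

# Project onto the plane spanned by the two largest principal directions.
centered = points - points.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj2d = centered @ vt[:2].T                   # (N, 2) 2D projection

# A simple projected contour: the convex hull of the 2D projection
# (an alpha shape could be substituted for strongly concave objects).
hull = ConvexHull(proj2d)
contour2d = proj2d[hull.vertices]

# Characteristics measured on the contour (extents, landmarks, ...) map back
# to the original 3D points through these indices.
contour_indices_in_3d = hull.vertices
```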
  • the iterative analysis and processing of a 3D representation may comprise obtaining a 3D mesh of a 3D point cloud, obtaining a 2D and/or 3D skeleton from the 3D mesh, obtaining the at least one spline from the 2D and/or 3D skeleton, obtaining the at least one characteristic from the at least one 2D and/or 3D spline, and applying the at least one operation utilizing the obtained at least one characteristic to the original 3D point cloud to obtain a modified version of the original 3D point cloud.
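A simplified 2D variant of the skeleton-and-spline step is sketched below, assuming scikit-image and SciPy, a pre-rasterised projection mask, and a skeleton that is roughly monotonic along x; all of these are illustrative assumptions rather than the method prescribed by the source:

```python
import numpy as np
from scipy.interpolate import splev, splprep
from skimage.morphology import skeletonize

mask = np.load("projection_mask.npy").astype(bool)  # rasterised 2D projection

# 2D skeleton of the projection.
skeleton = skeletonize(mask)
ys, xs = np.nonzero(skeleton)

# Keep one skeleton pixel per column and fit a smoothing spline through them;
# the resulting curve is a compact descriptor from which characteristics
# (length, key points, ...) can be carried back to the original 3D cloud.
xs_unique, first_idx = np.unique(xs, return_index=True)
ys_unique = ys[first_idx]

tck, _ = splprep([xs_unique, ys_unique], s=float(len(xs_unique)))
u = np.linspace(0.0, 1.0, 200)
spline_x, spline_y = splev(u, tck)
```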
  • the iterative analysis and processing of a 3D representation may comprise executing a plurality of parallel operations on a 3D representation.
  • the sequential pipeline operations of (i) obtaining a 3D mesh from a 3D point cloud, obtaining a 2D and/or 3D skeleton from the 3D mesh, and obtaining a 2D and/or 3D spline skeleton may be executed in parallel with (ii) the projection of a 3D point cloud onto a 2D plane and the transformation of the 2D projection of the 3D point cloud into a 2D projected contour; characteristics are obtained from each operation pipeline, and the at least one operation utilizing the obtained at least one characteristic is applied to the original 3D point cloud to obtain a modified version of the original 3D point cloud.
  • the iterative analysis and processing of a 3D representation may comprise the modification of a 3D point cloud, obtaining the at least one characteristic of the modified 3D point cloud, and applying the at least one operation utilizing the obtained at least one characteristic to the 3D model template to obtain a modified version of the original 3D model template.
  • the iterative analysis and processing of a 3D representation may comprise segmentation.
  • a 3D point cloud may be segmented into the at least one segment, each possessing the at least one bounding box.
  • a textured mesh may be obtained from the 3D point cloud.
  • the at least one bounding box obtained from the 3D point cloud is applied to the textured mesh to obtain the at least one object.
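A possible sketch of this segmentation-plus-textured-mesh workflow, assuming Open3D, DBSCAN clustering as the segmentation operation, and illustrative file names and parameters (none of which are specified by the source):

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.ply")                 # illustrative inputs
mesh = o3d.io.read_triangle_mesh("scene_textured.obj")

# Segment the point cloud (here: DBSCAN clustering) and take one bounding
# box per segment.
labels = np.asarray(pcd.cluster_dbscan(eps=0.02, min_points=50))

objects = []
for label in np.unique(labels[labels >= 0]):
    segment = pcd.select_by_index(np.where(labels == label)[0])
    bbox = segment.get_axis_aligned_bounding_box()
    # Apply the point-cloud bounding box to the textured mesh to extract
    # the corresponding object.
    objects.append(mesh.crop(bbox))
```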
  • the iterative analysis and processing of a 3D representation may comprise the modification of a 3D point cloud and/or the modification of a 3D model template, obtaining the at least one characteristic from the modified 3D point cloud and/or the modified 3D model template, and applying the at least one operation utilizing the obtained at least one characteristic to the 3D point cloud and/or 3D model template to obtain a modified version of the 3D point cloud and/or 3D model template.
  • the iterative analysis and processing of a 3D representation may comprise the modification of a 3D model template and a 3D point cloud to generate a reposed 3D point cloud.
  • this operation may include transferring the at least one pattern and the at least one characteristic from a 3D point cloud to a 3D model template to obtain a reposed 3D model template.
  • the at least one operation is applied to the reposed 3D model template to obtain the at least one bounding box and/or the at least one landmark, and to align the at least one bounding box and/or the at least one landmark to the reposed 3D model.
  • the at least one aligned bounding box may be applied to the original 3D point cloud to obtain a reposed 3D point cloud.
  • the at least one landmark may be applied to a reposed 3D point cloud to obtain a reposed 3D point cloud possessing the at least one landmark.
  • the iterative analysis and processing of a 3D representation may comprise the modification of a 3D model template and a 3D point cloud utilizing principal components analysis (PCA).
  • This method may comprise applying the operation of the at least one translation to align the 3D model template with the 3D point cloud, and applying the operation of the at least one rotation to further align the 3D model template with respect to the 3D point cloud.
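A compact NumPy sketch of such PCA-based alignment of a template to a point cloud: the centroids are translated onto each other and the template's principal axes are rotated onto the cloud's principal axes. Sign and ordering ambiguities of the principal axes are ignored here for brevity, so this is only an illustrative coarse alignment:

```python
import numpy as np

def pca_align(template_pts, cloud_pts):
    """Coarsely align a 3D model template (N x 3) to a 3D point cloud (M x 3)
    using PCA: translate centroids, then rotate principal axes onto each other."""
    t_mean = template_pts.mean(axis=0)
    c_mean = cloud_pts.mean(axis=0)

    # Principal axes of each point set (rows of the right-singular vectors).
    _, _, vt_template = np.linalg.svd(template_pts - t_mean, full_matrices=False)
    _, _, vt_cloud = np.linalg.svd(cloud_pts - c_mean, full_matrices=False)

    rotation = vt_cloud.T @ vt_template          # template axes -> cloud axes
    aligned = (template_pts - t_mean) @ rotation.T + c_mean
    return aligned, rotation, c_mean - t_mean
```

In practice such a coarse PCA alignment could then be refined, for example by the closest-point registration of vertices listed among the operations above.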
  • FIG. 1 depicts a functional block diagram of a device for executing a method of iterative analysis and processing of a 3D representation, in accordance with embodiments of the present disclosure
  • FIG. 2 depicts a high-level flow diagram of the method of iterative analysis and processing of a 3D representation to obtain data from a 3D representation, in accordance with the embodiments of the present disclosure
  • FIG. 3 depicts a high-level flow diagram of the method of iterative analysis and processing of a 3D representation to detect planes in a 3D representation, in accordance with the embodiments of the present disclosure
  • FIG. 4 depicts a high-level flow diagram of the method of iterative analysis and processing of a 3D representation to crop a 3D representation, in accordance with the embodiments of the present disclosure
  • FIG. 5 depicts a high-level flow diagram of the method of iterative analysis and processing of a 3D representation utilizing plurality of parallel operations on a 3D representation, in accordance with the embodiments of the present disclosure
  • FIG. 6 depicts a high-level flow diagram of the method of iterative analysis and processing of a 3D representation to obtain data from a second 3D representation, in accordance with the embodiments of the present disclosure
  • FIG. 7 depicts a high-level flow diagram of the method of iterative analysis and processing of a 3D representation to obtain objects from a 3D representation, in accordance with the embodiments of the present disclosure
  • FIG. 8 depicts a high-level flow diagram of the method of iterative analysis and processing of a first and second 3D representation to obtain data from a first and/or second 3D representation, in accordance with the embodiments of the present disclosure
  • FIG. 9 depicts a high-level flow diagram of the method of iterative analysis and processing of the at least one 3D representation to repose a 3D representation, in accordance with the embodiments of the present disclosure
  • FIG. 10 depicts a high-level flow diagram of the method of iterative analysis and processing of the at least one 3D representation to morph a 3D representation, in accordance with the embodiments of the present disclosure
  • FIG. 11A depicts a visual of a 3D model template of a human body;
  • FIG. 11B is a visual of a 3D reconstructed point cloud of a human body;
  • FIG. 11C is a visual of a 3D model template of a human body morphed and aligned to a 3D reconstructed point cloud of a human body;
  • FIG. 12A is a visual of a 3D reconstructed point cloud containing multiple objects, such as a human hand and wrist, and a planar surface;
  • FIG. 12B is a visual of a 3D reconstructed point cloud containing a human hand and wrist subsequent to the removal of a planar surface;
  • FIG. 13A is a visual of a 3D mesh containing a human hand and wrist;
  • FIG. 13B is a visual of a 3D mesh containing a human hand and wrist with a skeleton;
  • FIG. 14A is a visual of a 2D projection of a human hand and wrist with contour;
  • FIG. 14B is a visual of a 2D contour of a human hand and wrist with landmarks;
  • FIG. 15A is a visual of a 2D point cloud of a human hand and wrist with a 2D contour line, landmarks and segmentation;
  • FIG. 15B is a visual of a 2D projection of a human hand and wrist with contour and finger segments;
  • FIG. 16 is a visual of a 3D point cloud of a human finger with skeleton;
  • FIG. 17A is a visual of a 2D slice of a human finger;
  • FIG. 17B is a visual of a median-fitting circle applied on a 2D slice of a human finger;
  • FIG. 17C is a visual of an exterior contour-fitting circle applied on a 2D slice of a human finger;
  • FIG. 18A-B is a visual of a user interface utilized for finger selection.
  • FIG. 19 is a visual of a user interface displaying finger measurements.
  • A processor may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software.
  • the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared.
  • the processor may be a general purpose processor, such as a central processing unit (CPU) or a processor dedicated to a specific purpose, such as a digital signal processor (DSP).
  • The term "processor" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read-only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included.
  • modules may be represented herein as any combination of flowchart elements or other elements indicating the specified functionality and performance of process steps and/or textual description. Such modules may be executed by hardware that is expressly or implicitly shown. Moreover, it should be understood that these modules may, for example, include, without being limitative, computer program logic, computer program instructions, software, stack, firmware, hardware circuitry or any combination thereof that is configured to provide the required capabilities and specified functionality.
  • the iterative processes and related functionality described herein may include, but are not limited to: down-sampling; segmentation; detecting planar surfaces; removal of vertices; obtaining a bounding box; 2D plane projection; obtaining a 3D mesh; reposing; principal component analysis; closest-point registration of vertices; aligning to a template; scaling; slicing; clustering; region growing; filtering; obtaining a 3D skeleton; obtaining a 2D skeleton; obtaining a 3D spline; obtaining a 2D spline; geometric transformation; mathematical operations; constraining the planar surfaces; and/or statistical outlier removal (SOR).
  • FIG. 1 depicts a functional block diagram of a device 10 configured for generating and/or processing a three-dimensional (3D) point cloud, in accordance with embodiments of the present disclosure.
  • the device 10 as depicted is merely an illustrative implementation of the present technology.
  • In some cases, helpful examples of modifications to the device 10 may also be set forth below. This is done merely as an aid to understanding, and, again, not to define the scope or set forth the bounds of the present technology. These modifications are not an exhaustive list, and, as a person skilled in the art would understand, other modifications are likely possible.
  • the device 10 comprises a computing unit 100 that may receive captured images of an object to be characterized.
  • the computing unit 100 may be configured to generate the 3D point cloud as a representation of the object to be characterized.
  • the computing unit 100 is described in greater detail hereinbelow.
  • the computing unit 100 may be implemented by any of a conventional personal computer, a controller, and/or an electronic device (e.g., a server, a controller unit, a control device, a monitoring device etc.) and/or any combination thereof appropriate to the relevant task at hand.
  • the computing unit 100 comprises various hardware components including one or more single or multi-core processors collectively represented by a processor 110, a solid-state drive (SSD) 150, a random access memory (RAM) 130, a dedicated memory 140, and an input/output interface 160.
  • the computing unit 100 may be a computer specifically designed to operate a machine learning algorithm (MLA) and/or deep learning algorithms (DLA) or may be a generic computer system.
  • the computing unit 100 may be an "off the shelf" generic computer system. In some embodiments, the computing unit 100 may also be distributed amongst multiple processing systems. The computing unit 100 may also be specifically dedicated to the implementation of the present technology. As a person in the art of the present technology may appreciate, multiple variations as to how the computing unit 100 is implemented may be envisioned without departing from the scope of the present technology.
  • the communications between the various components of the computing unit 100 may be enabled by one or more internal and/or external facilities 170 (e.g., a PCI bus, universal serial bus, IEEE 1394 "FireWire" bus, SCSI bus, Serial-ATA bus, ARINC bus, etc.), to which the various hardware components are electronically coupled.
  • the input/output interface 160 may provide networking capabilities, such as wired or wireless access.
  • the input/output interface 160 may comprise a networking interface such as, but not limited to, one or more network ports, one or more network sockets, one or more network interface controllers and the like. Multiple examples of how the networking interface may be implemented will become apparent to the person skilled in the art of the present technology.
  • the networking interface may implement specific physical layer and data link layer standards such as Ethernet, Fibre Channel, Wi-Fi or Token Ring.
  • the specific physical layer and the data link layer may provide a base for a full network protocol stack, allowing communication among small groups of computers on the same local area network (LAN) and large-scale network communications through routable protocols, such as Internet Protocol (IP).
  • the solid-state drive (SSD) 150 stores program instructions suitable for being loaded into the RAM 130 and executed by the processor 110.
  • In place of the SSD 150, any suitable type of memory may be used, such as, for example, a hard disk, an optical disk, and/or removable storage media.
  • the SSD 150 stores program instructions suitable for being loaded into the RAM 130 and executed by the processor 110 for executing the generation of 3D representation of objects.
  • the program instructions may be part of a library or an application.
  • the processor 110 may be a general-purpose processor, such as a central processing unit (CPU) or a processor dedicated to a specific purpose, such as a digital signal processor (DSP).
  • the processor 110 may also rely on an accelerator 120 dedicated to expediting certain processing tasks, such as executing the methods described below.
  • the processor 110 and/or the accelerator 120 may be implemented as one or more field programmable gate arrays (FPGAs).
  • the iterative analysis and processing of a 3D representation 100 may be implemented by an imaging device or any sensing device configured to optically sense or detect certain features of an object-of-interest, such as, but not limited to, a camera, a video camera, a microscope, endoscope, etc.
  • imaging systems may be implemented as a user computing and communication-capable device, such as, but not limited to, a camera, a video camera, endoscope, a mobile device, tablet device, a microscope, server, controller unit, control device, monitoring device, etc.
  • the device 10 comprises an imaging system 18 that may be configured to capture Red-Green-Blue (RGB) images.
  • the device 10 may be referred to as the "imaging mobile device" 10.
  • the imaging system 18 may comprise image sensors such as, but not limited to, Charge-Coupled Device (CCD) or Complementary Metal Oxide Semiconductor (CMOS) sensors and/or digital cameras. Imaging system 18 may convert an optical image into an electronic or digital image and may send captured images to the computing unit 100. In the same or other embodiments, the imaging system 18 may be a single-lens camera providing RGB pictures. In some embodiments, the device 10 comprises depth sensors to acquire RGB-Depth (RGBD) pictures. Broadly speaking, any device suitable for generating a 3D point cloud may be used as the imaging system 18 including but not limited to depth sensors, 3D scanners or any other suitable devices.
  • the device 10 may further comprise an Inertial Sensing Unit (ISU) 14 configured to be used in part by the computing unit 100 to determine a position of the imaging system 18 and/or the device 10.
  • the computing unit 100 may determine a set of coordinates describing the location of the imaging system 18, and thereby the location of the device 10, in a coordinate system based on the output of the ISU 14. Generation of the coordinate system is described hereinafter.
  • the ISU 14 may comprise 3-axis accelerometer(s), 3-axis gyroscope(s), and/or magnetometer(s) and may provide velocity, orientation, and/or other position related information to the computing unit 100.
  • the ISU 14 may output measured information in synchronization with the capture of each image by the imaging system 18.
  • the ISU 14 may be used to determine the set of coordinates describing the location of the device 10 for each captured image of a series of images. Therefore, each image may be associated with a set of coordinates of the device 10 corresponding to a location of the device 10 when the corresponding image was captured.
  • information provided by the ISU 14 may be used to determine a coordinate system and/or a scale corresponding to the object to be characterized. Other approaches may be used to determine said scale, for instance by including, in the captured images, a reference object of known size placed near the object to be characterized.
  • the device 10 may include a screen or display 16 capable of rendering color images, including 3D images.
  • the display 16 may be used to display live images captured by the imaging system 18, 3D point clouds, Augmented Reality (AR) images, Graphical User Interfaces (GUIs), program outputs, etc.
  • display 16 may comprise a touchscreen display to permit users to input data via some combination of virtual keyboards, icons, menus, or other Graphical User Interfaces (GUIs).
  • display 16 may be implemented using a Liquid Crystal Display (LCD) display or a Light Emitting Diode (LED) display, such as an Organic LED (OLED) display.
  • display 16 may be remotely communicatively connected to the device 10 via a wired or a wireless connection (not shown), so that outputs of the computing unit 100 may be displayed at a location different from the location of the device 10.
  • the display 16 may be operationally coupled to, but housed separately from, other functional units and systems in device 10.
  • the device 10 comprises a mobile communication device, such as, for example, a mobile phone, handheld computer, a personal digital assistant, tablet, a network base station, a media player, a navigation device, an e-mail device, a game console, or any other device providing similar or equivalent capabilities.
  • the device 10 may comprise a memory 12 communicatively connected to the computing unit 100 and configured to store without limitation data, captured images, depth values, sets of coordinates of the device 10, 3D point clouds, and raw data provided by ISU 14 and/or the imaging system 18.
  • the memory 12 may be embedded in the device 10 as in the illustrated embodiment of Figure 1 or located in an external physical location.
  • the computing unit 100 may be configured to access a content of the memory 12 via a network (not shown) such as a Local Area Network (LAN) and/or a wireless connection such as a Wireless Local Area Network (WLAN).
  • the device 10 may also include a power system (not shown) for powering the various components.
  • the power system may include a power management system, one or more power sources, such as, for example, a battery, alternating current (AC) source, a recharging system, a power failure detection circuit, a power converter or inverter, and any other components associated with the generation, management and distribution of power in mobile or non-mobile devices.
  • the device 10 may also be suitable for generating the 3D point cloud, based on images of an object.
  • images may have been captured by the imaging system 18, such as, for example, by generating 3D point cloud images according to the teachings of the Patent Cooperation Treaty Patent Publication No. 2020/240497, the entirety of the contents of which is hereby incorporated by reference.
  • device 10 may perform the operations and steps of the methods and processes described in the present disclosure. More specifically, the device 10 may capture images of the object to be characterized, generate a 3D point cloud including data points representative of the object, and execute methods for the characterization of the 3D point cloud.
  • the device 10 is communicatively connected (e.g., via any wired or wireless communication link including, for example, 4G, LTE, Wi-Fi, or any other suitable connection) to an external computing device 23 (e.g., a server) adapted to perform some or all of the methods for characterization of the 3D point cloud.
  • operation of the computing unit 100 may be shared with the external computing device 23.
  • the device 10 accesses the 3D point cloud by retrieving information about the data points of the 3D point cloud from the RAM 130 and/or the memory 12. In other embodiments, the device 10 accesses a 3D point cloud by receiving information about the data points of the 3D point cloud from the external computing device 23.
  • FIG. 2 depicts a high-level flow diagram of computer-implemented method 200 of iterative analysis and processing of a 3D representation to obtain data from a 3D representation, in accordance with the embodiments of the present disclosure.
  • the method 200 is directed to obtaining at least one characteristic indicative of a desired feature of an object. It is to be expressly understood that the method 200 as depicted is merely an illustrative implementation of the present technology. In some cases, what are believed to be helpful examples of modifications to the method 200 may also be set forth below. This is done merely as an aid to understanding, and, again, not to define the scope or set forth the bounds of the present technology.
  • the method 200 or one or more steps thereof may be performed by a computer system, such as the computer system 100.
  • the method 200, or one or more steps thereof, may be embodied in computer-executable instructions that are stored in a computer-readable medium, such as a non-transitory mass storage device, loaded into memory and executed by a CPU. Some steps or portions of steps in the flow diagram may be omitted or changed in order.
  • the method 200 incorporates a 3D representation module 201, an operation module 202 to obtain a modified 3D representation 203, an operations module 204 to obtain at least one characteristic from a modified 3D representation 203, an operation module 205 to apply the at least one characteristic obtained from a modified 3D representation 203 to a 3D representation 201, and data module 206 to contain 3D representation 201 data obtained by operation module 205.
  • the method of iterative analysis and processing of a 3D representation 200 may be applied to partial 3D point clouds, 3D templates, 3D models, 3D meshes, synthetic 3D models, non-synthetic 3D models, synthetic 3D objects, non-synthetic 3D objects, generated 3D models, 3D scans, voxels, continuous functions, and/or computer-aided design (CAD) files.
  • synthetic objects may comprise, for example, synthetic 3D models, CAD, 3D models acquired from industrial oriented 3D software, or medical oriented 3D software, and/or non-specialized 3D software, 3D models generated by processes such as RGB photogrammetry, RGB-D photogrammetry, and/or other reconstruction techniques from real objects.
  • non-synthetic objects may comprise any 3D point cloud generated by operations such as: RGB based-sensors; photogrammetry techniques such as Colmap™, Visual SFM™, Open3D™; computational depth-based techniques such as machine learning-based depth, and disparity-based depth utilizing stereo cameras or multiple positions of a single camera; and RGB-D sensors, such as LiDAR and/or depth cameras, etc.
  • a "non-synthetic object” may refer to any object in the real-world.
  • Non-synthetic objects are not synthesized using any computer rendering techniques; rather, they are scanned or captured by any non-limiting means, such as using a suitable sensor (e.g., a camera, optical sensor, depth sensor or the like), to generate or reconstruct a 3D point cloud representation of the non-synthetic 3D object using any "off the shelf" technique, including but not limited to photogrammetry, machine learning based techniques, depth maps or the like.
  • non-synthetic 3D objects may be any real-world objects such as a computer screen, a table, a chair, a coffee mug, a mechanical component on an assembly line, or any type of inanimate object or entity.
  • a non-synthetic 3D object may also be an animate entity such as an animal, a plant, a human entity or a portion thereof.
  • the 3D representation may be acquired or may have been previously acquired and is accessed from a memory or storage device upon executing a de-noising process.
  • a 3D representation may refer to a simple 3D representation of an object where the vertices are not necessarily connected to each other. If they are not connected to each other, the information contained in this kind of representation is the coordinates (e.g., x, y, z in the case of a Cartesian coordinate system) of each vertex, and its color (e.g., components in the Red-Green-Blue color space).
  • the 3D point cloud reconstruction may be the result of 3D scanning, and a common format for storing such point clouds is the Polygon File Format (PLY). Point cloud reconstructions are seldom used directly by users, as they typically do not provide a realistic representation of an object, but rather a set of 3D points without relations to each other aside from their positions and colors.
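As a small illustration of such an unconnected-vertex representation, a PLY point cloud can be loaded and its per-vertex coordinates and colors accessed as plain arrays (Open3D is assumed here, and the file name is illustrative):

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("reconstruction.ply")   # illustrative path

xyz = np.asarray(pcd.points)    # (N, 3) vertex coordinates
rgb = np.asarray(pcd.colors)    # (N, 3) per-vertex colors in [0, 1]
print(f"loaded {len(xyz)} unconnected vertices")
```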
  • acquiring a 3D point cloud may involve using a plurality of depth maps, each depth map corresponding to an image within a stream of images acquired using, e.g., the imaging system 18 as shown in FIG. 1.
  • Generating each depth map may include executing a machine learning algorithm (MLA), which may include a deep learning algorithm (DLA), based on the image and a second image of the stream of images.
  • generating each depth map may include utilizing depth information provided by one or more depth sensor(s) integrated within the imaging system.
  • generating each depth map may include utilizing at least two images at the same position and coordinates, within the imaging system's coordinate system, using at least two lenses (e.g., using a dual or triple lens Red-Green-Blue (RGB) imaging system).
  • a plurality of fragments of an object’s 3D point cloud can be generated based on depth map information.
  • Each fragment may be a 3D point cloud generated based on at least one image, the corresponding depth maps of the at least one image, and the corresponding positions of the imaging system of the at least one image. These fragments can be merged to form a single 3D point cloud.
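A minimal sketch of back-projecting one depth map into a point-cloud fragment with a pinhole camera model; the intrinsics fx, fy, cx, cy and the 4x4 camera pose are assumed to be known, and this is an illustration rather than the exact method prescribed by the source:

```python
import numpy as np

def depth_to_fragment(depth, fx, fy, cx, cy, pose):
    """Back-project one depth map (in metres) to a 3D point-cloud fragment
    using a pinhole camera model, then move it to the world frame with the
    4x4 camera pose."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts = pts[pts[:, 2] > 0]                    # drop invalid (zero) depths
    return (pose @ pts.T).T[:, :3]

# Fragments from successive images are simply concatenated here for brevity;
# in practice they would be registered and de-duplicated before merging.
# cloud = np.concatenate([depth_to_fragment(d, fx, fy, cx, cy, T)
#                         for d, T in zip(depth_maps, poses)], axis=0)
```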
  • the method 200 begins at operation module 201, wherein a 3D representation is accessed.
  • the 3D representation 201 may include, but is not limited to, a complete or partial 3D point cloud, a 3D template, a 3D model, a 3D mesh, a synthetic 3D model, a non-synthetic 3D model, a synthetic 3D object, a non-synthetic 3D object, a generated 3D model, a 3D scan, voxels, continuous functions, and/or a computer-aided design (CAD) file.
  • the method 200 continues at operation module 202, wherein at least one operation is applied to obtain a modified 3D representation 203 of the 3D representation 201.
  • the at least one operation of 202 to obtain a modified 3D representation 203 may include, but is not limited to, down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
  • the modified 3D representation 203 may include, but is not limited to, a down-sampled 3D point cloud, a sparse 3D point cloud, a segmented 3D point cloud, a plurality of segments of a 3D point cloud, a plurality of segments of a 3D point cloud with bounding boxes, a bounding box, a 3D mesh, a textured 3D mesh, a 2D projection, a 3D spline, a reposed 3D representation, a 3D template, a 3D representation aligned to a template, a scaled 3D representation, slices of a 3D representation, a 3D skeleton, a 2D skeleton, and/or a geometrically transformed 3D representation.
  • the method 200 continues at operation module 204, wherein at least one operation is applied to the modified 3D representation 203 to obtain at least one characteristic indicative of a desired feature of an object of the modified 3D representation 203.
  • the at least one operation of 204 to obtain the at least one characteristic of the modified 3D representation 203 may include, but is not limited to, down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
  • the at least one characteristic obtained from the modified 3D representation 203 may include, but is not limited to, feature data comprising information concerning color, depth, heat, two dimensions (2D), three dimensions (3D), a continuous function, 3D descriptors, spatial distribution, geometric attributes, scale invariant features, shape descriptors, motion, transformations, key points, instances and/or functions.
  • FIG. 3 depicts a high-level flow diagram of the method 300 of iterative analysis and processing of a 3D representation to detect planes in a 3D representation, in accordance with the embodiments of the present disclosure.
  • the method 300 is directed to detecting at least one planar surface related to desired feature object data. As shown, the method 300 commences at module 301, where a 3D point cloud 301 in which planar surfaces are to be detected is accessed.
  • the method 300 continues at operation module 302, in which the 3D point cloud 301 is down-sampled to obtain a modified 3D point cloud 303.
  • the modified 3D point cloud 303 has a reduced density relative to the 3D point cloud 301, which facilitates the detection of salient desired feature object data.
  • the method 300 then proceeds to operation module 304, in which the at least one operation is applied to the modified 3D representation 303 to detect at least one planar surface related to the desired feature object data in the modified 3D point cloud 303.
  • the at least one operation may comprise down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
  • the method 300 continues at operation module 305, in which at least one planar surface equation is obtained from the modified 3D point cloud 303.
  • the at least one planar surface equation obtained from the modified 3D point cloud 303 is applied to the 3D point cloud 301 to detect the at least one planar surface related to desired feature object data, or to remove irrelevant or undesired planar surfaces of the at least one planar surface in the 3D point cloud 301.
  • the method 300 concludes at the 3D point cloud 307, which includes the 3D point cloud 301 with the at least one planar surface data obtained by operation module 306.
  • FIG. 4 depicts a high-level flow diagram of the method 400 of iterative analysis and processing of a 3D representation to crop a 3D representation based on a sparse-based operation, in accordance with the embodiments of the present disclosure.
  • the method 400 commences at operation module 402, wherein a plurality of vertices related to the desired feature object data is removed from 3D point cloud 401 to obtain a modified 3D point cloud 403.
  • the method 400 continues at operation module 404, in which at least one bounding box indicative of a periphery related to the desired feature object data is obtained from the modified 3D representation 403. Further, at operation module 404, the at least one bounding box is aligned with the modified 3D representation 403.
  • the method 400 proceeds to operation module 405, in which the aligned bounding box obtained from modified 3D point cloud 403 is applied to 3D point cloud 401 to obtain cropped 3D point cloud 406.
  • FIG. 5 depicts a high-level flow diagram of the method 500 of iterative analysis and processing of a 3D representation utilizing plurality of parallel operations on a 3D representation, in accordance with the embodiments of the present disclosure.
  • a first implementation of method 500 begins at operation module 502, in which a 3D point cloud 501 is projected onto a 2D plane representation to obtain a 2D projection of 3D point cloud 501.
  • the method 500 continues at operation module 503, in which at least one transformation is applied to the 2D projection of the 3D point cloud 501 to obtain a 2D contour of the 3D point cloud 501.
  • the applied transformation may comprise, for example, a geometrical transformation.
  • the method 500 continues at operation module 504, wherein the 2D contour of the 3D point cloud 501 is applied to the 3D point cloud 501 to obtain at least one characteristic indicative of a desired feature of an object of the 3D point cloud 501.
  • method 500 stores the at least one characteristic of 3D point cloud 501 in data module 509.
  • a second implementation of method 500 begins at operation module 505, wherein the at least one operation is applied to 3D point cloud 501 to obtain a 2D mesh and/or 3D mesh of the 3D point cloud 501.
  • the method 500 continues at operation module 506, wherein at least one operation is applied to a 2D and/or 3D mesh of 3D point cloud 501 to obtain a 2D skeleton representation and/or 3D skeletal representation of the 3D point cloud 501.
  • the details of the at least one operation have been described above.
  • the method 500 continues at operation module 507, in which the at least one operation is applied to 2D and/or 3D skeletal representation of the 3D point cloud 501 to obtain a 2D and/or 3D spline skeleton of 3D point cloud 501.
  • the method 500 continues at operation module 508, wherein the 2D and/or 3D spline skeleton of the 3D point cloud 501 is applied to the 3D point cloud 501 to obtain at least one characteristic indicative of a desired feature of an object of the 3D point cloud 501.
  • the at least one characteristic of 3D point cloud 501 obtained in 508 is then stored in data module 509.
  • the first and second implementations are conducted in parallel to obtain data from 3D point cloud 501.
  • the data obtained from 3D point cloud 501 may include, but is not limited to, the at least one 2D and/or 3D point cloud segment, the at least one cropped 2D and/or 3D point cloud, the at least one area-of-interest of a 2D and/or 3D point cloud, the at least one fragment of a 2D and/or 3D point cloud, the at least one 2D projection, the at least one 2D contour, the at least one 2D and/or 3D mesh, the at least one 2D and/or 3D skeleton, the at least one 2D slice, the at least one landmark, the at least one 2D and/or 3D spline, the at least one 2D measurement, and/or the at least one geometrical transformation.
  • the first and second implementations are conducted sequentially to obtain data from 3D point cloud 501.
  • the data obtained from 3D point cloud 501 may include, but is not limited to, the at least one 2D and/or 3D point cloud segment, the at least one cropped 2D and/or 3D point cloud, the at least one area-of-interest of a 2D and/or 3D point cloud, the at least one fragment of a 2D and/or 3D point cloud, the at least one 2D projection, the at least one 2D contour, the at least one 2D and/or 3D mesh, the at least one 2D and/or 3D skeleton, the at least one 2D slice, the at least one landmark, the at least one 2D and/or 3D spline, the at least one 2D measurement, and/or the at least one geometrical transformation.
  • FIG. 6 depicts a high-level flow diagram of method 600 of iterative analysis and processing of a first 3D representation to obtain data from a second 3D representation, in accordance with the embodiments of the present disclosure.
  • the method 600 of iterative analysis and processing of a 3D representation incorporates a first 3D representation 601, an operation module 602 to obtain a modified 3D representation 603, an operation module 604 to obtain the at least one characteristic from a modified 3D representation 603, an operation module 605 to apply the at least one characteristic obtained from a modified 3D representation 603 to a second 3D representation 606, and a data module 607 containing data obtained from the second 3D representation 606.
  • the elements of the method of iterative analysis and processing of a 3D representation 600 will be described in detail below.
  • the method 600 begins at module 601, wherein a 3D representation 601 is accessed.
  • the 3D representation 601 and/or the second 3D representation 606 may include, but is not limited to, a complete or partial 3D point cloud, a 3D template, a 3D model, a 3D mesh, a synthetic 3D model, a non-synthetic 3D model, a synthetic 3D object, a non-synthetic 3D object, a generated 3D model, a 3D scan, voxels, continuous functions, and/or computer-aided design (CAD) file data.
  • the method 600 continues at operation module 602, wherein at least one operation is applied to obtain a modified 3D representation 603 of the 3D representation 601.
  • the at least one operation of 602 to obtain a modified 3D representation 603 may include, but is not limited to, down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
  • the modified 3D representation 603 may include, but is not limited to, a down-sampled 3D point cloud, a sparse 3D point cloud, a segmented 3D point cloud, a plurality of segments of a 3D point cloud, a plurality of segments of a 3D point cloud with bounding boxes, a bounding box, a 3D mesh, a textured 3D mesh, a 2D projection, a 3D spline, a reposed 3D representation, a 3D template, a 3D representation aligned to a template, a scaled 3D representation, slices of a 3D representation, a 3D skeleton, a 2D skeleton, and/or a geometrically transformed 3D representation.
  • the method 600 continues at operation module 604, in which the at least one operation is applied to the modified 3D representation 603 to obtain at least one characteristic indicative of a desired feature of an object of the modified 3D representation 603.
  • the obtaining of the at least one characteristic of the modified 3D representation 603 may include, but is not limited to, down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
  • the at least one characteristic obtained from the modified 3D representation 603 may include, but is not limited to, feature data comprising information concerning color, depth, heat, two dimensions (2D), three dimensions (3D), a continuous function, 3D descriptors, spatial distribution, geometric attributes, scale invariant features, shape descriptors, motion, transformations, key points, instances and/or functions.
  • the method 600 continues at operation module 605, wherein the at least one characteristic obtained from the modified 3D representation 603 is applied to a second 3D representation 606 to obtain data from the 3D representation 606.
  • the method 600 concludes at data module 607, wherein the 3D representation 606 data obtained by operation module 605 is stored.
  • FIG. 7 depicts a high-level flow diagram of method 700 of iterative analysis and processing of a 3D representation to obtain objects from a 3D representation, in accordance with the embodiments of the present disclosure.
  • the method 700 is directed to obtaining a plurality of modified 3D representations, such as a modified 3D point cloud and a 3D textured mesh.
  • the method 700 begins at operation module 702, wherein at least one operation is applied to a 3D point cloud 701 to obtain a modified 3D point cloud 703.
  • the method 700 continues at operation module 704, wherein the at least one operation is applied to the modified 3D point cloud 703 to obtain at least one bounding box indicative of a periphery related to the desired feature object data in the modified 3D point cloud 703.
  • the method 700 proceeds to operation module 707, in which the at least one bounding box obtained from modified 3D point cloud 703 is applied to 3D textured mesh 706 to obtain the at least one object from 3D textured mesh 706.
  • the method 700 concludes at data module 708, wherein the at least one object obtained from 3D mesh 706 is stored.
  • FIG. 8 depicts a high-level flow diagram of method 800 of iterative analysis and processing of a first and second 3D representation to obtain data from a first and/or second 3D representation, in accordance with the embodiments of the present disclosure.
  • the method 800 is directed to obtaining at least one characteristic from a plurality of 3D representations, such as a first 3D representation 801 and a second 3D representation 802, and subsequent operations are utilized to obtain data from a first 3D representation 801 and/or a second 3D representation 802.
  • the method 800 of iterative analysis and processing of a 3D representation incorporates a first 3D representation 801, a second 3D representation 802, an operation module 803 to obtain the at least one characteristic from 801 and/or 802, an operation module 804 to obtain a modified 3D representation 805, an operation module 806 to obtain the at least one characteristic from a modified 3D representation 805, an operation module 807 to apply the at least one characteristic obtained from a modified 3D representation 805 to a first 3D representation 801 and/or a second 3D representation 802, and data module 808 to contain data obtained from a first 3D representation 801 and/or a second 3D representation 802.
  • the elements of the method of iterative analysis and processing of a 3D representation 800 will be described in detail below.
  • the 3D representation 801 and/or 802 may include, but is not limited to, a complete or partial 3D point cloud, 3D template, 3D model, 3D mesh, synthetic 3D model, non-synthetic 3D model, synthetic 3D object, non-synthetic 3D object, generated 3D model, 3D scan, voxels, continuous functions, and/or computer-aided design (CAD) file.
  • the method 800 begins at operation module 803, in which a first 3D representation 801 and/or a second 3D representation 802 are accessed to obtain the at least one characteristic indicative of a desired feature of an object from 801 and/or 802.
  • the at least one operation of 803 to obtain the at least one characteristic from the first 3D representation 801 and/or the second 3D representation 802 may include, but is not limited to, down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
  • the at least one characteristic obtained from the first 3D representation 801 and/or the second 3D representation 802 may include, but is not limited to, feature data comprising information concerning color, depth, heat, two dimensions (2D), three dimensions (3D), a continuous function, 3D descriptors, spatial distribution, geometric attributes, scale invariant features, shape descriptors, motion, transformations, key points, instances and/or functions.
  • the method 800 continues at operation module 804, wherein the at least one operation is applied to obtain a modified 3D representation 805 from the first 3D representation 801 and/or the second 3D representation 802.
  • the modified 3D representation 805 may include, but is not limited to, a down-sampled 3D point cloud, a sparse 3D point cloud, a segmented 3D point cloud, a plurality of segments of a 3D point cloud, a plurality of segments of a 3D point cloud segments with bounding boxes, a bounding box, a 3D mesh, a textured 3D mesh, a 2D projection, a 3D spline, a reposed 3D representation, a 3D template, a 3D representation aligned to a template, a scaled 3D representation, slices of a 3D representation, a 3D skeleton, a 2D skeleton, a 3D spline and/or a geometrically transformed 3D representation.
  • the method 800 continues at operation module 806, wherein the at least one operation is applied to the modified 3D representation 805 to obtain the at least one characteristic of the modified 3D representation 805.
  • the at least one operation of 806 to obtain the at least one characteristic of the modified 3D representation 805 may include, but is not limited to, down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
  • the method 800 continues at operation module 807, wherein the at least one characteristic obtained from the modified 3D representation 805 is applied to 3D representation 801 and/or 3D representation 802 to obtain data from 3D representation 801 and/or 3D representation 802.
  • the method 800 concludes at data module 808, in which the data obtained from 3D representation 801 and/or 3D representation 802 by operation module 807 is stored.
  • FIG. 9 depicts a high-level flow diagram of method 900 of iterative analysis and processing of the at least one 3D representation to repose a 3D representation, in accordance with the embodiments of the present disclosure.
  • the method 900 is directed to obtaining a reposed 3D point cloud.
  • method 900 begins at operation module 903, wherein a 3D model template 901 and a 3D point cloud 902 are accessed to obtain at least one characteristic from 901 and/or 902.
  • the method 900 continues at operation module 904, which transfers the at least one characteristic and related patterns to a 3D model template 901 to obtain a reposed 3D model template 905.
  • the method 900 continues at operation module 906, wherein at least one operation is applied to the reposed 3D model template 905 to align the at least one bounding box to the reposed 3D model template 905. Further, the at least one operation is applied to the reposed 3D model template 905 to apply the at least one landmark to the reposed 3D model template 905.
  • the method 900 continues at operation module 907, wherein the at least one bounding box obtained in 906 is applied to a 3D point cloud 902 to obtain the reposed 3D point cloud 908. Further, the at least one operation is applied to the reposed 3D point cloud 908 to apply the at least one landmark obtained in 906 to the reposed 3D point cloud 908.
  • FIG. 10 depicts a high-level flow diagram of method 1000 of iterative analysis and processing of the at least one 3D representation to morph a 3D representation, in accordance with the embodiments of the present disclosure.
  • the method 1000 begins at operation module 1003, wherein at least one principal components analysis operation is applied to a 3D model template 1001 and/or a 3D point cloud 1002 to obtain an aligned 3D model template 1004.
  • the at least one principal components analysis operation applied in operation module 1003 may include, but is not limited to, translation, rotation, and/or registration.
  • the method 1000 continues at operation module 1005, wherein the at least one operation of morphing is applied to align the vertices of an aligned 3D model template 1004 to the position of the vertices of a 3D point cloud 1002 to obtain a morphed 3D model template 1006.
  • the disclosed embodiments of the iterative analysis and processing of a 3D representation may be utilized to obtain user body information containing user-specific representations, measurements, and characteristics of the user body and/or the at least one user body part.
  • the obtained user body information may be utilized in the at least one transaction between a user and a vendor.
  • the obtained user body information utilized in the at least one transaction between a user and a vendor may be incorporated to execute an electronic purchasing transaction between a user and a vendor on a mobile communication device for an item relating to the user’s physical characteristics, in which the method comprises executing instructions, by a processor, initiated by user requests; and establishing wireless communications, via a communication interface, with a potential vendor and external entities.
  • FIGs. 11A-C provide illustrative examples in which method 1000 is utilized to align and morph a vendor-provided 3D model template 1101 to a 3D point cloud of a user body 1102 to obtain an aligned and morphed 3D model template 1103.
  • the aligned and morphed 3D model template 1103 may be utilized by the vendor to provide accurate sizing and fitting for items of apparel available to the user for purchase.
  • FIGs. 12-19 provide an illustrative example of the at least one embodiment of the iterative analysis and processing of a 3D representation, wherein user body information containing user-specific representations, measurements, and characteristics of the user body and/or the at least one user body part is utilized for executing an electronic purchasing transaction between a user and a vendor on a mobile communication device for an item relating to the user’s physical characteristics.
  • the transaction between a user and a vendor begins, wherein a mobile communication device is utilized to obtain a 3D representation of the user body and/or the at least one user body part.
  • the transaction between a user and a vendor further continues, wherein the method 300 is utilized on a 3D point cloud 1201 containing a planar surface 1202 and a 3D object-of-interest 1203 as depicted in FIG. 12A.
  • the object-of-interest is a human hand and wrist.
  • the method 300 may be utilized to remove a planar surface 1202 in 3D point cloud 1201 to obtain a 3D point cloud object-of-interest 1203 as depicted in FIG. 12B.
  • the transaction between a user and a vendor may further continue with method 500, wherein (i) a 3D mesh of object-of-interest 1301 is obtained from 3D point cloud object-of-interest 1203 as depicted in FIG. 13A; (ii) a skeleton 1302 is applied to 3D mesh 1301 as depicted in FIG. 13B; (iii) a 2D projection with contour 1401 is obtained from 3D mesh 1301 as depicted in FIG. 14A; (iv) a 2D contour with landmarks 1402 is obtained from a 2D projection with contour 1401 as depicted in FIG. 14B.
  • FIG. 15B illustrates a 2D projection with finger segments obtained by method 500 from a 3D point cloud object-of-interest 1203.
  • the method 500 applies the at least one 2D finger segment obtained in (v) to a 3D object-of-interest 1203 to obtain a 3D object-of-interest 1601.
  • the 3D object-of-interest 1601 is a human finger.
  • the method 500 further applies skeleton 1602 to 3D object-of-interest 1601.
  • the method 500 continues with reference to FIG. 17, wherein the at least one 2D segment 1701 is obtained from a 3D object-of-interest 1601 and skeleton 1602 as depicted in FIG. 17A.
  • the method 500 further continues, wherein a median-fitting circle 1702 is obtained from 2D segment 1701 as depicted in FIG. 17B.
  • the method 500 is further utilized to refine median-fitting circle 1702 to obtain exterior contour-fitting circle 1703 as depicted in FIG. 17C (see the circle-fitting sketch following this list).
  • the exterior contour-fitting circle 1703 is labeled the best-fit match referenced to the at least one finger obtained from 3D object-of-interest 1203.
  • FIG. 18 illustrates an embodiment of a user interface for the user purchase of an article of jewelry.
  • the article of jewelry may be a ring.
  • FIGs. 18A-B depict a user interface 1801 containing a 3D representation of the user human hand and wrist obtained from a 3D object-of-interest 1203.
  • the user may select the finger segment 1802, which may be highlighted to identify the selected finger.
  • the at least one finger segment 1802 corresponds to the at least one 3D object-of- interest 1601 obtained from a 3D object-of-interest 1203.
  • the user may be presented with an embodiment of a user interface displaying the at least one best-fit match measure for the finger-of-interest.
  • the best-fit measure corresponds to exterior contour-fitting circle 1703 obtained from a 3D object-of-interest 1203.
  • the best-fit measure for the finger of interest is presented in a plurality of common measurement metrics 1901.
  • a dropdown window is available for the user to select a specified ring model for purchase.
  • a quality operation 1903 may be available for user assessment of the best-fit measure displayed in 1901.
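To illustrate the circle-fitting step referenced above (the median-fitting circle 1702 and the exterior contour-fitting circle 1703), the following is a minimal Python sketch using NumPy and SciPy; the input file name and the median-based refinement rule are hypothetical stand-ins for whatever slice data and refinement an embodiment actually uses.

```python
# Sketch: least-squares circle fits on a 2D slice of a finger (hypothetical input).
import numpy as np
from scipy import optimize

slice_pts = np.loadtxt("finger_slice_2d.txt")        # (x, y) samples of one 2D slice
x, y = slice_pts[:, 0], slice_pts[:, 1]

def fit_circle(x, y):
    """Least-squares circle fit; returns (cx, cy, r)."""
    def residuals(p):
        cx, cy, r = p
        return np.hypot(x - cx, y - cy) - r
    p0 = (x.mean(), y.mean(), np.hypot(x - x.mean(), y - y.mean()).mean())
    return optimize.least_squares(residuals, p0).x

# Median-fitting circle: fit against all samples of the slice.
cx, cy, r_median = fit_circle(x, y)

# Exterior contour-fitting circle: refine by re-fitting against the outermost
# samples only (a stand-in for the refinement the embodiments perform).
d = np.hypot(x - cx, y - cy)
outer = d >= np.median(d)
cx_e, cy_e, r_exterior = fit_circle(x[outer], y[outer])

print("best-fit ring inner diameter (same units as the slice):", 2.0 * r_exterior)
```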


Abstract

A computer-implemented method and system for the iterative analysis and processing of a 3D representation of an object is presented. The method includes accessing the stored 3D representation, applying at least a first operation to the accessed 3D representation to obtain a modified 3D representation, applying at least a second operation to the modified 3D representation to obtain at least one characteristic of the modified 3D representation, and applying the at least one characteristic obtained from the at least one modified 3D representation to the at least one 3D representation to obtain feature data of the object from the 3D representation. The at least one characteristic comprising at least one of: feature data comprising information concerning color, depth, heat, two dimensions, three dimensions, a continuous function, 3D descriptors, spatial distribution, geometric attributes, scale invariant features, shape descriptors, motion, transformations, key points, instances and/or functions.

Description

ITERATIVE ANALYSIS AND PROCESSING OF 3D REPRESENTATIONS OF OBJECTS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims priority from European Patent Application No. 22194722.9, filed on September 8, 2022, the disclosure of which is incorporated by reference herein in its entirety.
FIELD OF TECHNOLOGY
[0002] The present technology relates to systems and methods for the iterative analysis and processing of a three-dimensional (3D) representation of an object.
BACKGROUND
[0003] Three-dimensional ("3D") digital data may be produced by a variety of devices that involve three-dimensional scanning or sampling and/or numerical modeling. In one example, 3D laser scanners generate 3D digital data. A long-range laser scanner is fixed in one location and rotated to scan objects around it. Alternatively, a short-range laser scanner is mounted on a device that moves around an object while scanning it. In any of the scenarios, the location of each point scanned is represented as a polar coordinate since the angle between the scanner and the object and distance from the scanner to the object are known. The polar coordinates are then converted to 3D Cartesian coordinates and stored along with a corresponding intensity or color value for the data point collected by the scanner.
[0004] Other examples of devices that generate 3D digital data are depth cameras and 3D scanners, which collect a complete point set of (x, y, z) locations that represent the shape of an object. Once collected, these point sets, also known as 3D point clouds, are sent to an image rendering system, which then processes the point data to generate a 3D representation of the object.
[0005] Typical 3D point clouds obtained from acquisition devices or as the outputs of reconstruction algorithms and related processes comprise densely-packed, high resolution data containing a variety of associated distortions. Such point clouds often require substantial processing, both time-consuming and computationally intensive, to correct the data extracted therefrom. To this end, there is an interest in developing efficient systems and methods that provide precise, quality 3D representations extracted from 3D point clouds.
SUMMARY
[0006] Embodiments of the present technology have been developed based on developers’ appreciation of at least one technical problem associated with the prior art solutions. For example, 3D point clouds obtained from acquisition devices or as the outputs of reconstruction algorithms and related processes comprise densely-packed, high resolution data containing a variety of associated distortions. Such point clouds often require substantial processing, both time-consuming and computationally intensive, to correct the data extracted therefrom. Characteristics of poor-quality point clouds can include, but are not limited to, noise, holes, missing or unidentified planar surfaces, a lack of/or missing faces, irregular or missing topologies, poor alignment, sparse or inconsistent density of points, outliers, noise inherent or caused by limitations of acquisition devices and sensors, artifacts due to lighting and the reflective nature of surfaces, artifacts in the scene, and reconstruction deformations.
[0007] Additionally, it is now possible to use non-specialized hardware, such as a mobile device including a camera (e.g., an iPhone® mobile phone from Apple or a Galaxy® mobile phone or tablet from Samsung) to acquire a 3D point cloud. However, point clouds obtained from such non-specialized hardware may contain even more noise than point clouds obtained using specialized 3D scanning hardware, thereby requiring further noise removal processes.
[0008] With these fundamentals in place, we will now consider some non-limiting examples to illustrate various embodiments of aspects of the present technology.
[0009] In accordance with a first aspect of the present technology, there is provided a computer-implemented method for the iterative analysis and processing of a stored 3D representation. The computer-implemented method includes accessing the stored 3D representation, applying at least a first operation to the accessed 3D representation to obtain a modified 3D representation, applying at least a second operation to the modified 3D representation to obtain at least one characteristic of the modified 3D representation, and applying the at least one characteristic obtained from the at least one modified 3D representation to the at least one 3D representation to obtain feature data of the object from the 3D representation. [00010] In some embodiments, the at least one 3D representation may include, but is not limited to, 3D point clouds, 3D templates, 3D models, 3D meshes, synthetic 3D models, non-synthetic 3D models, synthetic 3D objects, non-synthetic 3D objects, generated 3D models, 3D scans, voxels, continuous functions, and/or Computer-aided design (CAD) files.
[00011] In some embodiments, the at least one 3D representation may include, but is not limited to, partial 3D point clouds, 3D templates, 3D models, 3D meshes, synthetic 3D models, non-synthetic 3D models, synthetic 3D objects, non-synthetic 3D objects, generated 3D models, 3D scans, voxels, continuous functions, and/or Computer-aided design (CAD) files.
[00012] In some embodiments, operations are applied to the at least one 3D representation to obtain the at least one modified 3D representation.
[00013] In some embodiments, the operations may include, but are not limited to, downsampling; segmentation; detecting planar surfaces; removing vertices; obtaining a bounding box; 2D plane projection; obtaining a 3D mesh; reposing; principal component analysis; closest-point registration of vertices; aligning to a template; scaling; slicing; clustering; region growing; filtering; obtaining a 3D skeleton; obtaining a 2D skeleton; obtaining a 3D spline; obtaining a 2D spline; geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
[00014] In some embodiments, the at least one modified 3D representation may include, but is not limited to, a down-sampled 3D point cloud, a sparse 3D point cloud, a segmented 3D point cloud, a plurality of segments of a 3D point cloud, a plurality of segments of a 3D point cloud segments with bounding boxes, a bounding box, a 3D mesh, a textured 3D mesh, a 2D projection, a 3D spline, a reposed 3D representation, a 3D template, a 3D representation aligned to a template, a scaled 3D representation, slices of a 3D representation, a 3D skeleton, a 2D skeleton, a 3D spline and/or a geometrically transformed 3D representation.
[00015] In some embodiments, operations are applied to the at least one 3D representation to obtain the at least one characteristic from the at least one 3D representation.
[00016] In some embodiments, the at least one operation is applied to a plurality of 3D representations to obtain the at least one modified 3D representation. [00017] In some embodiments, the at least one characteristic obtained from the at least one 3D representation may include, but is not limited to, feature data comprising information concerning color, depth, heat, two dimensions (2D), three dimensions (3D), a continuous function, 3D descriptors, spatial distribution, geometric attributes, scale invariant features, shape descriptors, motion, transformations, key points, instances and/or functions.
[00018] In some embodiments, the at least one characteristic of the at least one modified 3D representation is accessed to obtain data from a second 3D representation.
[00019] In some embodiments, the at least one characteristic of the at least one modified 3D representation is accessed to improve the quality of a second 3D representation.
[00020] In some embodiments, the second 3D representation is a modified version of the first 3D representation.
[00021] In some embodiments, the at least one characteristic obtained from the at least one modified 3D representation is utilized to obtain a plurality of characteristics from the at least one 3D representation.
[00022] In some embodiments, a plurality of operations is applied to the at least one characteristic obtained from the at least one modified 3D representation to obtain a plurality of characteristics from the at least one 3D representation.
[00023] In some embodiments, a plurality of operations is applied to the at least one characteristic obtained from the at least one modified 3D representation to obtain the at least one modified 3D representation from the at least one 3D representation.
[00024] In some embodiments, the iterative analysis and processing of a 3D representation may comprise the down-sampling of a full or partial 3D point cloud, the detection of the at least one planar surface in the down-sampled 3D point cloud, and constraining the at least one planar surface in the original 3D point cloud with the at least one planar equation from the down-sampled 3D point cloud. [00025] In some embodiments, the iterative analysis and processing of a 3D representation may comprise the removal of the at least one vertex from a 3D point cloud to generate a sparse 3D point cloud.
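As a concrete illustration of paragraph [00024], the following is a minimal sketch assuming the open-source Open3D library; the input file name, voxel size and distance tolerances are placeholder values chosen for illustration, not parameters prescribed by the present technology.

```python
# Sketch: detect a plane on a down-sampled cloud, then constrain the original
# cloud with the resulting plane equation (illustrative tolerances only).
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")          # full or partial 3D point cloud
down = pcd.voxel_down_sample(voxel_size=0.02)      # down-sampled copy (cheaper to analyse)

# RANSAC plane detection on the down-sampled cloud.
plane, _ = down.segment_plane(distance_threshold=0.01, ransac_n=3, num_iterations=1000)
a, b, c, d = plane                                  # plane equation ax + by + cz + d = 0

# Constrain the original, full-resolution cloud with the same plane equation:
# keep only the points farther than a tolerance from the detected plane.
pts = np.asarray(pcd.points)
dist = np.abs(pts @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])
without_plane = pcd.select_by_index(np.where(dist > 0.01)[0].tolist())
```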
[00026] In some embodiments, operations utilized in the removal of the at least one vertex may comprise, but are not limited to, Statistical Outlier Removal (SOR), filtering, region growing and/or clustering.
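A minimal sketch of the SOR operation of paragraph [00026], again assuming Open3D; the neighbour count and standard-deviation ratio are illustrative values only.

```python
# Sketch: Statistical Outlier Removal (SOR) to obtain a sparse 3D point cloud.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")

# Points whose mean distance to their 20 nearest neighbours deviates by more
# than 2 standard deviations from the global mean are removed.
sparse, kept_indices = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
```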
[00027] In some embodiments, further operations may be applied on a sparse 3D point cloud to obtain the at least one specified characteristic that may be utilized to obtain data from the original 3D point cloud. For example, a bounding box may be aligned with the obtained sparse 3D point cloud. Subsequently, the aligned bounding box may be applied to the original 3D point cloud to enable the operations of cropping, isolation, and/or segmentation of an object-of-interest.
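The bounding-box example of paragraph [00027] might look as follows in Open3D; using an oriented bounding box of the sparse cloud as the characteristic is an assumption of this sketch, not the only option contemplated.

```python
# Sketch: bounding box aligned with the sparse cloud, applied to the original
# cloud to crop / isolate / segment the object-of-interest.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")
sparse, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

bbox = sparse.get_oriented_bounding_box()   # characteristic obtained from the sparse cloud
object_of_interest = pcd.crop(bbox)         # characteristic applied to the original cloud
```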
[00028] In some embodiments, the iterative analysis and processing of a 3D representation may comprise the projection of a 3D point cloud onto a 2D plane, the transformation of the 2D projection of the 3D point cloud into a 2D projected contour, obtaining the at least one characteristic from the 2D projected contour, and applying the at least one operation utilizing the obtained at least one characteristic on the original 3D point cloud to obtain a modified version of the original 3D point cloud.
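One possible reading of paragraph [00028], sketched with NumPy and SciPy; projecting onto the x-y plane and using a convex hull as the 2D projected contour are simplifying assumptions, since the embodiments leave the projection plane and contouring method open.

```python
# Sketch: 2D plane projection of a 3D point cloud, 2D projected contour, and a
# contour-derived operation applied back to the original cloud.
import numpy as np
from scipy.spatial import ConvexHull
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")
pts = np.asarray(pcd.points)

proj2d = pts[:, :2]                      # projection onto the x-y plane (assumed plane)
hull = ConvexHull(proj2d)                # convex hull as the 2D projected contour
contour = proj2d[hull.vertices]          # characteristic: ordered contour vertices

# Apply an operation driven by the contour to the original 3D point cloud,
# e.g. keep only the points inside the contour's 2D footprint.
xmin, ymin = contour.min(axis=0)
xmax, ymax = contour.max(axis=0)
mask = (pts[:, 0] >= xmin) & (pts[:, 0] <= xmax) & (pts[:, 1] >= ymin) & (pts[:, 1] <= ymax)
modified = pcd.select_by_index(np.where(mask)[0].tolist())
```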
[00029] In some embodiments, the iterative analysis and processing of a 3D representation may comprise obtaining a 3D mesh of a 3D point cloud, obtaining a 2D and/or 3D skeleton from the 3D mesh, obtaining the at least one spline from the 2D and/or 3D skeleton, obtaining the at least one characteristic from the at least one 2D and/or 3D spline, and applying the at least one operation utilizing the obtained at least one characteristic to the original 3D point cloud to obtain a modified version of the original 3D point cloud.
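Paragraph [00029] can be approximated as below, assuming Open3D for Poisson meshing and SciPy for spline fitting; real skeleton extraction is elided, so the `centreline` samples are a placeholder for whatever 2D/3D skeletonisation an embodiment uses.

```python
# Sketch: mesh from a point cloud, then a spline through (placeholder) skeleton samples.
import numpy as np
import open3d as o3d
from scipy.interpolate import splev, splprep

pcd = o3d.io.read_point_cloud("scan.ply")
pcd.estimate_normals()

# Obtain a 3D mesh from the point cloud (Poisson surface reconstruction).
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)

# Skeletonisation itself is out of scope for this sketch; `centreline` stands in
# for ordered skeleton samples produced by the chosen skeleton operation.
centreline = np.asarray(pcd.points)[::50]

# Fit a smoothing 3D spline through the skeleton samples and evaluate it.
tck, _ = splprep(centreline.T, s=1.0)
spline_pts = np.stack(splev(np.linspace(0.0, 1.0, 200), tck), axis=1)
```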
[00030] In some embodiments, the iterative analysis and processing of a 3D representation may comprise executing a plurality of parallel operations on a 3D representation. For example, the sequential pipeline operations of (i) obtaining a 3D mesh from a 3D point cloud, obtaining a 2D and/or 3D skeleton from the 3D mesh, and obtaining a 2D and/or 3D spline skeleton may be executed in parallel with (ii) the projection of a 3D point cloud onto a 2D plane and the transformation of the 2D projection of the 3D point cloud into a 2D projected contour; obtaining characteristics from each operation pipeline; and applying the at least one operation utilizing the obtained at least one characteristic on the original 3D point cloud to obtain a modified version of the original 3D point cloud.
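A sketch of the parallel execution described in paragraph [00030], using Python's standard concurrent.futures; each pipeline body is trimmed to a single representative operation, with the remaining steps assumed to follow as in the sketches above.

```python
# Sketch: two operation pipelines executed in parallel on the same 3D point cloud.
from concurrent.futures import ThreadPoolExecutor
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")

def mesh_skeleton_pipeline(cloud):
    # Pipeline (i): mesh -> skeleton -> spline (only the meshing step shown here).
    cloud.estimate_normals()
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(cloud, depth=8)
    return mesh

def projection_contour_pipeline(points):
    # Pipeline (ii): 2D plane projection -> 2D projected contour (projection only).
    return points[:, :2]

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(mesh_skeleton_pipeline, pcd)
    f2 = pool.submit(projection_contour_pipeline, np.asarray(pcd.points).copy())
    mesh, projection = f1.result(), f2.result()
# Characteristics from both pipelines can then drive operations on the original cloud.
```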
[00031] In some embodiments, the iterative analysis and processing of a 3D representation may comprise the modification of a 3D point cloud, obtaining the at least one characteristic of the modified 3D point cloud and applying the at least one operation utilizing the obtained at least one characteristic to the 3D model template to obtain a modified version of the original 3D model template.
[00032] In some embodiments, the iterative analysis and processing of a 3D representation may comprise segmentation. For example, a 3D point cloud may be segmented into the at least one segment possessing the at least one bounding box. In parallel, a textured mesh may be obtained from the 3D point cloud. Subsequently, the at least one bounding box obtained from the 3D point cloud is applied to the textured mesh to obtain the at least one object.
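Paragraph [00032] might be sketched as follows with Open3D; the DBSCAN clustering parameters and the textured mesh file name are illustrative assumptions rather than values prescribed by the embodiments.

```python
# Sketch: segment the point cloud into clusters with bounding boxes, then apply
# those boxes to a textured mesh obtained in parallel from the same cloud.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")

labels = np.array(pcd.cluster_dbscan(eps=0.02, min_points=50))   # DBSCAN segmentation
boxes = []
for k in range(labels.max() + 1):
    segment = pcd.select_by_index(np.where(labels == k)[0].tolist())
    boxes.append(segment.get_axis_aligned_bounding_box())

mesh = o3d.io.read_triangle_mesh("textured_mesh.obj")            # textured 3D mesh
objects = [mesh.crop(box) for box in boxes]                      # one object per bounding box
```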
[00033] In some embodiments, the iterative analysis and processing of a 3D representation may comprise the modification of a 3D point cloud and/or the modification of a 3D model template, obtaining the at least one characteristic from the modified 3D point cloud and/or the modified 3D model template, and applying the at least one operation utilizing the obtained at least one characteristic to the 3D point cloud and/or 3D model template to obtain a modified version of the 3D point cloud and/or 3D model template.
[00034] In some embodiments, the iterative analysis and processing of a 3D representation may comprise the modification of a 3D model template and a 3D point cloud to generate a reposed 3D point cloud. For example, this operation may include transferring the at least one pattern and the at least one characteristic from a 3D point cloud to a 3D model template to obtain a reposed 3D model template. Subsequently, the at least one operation is applied to the reposed 3D model template to obtain the at least one bounding box and/or the at least one landmark, and to align the at least one bounding box and/or the at least one landmark to the reposed 3D model template. Further, the at least one aligned bounding box may be applied to the original 3D point cloud to obtain a reposed 3D point cloud. Further, the at least one landmark may be applied to a reposed 3D point cloud to obtain a reposed 3D point cloud possessing the at least one landmark. [00035] In some embodiments, the iterative analysis and processing of a 3D representation may comprise the modification of a 3D model template and a 3D point cloud utilizing principal components analysis (PCA). This method may comprise applying the operation of the at least one translation to align the 3D model template with the 3D point cloud; applying the operation of the at least one rotation to further align the 3D model template with respect to the 3D point cloud; subsequently applying iterative closest point registration to refine the 3D model template to obtain an aligned 3D model template; and subsequently applying the operation of morphing to align the position of the vertices of the 3D model template to the position of the vertices of the 3D point cloud to obtain a morphed 3D model template.
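The alignment-and-morphing pipeline of paragraphs [00034]-[00035] could be sketched as follows with Open3D and NumPy; the nearest-neighbour snap at the end is a crude stand-in for the morphing operation of the embodiments, and all file names and thresholds are illustrative assumptions.

```python
# Sketch: PCA-based coarse alignment, ICP refinement, then a crude vertex morph.
import numpy as np
import open3d as o3d

template = o3d.io.read_point_cloud("template.ply")   # 3D model template vertices
scan = o3d.io.read_point_cloud("user_scan.ply")      # reconstructed 3D point cloud

# Translation: bring the template centroid onto the scan centroid.
template.translate(scan.get_center() - template.get_center())

# Rotation: align principal axes (PCA via SVD); axis sign flips are ignored here.
def principal_axes(pc):
    pts = np.asarray(pc.points) - pc.get_center()
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    return vt.T

R = principal_axes(scan) @ principal_axes(template).T
template.rotate(R, center=template.get_center())

# Refinement: iterative closest point registration of the template to the scan.
icp = o3d.pipelines.registration.registration_icp(
    template, scan, max_correspondence_distance=0.05,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
template.transform(icp.transformation)

# Morphing: snap each template vertex to its nearest scan vertex (stand-in only).
tree = o3d.geometry.KDTreeFlann(scan)
scan_pts = np.asarray(scan.points)
morphed = np.array([scan_pts[tree.search_knn_vector_3d(p, 1)[1][0]]
                    for p in np.asarray(template.points)])
template.points = o3d.utility.Vector3dVector(morphed)
```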
[00036] The terminology used herein is only intended to describe particular representative embodiments and is not intended to be limiting of the present technology. As used herein, the singular forms "a," "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated characteristics, features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other characteristics, features, integers, steps, operations, elements, components, and/or groups thereof.
BRIEF DESCRIPTION OF THE DRAWINGS
[00037] The features, aspects, and advantages of the present disclosure will become better understood with regard to the following description, appended claims, and accompanying drawings, in which:
[00038] FIG. 1 depicts a functional block diagram of a device for executing a method of iterative analysis and processing of a 3D representation, in accordance with embodiments of the present disclosure;
[00039] FIG. 2 depicts a high-level flow diagram of the method of iterative analysis and processing of a 3D representation to obtain data from a 3D representation, in accordance with the embodiments of the present disclosure; [00040] FIG. 3 depicts a high-level flow diagram of the method of iterative analysis and processing of a 3D representation to detect planes in a 3D representation, in accordance with the embodiments of the present disclosure;
[00041] FIG. 4 depicts a high-level flow diagram of the method of iterative analysis and processing of a 3D representation to crop a 3D representation, in accordance with the embodiments of the present disclosure;
[00042] FIG. 5 depicts a high-level flow diagram of the method of iterative analysis and processing of a 3D representation utilizing plurality of parallel operations on a 3D representation, in accordance with the embodiments of the present disclosure;
[00043] FIG. 6 depicts a high-level flow diagram of the method of iterative analysis and processing of a 3D representation to obtain data from a second 3D representation, in accordance with the embodiments of the present disclosure;
[00044] FIG. 7 depicts a high-level flow diagram of the method of iterative analysis and processing of a 3D representation to obtain objects from a 3D representation, in accordance with the embodiments of the present disclosure;
[00045] FIG. 8 depicts a high-level flow diagram of the method of iterative analysis and processing of a first and second 3D representation to obtain data from a first and/or second 3D representation, in accordance with the embodiments of the present disclosure;
[00046] FIG. 9 depicts a high-level flow diagram of the method of iterative analysis and processing of the at least one 3D representation to repose a 3D representation, in accordance with the embodiments of the present disclosure;
[00047] FIG. 10 depicts a high-level flow diagram of the method of iterative analysis and processing of the at least one 3D representation to morph a 3D representation, in accordance with the embodiments of the present disclosure;
[00048] FIG. 11A depicts a visual of a 3D model template of a human body;
[00049] FIG. 11B is a visual of a 3D reconstructed point cloud of a human body; [00050] FIG. 11C is a visual of a 3D model template of a human body morphed and aligned to a 3D reconstructed point cloud of a human body;
[00051] FIG. 12A is a visual of a 3D reconstructed point cloud containing multiple objects, such as, a human hand and wrist, and a planar surface;
[00052] FIG. 12B is a visual of a 3D reconstructed point cloud containing a human hand and wrist subsequent to the removal of a planar surface;
[00053] FIG. 13A is a visual of a 3D mesh containing a human hand and wrist;
[00054] FIG. 13B is a visual of a 3D mesh containing a human hand and wrist with a skeleton;
[00055] FIG. 14A is a visual of a 2D projection of a human hand and wrist with contour;
[00056] FIG. 14B is a visual of a 2D contour of a human hand and wrist with landmarks;
[00057] FIG. 15A is a visual of a 2D point cloud of a human hand and wrist with a 2D contour line, landmarks and segmentation;
[00058] FIG. 15B is a visual of a 2D projection of a human hand and wrist with contour and finger segments;
[00059] FIG. 16 is a visual of a 3D point cloud of a human finger with skeleton;
[00060] FIG. 17A is a visual of a 2D slice of a human finger;
[00061] FIG. 17B is a visual of a median-fitting circle applied on a 2D slice of a human finger;
[00062] FIG. 17C is a visual of an exterior contour-fitting circle applied on a 2D slice of a human finger;
[00063] FIGs. 18A-B are visuals of a user interface utilized for finger selection; and
[00064] FIG. 19 is a visual of a user interface displaying finger measurements.
DETAILED DESCRIPTION
[00065] Various exemplary embodiments of the described technology will be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. The present inventive concept may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that the disclosure will be thorough and complete, and will fully convey the scope of the present inventive concept to those skilled in the art. In the drawings, the sizes and relative sizes of layers and regions may be exaggerated for clarity. Like numerals refer to like elements throughout.
[00066] It will be understood that, although the terms first, second, third etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are used to distinguish one element from another. Thus, a first element discussed below could be termed a second element without departing from the teachings of the present inventive concept. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
[00067] It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., "between" versus "directly between," "adjacent" versus "directly adjacent," etc.).
[00068] The terminology used herein is only intended to describe particular exemplary embodiments and is not intended to be limiting of the present inventive concept. As used herein, the singular forms "a," "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. [00069] Moreover, all statements herein reciting principles, aspects, and embodiments of the present technology, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof, whether they are currently known or developed in the future. Thus, for example, it will be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the present technology. Similarly, it will be appreciated that any flowcharts, flow diagrams, state transition diagrams, pseudo-code, and the like represent various processes which may be substantially represented in computer-readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
[00070] The functionality of the various elements shown in the figures, including any functional block labeled as a "processor", may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. In some embodiments of the present technology, the processor may be a general purpose processor, such as a central processing unit (CPU) or a processor dedicated to a specific purpose, such as a digital signal processor (DSP). Moreover, explicit use of the term a "processor" should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read-only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included.
[00071] Software modules, or simply modules, which are implied to be software, may be represented herein as any combination of flowchart elements or other elements indicating the specified functionality and performance of process steps and/or textual description. Such modules may be executed by hardware that is expressly or implicitly shown. Moreover, it should be understood that these modules may, for example include, without being limitative, computer program logic, computer program instructions, software, stack, firmware, hardware circuitry or any combinations thereof that are configured to provide the required capabilities and specified functionality. [00072] Moreover, the iterative processes and related functionality described herein, may include, but are not limited to: down-sampling; segmentation; detecting planar surfaces; removal of vertices; obtaining a bounding box; 2D plane projection; obtaining a 3D mesh; reposing; principal component analysis; closest-point registration of vertices; aligning to a template; scaling; slicing; clustering; region growing; filtering; obtaining a 3D skeleton; obtaining a 2D skeleton; obtaining a 3D spline; obtaining a 2D spline; geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
[00073] As noted above, typical 3D point clouds comprise densely-packed, high resolution data containing a variety of associated distortions that often require extensive corrections to object features data extracted therefrom. Therefore, the inventive concepts and related aspects presented herein are directed to efficient systems and methods that provide precise, quality 3D representations of object feature data extracted from 3D point clouds, in accordance with the embodiments of the present disclosure.
[00074] To this end, FIG. 1 depicts a functional block diagram of a device 10 configured for generating and/or processing a three-dimensional (3D) point cloud, in accordance with embodiments of the present disclosure. It is to be expressly understood that the device 10 as depicted is merely an illustrative implementation of the present technology. In some cases, what are believed to be helpful examples of modifications to the device 10 may also be set forth below. This is done merely as an aid to understanding, and, again, not to define the scope or set forth the bounds of the present technology. These modifications are not an exhaustive list, and, as a person skilled in the art would understand, other modifications are likely possible. Further, where this has not been done (i.e., where no examples of modifications have been set forth), it should not be interpreted that no modifications are possible and/or that what is described is the sole manner of implementing that element of the present technology. As a person skilled in the art would understand, this is likely not the case. In addition, it is to be understood that the device 10 may provide in certain instances simple embodiments of the present technology, and that where such is the case they have been presented in this manner as an aid to understanding. As persons skilled in the art would understand, various embodiments of the present technology may be of a greater complexity. [00075] As shown, the device 10 comprises a computing unit 100 that may receive captured images of an object to be characterized. The computing unit 100 may be configured to generate the 3D point cloud as a representation of the object to be characterized. The computing unit 100 is described in greater detail hereinbelow.
[00076] In some embodiments, the computing unit 100 may be implemented by any of a conventional personal computer, a controller, and/or an electronic device (e.g., a server, a controller unit, a control device, a monitoring device etc.) and/or any combination thereof appropriate to the relevant task at hand. In some embodiments, the computing unit 100 comprises various hardware components including one or more single or multi-core processors collectively represented by a processor 110, a solid-state drive (SSD) 150, a random access memory (RAM) 130, a dedicated memory 140, and an input/output interface 160. The computing unit 100 may be a computer specifically designed to operate a machine learning algorithm (MLA) and/or deep learning algorithms (DLA) or may be a generic computer system.
[00077] In some other embodiments, the computing unit 100 may be an "off the shelf" generic computer system. In some embodiments, the computing unit 100 may also be distributed amongst multiple processing systems. The computing unit 100 may also be specifically dedicated to the implementation of the present technology. As a person in the art of the present technology may appreciate, multiple variations as to how the computing unit 100 is implemented may be envisioned without departing from the scope of the present technology.
[00078] The communications between the various components of the computing unit 100 may be enabled by one or more internal and/or external facilities 170 (e.g., a PCI bus, universal serial bus, IEEE 1394 "Firewire" bus, SCSI bus, Serial-ATA bus, ARINC bus, etc.), to which the various hardware components are electronically coupled.
[00079] The input/output interface 160 may provide networking capabilities, such as wired or wireless access. As an example, the input/output interface 160 may comprise a networking interface such as, but not limited to, one or more network ports, one or more network sockets, one or more network interface controllers and the like. Multiple examples of how the networking interface may be implemented will become apparent to the person skilled in the art of the present technology. For example, but without being limitative, the networking interface may implement specific physical layer and data link layer standard such as Ethernet, Fibre Channel, Wi-Fi or Token Ring. The specific physical layer and the data link layer may provide a base for a full network protocol stack, allowing communication among small groups of computers on the same local area network (LAN) and large-scale network communications through routable protocols, such as Internet Protocol (IP).
[00080] According to embodiments of the present technology, the solid-state drive (SSD) 150 stores program instructions suitable for being loaded into the RAM 130 and executed by the processor 110. Although represented as an SSD 150, any type of suitable memory type may be used, such as, for example, hard disk, optical disk, and/or removable storage media. According to embodiments of the present technology, the SSD 150 stores program instructions suitable for being loaded into the RAM 130 and executed by the processor 110 for executing the generation of 3D representation of objects. For example, the program instructions may be part of a library or an application.
[00081] The processor 110 may be a general-purpose processor, such as a central processing unit (CPU) or a processor dedicated to a specific purpose, such as a digital signal processor (DSP). In some embodiments, the processor 110 may also rely on an accelerator 120 dedicated to expediting certain processing tasks, such as executing the methods described below. In some embodiments, the processor 110 and/or the accelerator 120 may be implemented as one or more field programmable gate arrays (FPGAs). Moreover, explicit use of the term "processor", should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, application specific integrated circuit (ASIC), read-only memory (ROM) for storing software, RAM, and non-volatile storage. Other hardware, conventional and/or custom, may also be included.
[00082] In some embodiments, the iterative analysis and processing of a 3D representation 100 may be implemented by an imaging device or any sensing device configured to optically sense or detect certain features of an object-of-interest, such as, but not limited to, a camera, a video camera, a microscope, endoscope, etc. In some embodiments, imaging systems may be implemented as a user computing and communication-capable device, such as, but not limited to, a camera, a video camera, endoscope, a mobile device, tablet device, a microscope, server, controller unit, control device, monitoring device, etc. [00083] To this end, the device 10 comprises an imaging system 18 that may be configured to capture Red-Green-Blue (RGB) images. As such, the device 10 may be referred to as the "imaging mobile device" 10. The imaging system 18 may comprise image sensors such as, but not limited to, Charge-Coupled Device (CCD) or Complementary Metal Oxide Semiconductor (CMOS) sensors and/or digital cameras. Imaging system 18 may convert an optical image into an electronic or digital image and may send captured images to the computing unit 100. In the same or other embodiments, the imaging system 18 may be a single-lens camera providing RGB pictures. In some embodiments, the device 10 comprises depth sensors to acquire RGB-Depth (RGBD) pictures. Broadly speaking, any device suitable for generating a 3D point cloud may be used as the imaging system 18 including but not limited to depth sensors, 3D scanners or any other suitable devices.
[00084] In the same or other embodiments, imaging system 18 may send captured data to the computing unit 100. In the same or other embodiments, imaging system 18 may convert an optical image into an electronic or digital image and may send captured images to the computing unit 100.
[00085] In the same or other embodiments, the imaging system 18 may be a single-lens camera providing RGB pictures. In some embodiments, the device 10 comprises depth sensors to acquire RGB-Depth (RGBD) pictures. Broadly speaking, any device suitable for generating a 3D point cloud may be used as the imaging system 18 including but not limited to depth sensors, 3D scanners or any other suitable devices.
[00086] Returning to FIG. 1, the device 10 may further comprise an Inertial Sensing Unit (ISU) 14 configured to be used in part by the computing unit 100 to determine a position of the imaging system 18 and/or the device 10. As such, the computing unit 100 may determine a set of coordinates describing the location of the imaging system 18, and thereby the location of the device 10, in a coordinate system based on the output of the ISU 14. Generation of the coordinate system is described hereinafter. The ISU 14 may comprise 3-axis accelerometer(s), 3-axis gyroscope(s), and/or magnetometer(s) and may provide velocity, orientation, and/or other position related information to the computing unit 100.
[00087] The ISU 14 may output measured information in synchronization with the capture of each image by the imaging system 18. The ISU 14 may be used to determine the set of coordinates describing the location of the device 10 for each captured image of a series of images. Therefore, each image may be associated with a set of coordinates of the device 10 corresponding to a location of the device 10 when the corresponding image was captured. Furthermore, information provided by the ISU 14 may be used to determine a coordinate system and/or a scale corresponding to the object to be characterized. Other approaches may be used to determine said scale, for instance by including a reference object whose size is known in the captured images, near the object to be characterized.
[00088] Additionally, the device 10 may include a screen or display 16 capable of rendering color images, including 3D images. In some embodiments, the display 16 may be used to display live images captured by the imaging system 18, 3D point clouds, Augmented Reality (AR) images, Graphical User Interfaces (GUIs), program outputs, etc. In some embodiments, display 16 may comprise a touchscreen display to permit users to input data via some combination of virtual keyboards, icons, menus, or other Graphical User Interfaces (GUIs). In some embodiments, display 16 may be implemented using a Liquid Crystal Display (LCD) display or a Light Emitting Diode (LED) display, such as an Organic LED (OLED) display.
[00089] In other embodiments, display 16 may be remotely communicatively connected to the device 10 via a wired or a wireless connection (not shown), so that outputs of the computing unit 100 may be displayed at a location different from the location of the device 10. In these embodiments, the display 16 may be operationally coupled to, but housed separately from, other functional units and systems in device 10.
[00090] The device 10 comprises a mobile communication device, such as, for example, a mobile phone, handheld computer, a personal digital assistant, tablet, a network base station, a media player, a navigation device, an e-mail device, a game console, or any other device whose features are similar or equivalent to providing the aforementioned capabilities.
[00091] As shown, the device 10 may comprise a memory 12 communicatively connected to the computing unit 100 and configured to store without limitation data, captured images, depth values, sets of coordinates of the device 10, 3D point clouds, and raw data provided by ISU 14 and/or the imaging system 18. The memory 12 may be embedded in the device 10 as in the illustrated embodiment of Figure 1 or located in an external physical location. The computing unit 100 may be configured to access the content of the memory 12 via a network (not shown) such as a Local Area Network (LAN) and/or a wireless connection such as a Wireless Local Area Network (WLAN).
[00092] The device 10 may also include a power system (not shown) for powering the various components. The power system may include a power management system, one or more power sources, such as, for example, a battery, alternating current (AC) source, a recharging system, a power failure detection circuit, a power converter or inverter, and any other components associated with the generation, management and distribution of power in mobile or non-mobile devices.
[00093] With this configuration, the device 10 may also be suitable for generating the 3D point cloud, based on images of an object. Such images may have been captured by the imaging system 18, such as, for example, by generating 3D point cloud images according to the teachings of the Patent Cooperation Treaty Patent Publication No. 2020/240497, the entirety of the contents of which is hereby incorporated by reference.
[00094] Equally notable, device 10 may perform the operations and steps of the methods and processes described in the present disclosure. More specifically, the device 10 may execute the capturing of images of the object to be characterized and the generating of a 3D point cloud including data points representative of the object, as well as executing methods for the characterization of the 3D point cloud. In at least some embodiments, the device 10 is communicatively connected (e.g., via any wired or wireless communication link including, for example, 4G, LTE, Wi-Fi, or any other suitable connection) to an external computing device 23 (e.g., a server) adapted to perform some or all of the methods for characterization of the 3D point cloud. As such, operation of the computing unit 100 may be shared with the external computing device 23.
[00095] In one embodiment, the device 10 accesses the 3D point cloud by retrieving information about the data points of the 3D point cloud from the RAM 130 and/or the memory 12. In other embodiments, the device 10 accesses a 3D point cloud by receiving information about the data points of the 3D point cloud from the external computing device 23.
[00096] FIG. 2 depicts a high-level flow diagram of computer-implemented method 200 of iterative analysis and processing of a 3D representation to obtain data from a 3D representation, in accordance with the embodiments of the present disclosure. The method 200 is directed to obtaining at least one characteristic indicative of a desired feature of an object. It is to be expressly understood that the method 200 as depicted is merely an illustrative implementation of the present technology. In some cases, what are believed to be helpful examples of modifications to the method 200 may also be set forth below. This is done merely as an aid to understanding, and, again, not to define the scope or set forth the bounds of the present technology. These modifications are not an exhaustive list, and, as a person skilled in the art would understand, other modifications are likely possible. Further, where this has not been done (i.e., where no examples of modifications have been set forth), it should not be interpreted that no modifications are possible and/or that what is described is the sole manner of implementing that element of the present technology. As a person skilled in the art would understand, this is likely not the case. In addition, it is to be understood that the device 10 may provide in certain instances simple embodiments of the present technology, and that where such is the case they have been presented in this manner as an aid to understanding. As persons skilled in the art would understand, various embodiments of the present technology may be of a greater complexity.
[00097] It will be understood that the method 200 or one or more steps thereof may be performed by a computer system, such as the computer system 100. The method 200, or one or more steps thereof, may be embodied in computer-executable instructions that are stored in a computer-readable medium, such as a non-transitory mass storage device, loaded into memory and executed by a CPU. Some steps or portions of steps in the flow diagram may be omitted or changed in order.
[00098] As shown, the method 200 incorporates a 3D representation module 201, an operation module 202 to obtain a modified 3D representation 203, an operation module 204 to obtain at least one characteristic from the modified 3D representation 203, an operation module 205 to apply the at least one characteristic obtained from the modified 3D representation 203 to the 3D representation 201, and a data module 206 to contain the 3D representation 201 data obtained by operation module 205. The elements of the method 200 of iterative analysis and processing of a 3D representation will be described in detail below.
[00099] The method 200 of iterative analysis and processing of a 3D representation may be applied to partial 3D point clouds, 3D templates, 3D models, 3D meshes, synthetic 3D models, non-synthetic 3D models, synthetic 3D objects, non-synthetic 3D objects, generated 3D models, 3D scans, voxels, continuous functions, and/or computer-aided design (CAD) files.
[000100] In some embodiments, synthetic objects may comprise, for example, synthetic 3D models, CAD models, 3D models acquired from industrial-oriented 3D software, medical-oriented 3D software and/or non-specialized 3D software, and 3D models generated by processes such as RGB photogrammetry, RGB-D photogrammetry, and/or other reconstruction techniques applied to real objects. In some embodiments, non-synthetic objects may comprise any 3D point cloud generated by operations such as: RGB-based sensors; photogrammetry techniques such as Colmap™, Visual SFM™, Open3D™; computational depth-based techniques such as machine learning-based depth, and disparity-based depth utilizing stereo cameras or multiple positions of a single camera; and RGB-D sensors, such as LiDAR and/or depth cameras, etc.
[000101] In the context of the present technology, a "non-synthetic object" may refer to any object in the real world. Non-synthetic objects are not synthesized using any computer rendering techniques; rather, they are scanned or captured by any non-limiting means, such as using a suitable sensor (e.g., a camera, optical sensor, depth sensor or the like), to generate or reconstruct a 3D point cloud representation of the non-synthetic 3D object using any "off the shelf" technique, including but not limited to photogrammetry, machine learning-based techniques, depth maps or the like. Certain non-limiting examples of a non-synthetic 3D object include any real-world object such as a computer screen, a table, a chair, a coffee mug, a mechanical component on an assembly line, or any type of inanimate object or entity. Without limitation, a non-synthetic 3D object may also be an animate entity such as an animal, a plant, a human or a portion thereof.
[000102] In some embodiments, the 3D representation may be acquired or may have been previously acquired and is accessed from a memory or storage device upon executing a de-noising process. As used herein, a 3D representation may refer to a simple 3D representation of an object in which the vertices are not necessarily connected to each other. If they are not connected to each other, the information contained in this kind of representation is the coordinates of each vertex (e.g., x, y, z in the case of a Cartesian coordinate system) and its color (e.g., components in the Red-Green-Blue color space). The 3D point cloud reconstruction may be the result of 3D scanning, and a common format for storing such point clouds is the Polygon File Format (PLY). Point cloud reconstructions are seldom used directly by users, as they typically do not provide a realistic representation of an object, but rather a set of 3D points without relations to each other aside from their positions and colors.
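By way of non-limiting illustration only, the following sketch shows how a point cloud stored in the PLY format might be loaded and its per-vertex coordinates and colors inspected. It assumes the open-source Open3D and NumPy libraries and a hypothetical file name "scan.ply"; none of these choices is mandated by the present technology.

```python
# Illustrative sketch only: load a PLY point cloud and inspect its vertices.
# Library choice (Open3D) and the file name are assumptions.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")   # 3D point cloud of the object
xyz = np.asarray(pcd.points)                # N x 3 array of x, y, z coordinates
rgb = np.asarray(pcd.colors)                # N x 3 array of R, G, B components in [0, 1]
print(f"{xyz.shape[0]} vertices; first vertex at {xyz[0]}, color {rgb[0]}")
```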
[000103] In some embodiments, acquiring a 3D point cloud may involve using a plurality of depth maps, each depth map corresponding to an image within a stream of images acquired using, e.g., the imaging system 18 as shown in FIG. 1. Generating each depth map may include executing a machine learning algorithm (MLA), which may include a deep learning algorithm (DLA), based on the image and a second image of the stream of images. In some embodiments, generating each depth map may include utilizing depth information provided by one or more depth sensor(s) integrated within the imaging system. In some embodiments, generating each depth map may include utilizing at least two images at the same position and coordinates, within the imaging system’s coordinate system, using at least two lenses (e.g., using a dual- or triple-lens Red-Green-Blue (RGB) imaging system).
[000104] In some embodiments, a plurality of fragments of an object’s 3D point cloud can be generated based on depth map information. Each fragment may be a 3D point cloud generated based on at least one image, the corresponding depth maps of the at least one image, and the corresponding positions of the imaging system of the at least one image. These fragments can be merged to form a single 3D point cloud.
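As a minimal sketch of the merging step, assuming each fragment has already been expressed in a common coordinate system (the registration of fragments is not shown), the fragments may simply be concatenated:

```python
# Illustrative sketch only: merge point cloud fragments that already share a
# coordinate system. Assumes Open3D/NumPy and that each fragment carries colors.
import numpy as np
import open3d as o3d

def merge_fragments(fragments):
    merged = o3d.geometry.PointCloud()
    merged.points = o3d.utility.Vector3dVector(
        np.vstack([np.asarray(f.points) for f in fragments]))
    merged.colors = o3d.utility.Vector3dVector(
        np.vstack([np.asarray(f.colors) for f in fragments]))
    return merged
```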
[000105] The method 200 begins at module 201, wherein a 3D representation is accessed. In some embodiments, the 3D representation 201 may include, but is not limited to, a complete or partial 3D point cloud, a 3D template, a 3D model, a 3D mesh, a synthetic 3D model, a non-synthetic 3D model, a synthetic 3D object, a non-synthetic 3D object, a generated 3D model, a 3D scan, voxels, continuous functions, and/or a computer-aided design (CAD) file.
[000106] The method 200 continues at operation module 202, wherein at least one operation is applied to obtain a modified 3D representation 203 of the 3D representation 201. In some embodiments, the at least one operation of 202 to obtain a modified 3D representation 203 may include, but is not limited to, down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
[000107] In some embodiments, the modified 3D representation 203 may include, but is not limited to, a down-sampled 3D point cloud, a sparse 3D point cloud, a segmented 3D point cloud, a plurality of segments of a 3D point cloud, a plurality of 3D point cloud segments with bounding boxes, a bounding box, a 3D mesh, a textured 3D mesh, a 2D projection, a 3D spline, a reposed 3D representation, a 3D template, a 3D representation aligned to a template, a scaled 3D representation, slices of a 3D representation, a 3D skeleton, a 2D skeleton and/or a geometrically transformed 3D representation.
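As one non-limiting sketch of operation module 202, a modified 3D representation may be obtained by down-sampling the 3D point cloud and applying statistical outlier removal (SOR). The voxel size and SOR parameters below are assumptions chosen for illustration, not prescribed values.

```python
# Illustrative sketch of module 202: obtain a down-sampled, de-noised copy of the
# 3D point cloud (a modified 3D representation 203). Parameters are assumptions.
import open3d as o3d

def modify_representation(pcd, voxel_size=0.005):
    down = pcd.voxel_down_sample(voxel_size=voxel_size)           # down-sampling
    down, _ = down.remove_statistical_outlier(nb_neighbors=20,    # statistical
                                              std_ratio=2.0)      # outlier removal
    return down
```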
[000108] The method 200 continues at operation module 204, wherein at least one operation is applied to the modified 3D representation 203 to obtain at least one characteristic indicative of a desired feature of an object of the modified 3D representation 203. In some embodiments, the at least one operation of 204 to obtain the at least one characteristic of the modified 3D representation 203 may include, but is not limited to, down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
[000109] In some embodiments, the at least one characteristic obtained from the modified 3D representation 203 may include, but is not limited to, feature data comprising information concerning color, depth, heat, two dimensions (2D), three dimensions (3D), a continuous function, 3D descriptors, spatial distribution, geometric attributes, scale invariant features, shape descriptors, motion, transformations, key points, instances and/or functions.
[000110] The method 200 continues at operation module 205, wherein the at least one characteristic obtained from the modified 3D representation 203 is applied to the 3D representation 201 to obtain the desired feature object data from the 3D representation 201. The method 200 then proceeds to data module 206, where the 3D representation 201 feature data obtained by operation module 205 is stored.

[000111] FIG. 3 depicts a high-level flow diagram of the method 300 of iterative analysis and processing of a 3D representation to detect planes in the 3D representation, in accordance with the embodiments of the present disclosure. The method 300 is directed to detecting at least one planar surface related to desired feature object data. As shown, the method 300 commences with a 3D point cloud 301 in which at least one planar surface is to be detected.
[000112] The method 300 continues at operation module 302, in which the 3D point cloud 301 is down-sampled to obtain a modified 3D point cloud 303. The modified 3D point cloud 303 has a reduced density relative to the 3D point cloud 301, which facilitates the detection of salient desired feature object data. The method 300 then proceeds to operation module 304, in which the at least one operation is applied to the modified 3D point cloud 303 to detect at least one planar surface related to the desired feature object data in the modified 3D point cloud 303. As noted above, the at least one operation may comprise down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
[000113] The method 300 continues at operation module 305, in which at least one planar surface equation is obtained from the modified 3D point cloud 303. At operation module 306, the at least one planar surface equation obtained from the modified 3D point cloud 303 is applied to the 3D point cloud 301 to detect the at least one planar surface related to the desired feature object data, or to remove irrelevant or undesired planar surfaces of the at least one planar surface, in the 3D point cloud 301. The method 300 concludes at the 3D point cloud 307, which includes the 3D point cloud 301 with the at least one planar surface data obtained by operation module 306.
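A minimal sketch of the method 300 is given below: a planar surface is detected on the down-sampled cloud, and the recovered plane equation ax + by + cz + d = 0 is then applied to the full-resolution cloud to remove the points lying on that plane. The RANSAC thresholds and voxel size are assumptions for illustration only.

```python
# Illustrative sketch of method 300: plane detection on a down-sampled cloud
# (modules 302-305), plane equation applied back to the full cloud (module 306).
import numpy as np
import open3d as o3d

def remove_dominant_plane(pcd, voxel_size=0.01, dist=0.01):
    down = pcd.voxel_down_sample(voxel_size=voxel_size)              # modified cloud 303
    plane, _ = down.segment_plane(distance_threshold=dist,           # plane equation
                                  ransac_n=3, num_iterations=1000)   # from module 305
    a, b, c, d = plane
    xyz = np.asarray(pcd.points)
    dist_to_plane = np.abs(xyz @ np.array([a, b, c]) + d)            # apply to cloud 301
    keep = np.where(dist_to_plane > dist)[0]                         # drop plane points
    return pcd.select_by_index(keep), (a, b, c, d)
```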
[000114] FIG. 4 depicts a high-level flow diagram of the method 400 of iterative analysis and processing of a 3D representation to crop a 3D representation based on a sparse-based operation, in accordance with the embodiments of the present disclosure. The method 400 commences at operation module 402, wherein a plurality of vertices related to the desired feature object data is removed from a 3D point cloud 401 to obtain a modified 3D point cloud 403.

[000115] The method 400 continues at operation module 404, in which at least one bounding box indicative of a periphery related to the desired feature object data is obtained from the modified 3D point cloud 403. Further, at operation module 404, the at least one bounding box is aligned with the modified 3D point cloud 403.
[000116] The method 400 proceeds to operation module 405, in which the aligned bounding box obtained from modified 3D point cloud 403 is applied to 3D point cloud 401 to obtain cropped 3D point cloud 406.
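A minimal sketch of the method 400, assuming Open3D and illustrative parameter values, might derive an oriented bounding box from a sparser, de-noised copy of the cloud and use it to crop the original cloud:

```python
# Illustrative sketch of method 400: bounding box from a sparse copy (modules
# 402-404) applied to the original cloud (module 405) to obtain a cropped cloud 406.
import open3d as o3d

def crop_by_bounding_box(pcd):
    sparse, _ = pcd.remove_statistical_outlier(nb_neighbors=16, std_ratio=1.5)
    sparse = sparse.voxel_down_sample(voxel_size=0.01)       # modified cloud 403
    obb = sparse.get_oriented_bounding_box()                 # bounding box aligned to the data
    return pcd.crop(obb)                                     # cropped 3D point cloud 406
```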
[000117] FIG. 5 depicts a high-level flow diagram of the method 500 of iterative analysis and processing of a 3D representation utilizing a plurality of parallel operations on the 3D representation, in accordance with the embodiments of the present disclosure. As shown, in certain embodiments, a first implementation of the method 500 begins at operation module 502, in which a 3D point cloud 501 is projected onto a 2D plane representation to obtain a 2D projection of the 3D point cloud 501.
[000118] The method 500 continues at operation module 503, in which at least one transformation is applied to the 2D projection of the 3D point cloud 501 to obtain a 2D contour of the 3D point cloud 501. The applied transformation may comprise, for example, a geometrical transformation.
[000119] The method 500 continues at operation module 504, wherein the 2D contour of the 3D point cloud 501 is applied to the 3D point cloud 501 to obtain at least one characteristic indicative of a desired feature of an object of the 3D point cloud 501. In certain embodiments, method 500 stores the at least one characteristic of 3D point cloud 501 in data module 509.
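As a minimal sketch of this first implementation, the cloud may be projected onto its two dominant principal axes and the convex hull of the projection taken as a simple 2D contour; the hull vertices then index the corresponding 3D boundary points of the original cloud. Treating the convex hull as the contour is an assumption made for brevity.

```python
# Illustrative sketch of the first branch of method 500: 2D projection (502),
# 2D contour (503), contour applied back to the 3D cloud (504).
import numpy as np
from scipy.spatial import ConvexHull

def contour_landmarks(xyz):
    centered = xyz - xyz.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    uv = centered @ vt[:2].T            # 2D projection onto the two dominant axes
    hull = ConvexHull(uv)               # 2D contour of the projection
    return uv, xyz[hull.vertices]       # contour applied back: 3D boundary points
```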
[000120] In certain embodiments, a second implementation of the method 500 begins at operation module 505, wherein the at least one operation is applied to the 3D point cloud 501 to obtain a 2D mesh and/or 3D mesh of the 3D point cloud 501. The method 500 continues at operation module 506, wherein at least one operation is applied to the 2D and/or 3D mesh of the 3D point cloud 501 to obtain a 2D skeletal representation and/or 3D skeletal representation of the 3D point cloud 501. The details of the at least one operation have been described above.

[000121] The method 500 continues at operation module 507, in which the at least one operation is applied to the 2D and/or 3D skeletal representation of the 3D point cloud 501 to obtain a 2D and/or 3D spline skeleton of the 3D point cloud 501. The method 500 continues at operation module 508, wherein the 2D and/or 3D spline skeleton of the 3D point cloud 501 is applied to the 3D point cloud 501 to obtain at least one characteristic indicative of a desired feature of an object of the 3D point cloud 501. The at least one characteristic of the 3D point cloud 501 obtained in 508 is then stored in data module 509.
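As a minimal sketch of the spline step of this second implementation, a parametric smoothing spline may be fitted through an ordered sequence of 3D skeleton points; the skeleton extraction and ordering are assumed to have been performed already.

```python
# Illustrative sketch of modules 506-507: fit a 3D spline skeleton through
# ordered skeleton points (assumed input of shape N x 3, N > 3).
import numpy as np
from scipy.interpolate import splprep, splev

def spline_skeleton(skeleton_xyz, n_samples=100):
    tck, _ = splprep(skeleton_xyz.T, s=1e-4)       # parametric 3D spline
    u = np.linspace(0.0, 1.0, n_samples)
    return np.column_stack(splev(u, tck))          # densely sampled spline skeleton
```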
[000122] In certain embodiments of the method 500, the first and second implementations are conducted in parallel to obtain data from the 3D point cloud 501. In such embodiments, the data obtained from the 3D point cloud 501 may include, but is not limited to, the at least one 2D and/or 3D point cloud segment, the at least one cropped 2D and/or 3D point cloud, the at least one area-of-interest of a 2D and/or 3D point cloud, the at least one fragment of a 2D and/or 3D point cloud, the at least one 2D projection, the at least one 2D contour, the at least one 2D and/or 3D mesh, the at least one 2D and/or 3D skeleton, the at least one 2D slice, the at least one landmark, the at least one 2D and/or 3D spline, the at least one 2D measurement and/or the at least one geometrical transformation.
[000123] In certain embodiments of the method 500, the first and second implementations are conducted sequentially to obtain data from the 3D point cloud 501. In such embodiments, the data obtained from the 3D point cloud 501 may include, but is not limited to, the at least one 2D and/or 3D point cloud segment, the at least one cropped 2D and/or 3D point cloud, the at least one area-of-interest of a 2D and/or 3D point cloud, the at least one fragment of a 2D and/or 3D point cloud, the at least one 2D projection, the at least one 2D contour, the at least one 2D and/or 3D mesh, the at least one 2D and/or 3D skeleton, the at least one 2D slice, the at least one landmark, the at least one 2D and/or 3D spline, the at least one 2D measurement and/or the at least one geometrical transformation.
[000124] FIG. 6 depicts a high-level flow diagram of method 600 of iterative analysis and processing of a first 3D representation to obtain data from a second 3D representation, in accordance with the embodiments of the present disclosure.
[000125] As shown, the method 600 of iterative analysis and processing of a 3D representation incorporates a first 3D representation 601, an operation module 602 to obtain a modified 3D representation 603, an operation module 604 to obtain the at least one characteristic from the modified 3D representation 603, an operation module 605 to apply the at least one characteristic obtained from the modified 3D representation 603 to a second 3D representation 606, and a data module 607 containing data obtained from the second 3D representation 606. The elements of the method 600 of iterative analysis and processing of a 3D representation will be described in detail below.
[000126] The method 600 begins with the first 3D representation 601 being accessed. The first 3D representation 601 and/or the second 3D representation 606 may include, but is not limited to, a complete or partial 3D point cloud, a 3D template, a 3D model, a 3D mesh, a synthetic 3D model, a non-synthetic 3D model, a synthetic 3D object, a non-synthetic 3D object, a generated 3D model, a 3D scan, voxels, continuous functions, and/or computer-aided design (CAD) file data.
[000127] The method 600 continues at operation module 602, wherein at least one operation is applied to obtain a modified 3D representation 603 of the 3D representation 601. In certain embodiments, the at least one operation of 602 to obtain a modified 3D representation 603 may include, but is not limited to, down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
[000128] In some embodiments, the modified 3D representation 603 may include, but is not limited to, a down-sampled 3D point cloud, a sparse 3D point cloud, a segmented 3D point cloud, a plurality of segments of a 3D point cloud, a plurality of 3D point cloud segments with bounding boxes, a bounding box, a 3D mesh, a textured 3D mesh, a 2D projection, a 3D spline, a reposed 3D representation, a 3D template, a 3D representation aligned to a template, a scaled 3D representation, slices of a 3D representation, a 3D skeleton, a 2D skeleton and/or a geometrically transformed 3D representation.
[000129] The method 600 continues at operation module 604, in which the at least one operation is applied to the modified 3D representation 603 to obtain at least one characteristic indicative of a desired feature of an object of the modified 3D representation 603. As noted above, the obtaining of the at least one characteristic of the modified 3D representation 603 may include, but is not limited to, down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
[000130] In some embodiments, the at least one characteristic obtained from the modified 3D representation 603 may include, but is not limited to, feature data comprising information concerning color, depth, heat, two dimensions (2D), three dimensions (3D), a continuous function, 3D descriptors, spatial distribution, geometric attributes, scale invariant features, shape descriptors, motion, transformations, key points, instances and/or functions.
[000131] The method 600 continues at operation module 605, wherein the at least one characteristic obtained from the modified 3D representation 603 is applied to the second 3D representation 606 to obtain data from the second 3D representation 606. The method 600 concludes at data module 607, wherein the second 3D representation 606 data obtained by operation module 605 is stored.
[000132] FIG. 7 depicts a high-level flow diagram of the method 700 of iterative analysis and processing of a 3D representation to obtain objects from a 3D representation, in accordance with the embodiments of the present disclosure. The method 700 is directed to obtaining a plurality of modified 3D representations, such as a modified 3D point cloud and a 3D textured mesh.
[000133] The method 700 begins at operation module 702, wherein at least one operation is applied to a 3D point cloud 701 to obtain a modified 3D point cloud 703. The method 700 continues at operation module 704, wherein the at least one operation is applied to the modified 3D point cloud 703 to obtain at least one bounding box indicative of a periphery related to the desired feature object data in the modified 3D point cloud 703.
[000134] The method 700 proceeds to operation module 707, in which the at least one bounding box obtained from the modified 3D point cloud 703 is applied to a 3D textured mesh 706 to obtain the at least one object from the 3D textured mesh 706. The method 700 concludes at data module 708, wherein the at least one object obtained from the 3D textured mesh 706 is stored.
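A minimal sketch of the method 700, assuming the point cloud and the textured mesh share a coordinate system, might compute the bounding box on the modified cloud and crop the mesh with it:

```python
# Illustrative sketch of method 700: bounding box from the modified point cloud
# (module 704) applied to the textured mesh (module 707) to extract an object.
import open3d as o3d

def extract_object_from_mesh(modified_pcd, textured_mesh):
    bbox = modified_pcd.get_axis_aligned_bounding_box()
    return textured_mesh.crop(bbox)     # the at least one object from the textured mesh
```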
[000135] FIG. 8 depicts a high-level flow diagram of method 800 of iterative analysis and processing of a first and second 3D representation to obtain data from a first and/or second 3D representation, in accordance with the embodiments of the present disclosure. The method 800 is directed to obtaining at least one characteristic from a plurality of 3D representations, such as a first 3D representation 801 and a second 3D representation 802, and subsequent operations are utilized to obtain data from a first 3D representation 801 and/or a second 3D representation 802.
[000136] As shown, the method 800 of iterative analysis and processing of a 3D representation incorporates a first 3D representation 801, a second 3D representation 802, an operation module 803 to obtain the at least one characteristic from 801 and/or 802, an operation module 804 to obtain a modified 3D representation 805, an operation module 806 to obtain the at least one characteristic from the modified 3D representation 805, an operation module 807 to apply the at least one characteristic obtained from the modified 3D representation 805 to the first 3D representation 801 and/or the second 3D representation 802, and a data module 808 to contain data obtained from the first 3D representation 801 and/or the second 3D representation 802. The elements of the method 800 of iterative analysis and processing of a 3D representation will be described in detail below.
[000137] In some embodiments, the 3D representation 801 and/or 802 may include, but is not limited to, a complete or partial 3D point cloud, 3D template, 3D model, 3D mesh, synthetic 3D model, non-synthetic 3D model, synthetic 3D object, non-synthetic 3D object, generated 3D model, 3D scan, voxels, continuous functions, and/or computer-aided design (CAD) file.
[000138] The method 800 begins at operation module 803, in which a first 3D representation 801 and/or a second 3D representation 802 are accessed to obtain the at least one characteristic indicative of a desired feature of an object from 801 and/or 802.
[000139] In some embodiments, the at least one operation of 803 to obtain the at least one characteristic from the first 3D representation 801 and/or the second 3D representation 802 may include, but is not limited to, down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
[000140] In some embodiments, the at least one characteristic obtained from the first 3D representation 801 and/or the second 3D representation 802 may include, but is not limited to, feature data comprising information concerning color, depth, heat, two dimensions (2D), three dimensions (3D), a continuous function, 3D descriptors, spatial distribution, geometric attributes, scale invariant features, shape descriptors, motion, transformations, key points, instances and/or functions.
[000141] The method 800 continues at operation module 804, wherein the at least one operation is applied to obtain a modified 3D representation 805 from the first 3D representation 801 and/or the second 3D representation 802.
[000142] In some embodiments, the modified 3D representation 805 may include, but is not limited to, a down-sampled 3D point cloud, a sparse 3D point cloud, a segmented 3D point cloud, a plurality of segments of a 3D point cloud, a plurality of 3D point cloud segments with bounding boxes, a bounding box, a 3D mesh, a textured 3D mesh, a 2D projection, a 3D spline, a reposed 3D representation, a 3D template, a 3D representation aligned to a template, a scaled 3D representation, slices of a 3D representation, a 3D skeleton, a 2D skeleton and/or a geometrically transformed 3D representation.
[000143] The method 800 continues at operation module 806, wherein the at least one operation is applied to the modified 3D representation 805 to obtain the at least one characteristic of the modified 3D representation 805.
[000144] In some embodiments, the at least one operation of 806 to obtain the at least one characteristic of the modified 3D representation 805 may include, but is not limited to, down-sampling, segmentation, detecting planar surfaces, removing vertices, obtaining a bounding box, 2D plane projection, obtaining a 3D mesh, reposing, principal component analysis, closest-point registration of vertices, aligning to a template, scaling, slicing, clustering, region growing, filtering, obtaining a 3D skeleton, obtaining a 2D skeleton, obtaining a 3D spline, obtaining a 2D spline, geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
[000145] The method 800 continues at operation module 807, wherein the at least one characteristic obtained from the modified 3D representation 805 is applied to 3D representation 801 and/or 3D representation 802 to obtain data from 3D representation 801 and/or 3D representation 802. The method 800 concludes at data module 808, in which the data obtained from 3D representation 801 and/or 3D representation 802 by operation module 807 is stored.
[000146] FIG. 9 depicts a high-level flow diagram of method 900 of iterative analysis and processing of the at least one 3D representation to repose a 3D representation, in accordance with the embodiments of the present disclosure. The method 900 is directed to obtaining a reposed 3D point cloud. As shown, method 900 begins at operation module 903, wherein a 3D model template 901 and a 3D point cloud 902 are accessed to obtain at least one characteristic from 901 and/or 902.
[000147] The method 900 continues at operation module 904, which transfers the at least one characteristic and related patterns to the 3D model template 901 to obtain a reposed 3D model template 905. The method 900 continues at operation module 906, wherein at least one operation is applied to the reposed 3D model template 905 to align at least one bounding box to the reposed 3D model template 905. Further, the at least one operation is applied to the reposed 3D model template 905 to apply at least one landmark to the reposed 3D model template 905.
[000148] The method 900 continues at operation module 907, wherein the at least one bounding box obtained in 906 is applied to the 3D point cloud 902 to obtain the reposed 3D point cloud 908. Further, the at least one operation is applied to the reposed 3D point cloud 908 to apply the at least one landmark obtained in 906 to the reposed 3D point cloud 908.
[000149] FIG. 10 depicts a high-level flow diagram of the method 1000 of iterative analysis and processing of at least one 3D representation to obtain a morphed 3D representation, in accordance with the embodiments of the present disclosure. The method 1000 begins at operation module 1003, wherein at least one principal component analysis operation is applied to a 3D model template 1001 and/or a 3D point cloud 1002 to obtain an aligned 3D model template 1004. In some embodiments, the at least one principal component analysis operation applied in operation module 1003 may include, but is not limited to, translation, rotation, and/or registration.
[000150] The method 1000 continues at operation module 1005, wherein the at least one operation of morphing is applied to align the vertices of an aligned 3D model template 1004 to the position of the vertices of a 3D point cloud 1002 to obtain a morphed 3D model template 1006.
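As a minimal sketch of the alignment step of operation module 1003, the template may be translated to the cloud's centroid and rotated so that its principal axes coincide with those of the cloud. Sign and ordering ambiguities of the principal axes, as well as the subsequent per-vertex morphing of module 1005, are omitted for brevity.

```python
# Illustrative sketch of module 1003: PCA-based alignment of a template to a cloud.
import numpy as np

def pca_align(template_xyz, cloud_xyz):
    t_c, c_c = template_xyz.mean(axis=0), cloud_xyz.mean(axis=0)
    _, _, vt_t = np.linalg.svd(template_xyz - t_c, full_matrices=False)
    _, _, vt_c = np.linalg.svd(cloud_xyz - c_c, full_matrices=False)
    rotation = vt_c.T @ vt_t                        # map template axes onto cloud axes
    return (template_xyz - t_c) @ rotation.T + c_c  # aligned 3D model template 1004
```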
[000151] In accordance with the broad aspects of the present inventive concepts, the disclosed embodiments of the iterative analysis and processing of a 3D representation may be utilized to obtain user body information containing user-specific representations, measurements, and characteristics of the user body and/or the at least one user body part.
[000152] In various embodiments, the obtained user body information may be utilized in at least one transaction between a user and a vendor. In further embodiments, the obtained user body information may be utilized to execute an electronic purchasing transaction between a user and a vendor on a mobile communication device for an item relating to the user's physical characteristics, in which the method comprises executing instructions, by a processor, initiated by user requests; and establishing wireless communications, via a communication interface, with a potential vendor and external entities.
[000153] To this end, FIGs. 11A-C provide illustrative examples in which the method 1000 is utilized to align and morph a vendor-provided 3D model template 1101 to a 3D point cloud of a user body 1102 to obtain an aligned and morphed 3D model template 1103. The aligned and morphed 3D model template 1103 may be utilized by the vendor to provide accurate sizing and fitting for items of apparel available to the user for purchase.
[000154] In accordance with other aspects of the present inventive concepts, FIGs. 12-19 provide an illustrative example of at least one embodiment of the iterative analysis and processing of a 3D representation, wherein user body information containing user-specific representations, measurements, and characteristics of the user body and/or the at least one user body part is utilized for executing an electronic purchasing transaction between a user and a vendor on a mobile communication device for an item relating to the user's physical characteristics.

[000155] The transaction between a user and a vendor begins, wherein a mobile communication device is utilized to obtain a 3D representation of the user body and/or the at least one user body part. The transaction between a user and a vendor further continues, wherein the method 300 is utilized on a 3D point cloud 1201 containing a planar surface 1202 and a 3D object-of-interest 1203 as depicted in FIG. 12A. In the current example, the object-of-interest is a human hand and wrist. The method 300 may be utilized to remove the planar surface 1202 in the 3D point cloud 1201 to obtain a 3D point cloud object-of-interest 1203 as depicted in FIG. 12B.
[000156] The transaction between a user and a vendor may further continue with the method 500, wherein (i) a 3D mesh of the object-of-interest 1301 is obtained from the 3D point cloud object-of-interest 1203 as depicted in FIG. 13A; (ii) a skeleton 1302 is applied to the 3D mesh 1301 as depicted in FIG. 13B; (iii) a 2D projection with contour 1401 is obtained from the 3D mesh 1301 as depicted in FIG. 14A; (iv) a 2D contour with landmarks 1402 is obtained from the 2D projection with contour 1401 as depicted in FIG. 14B; and (v) the operation of segmentation is applied to the 2D contour with landmarks 1402 to obtain a 2D point cloud possessing a 2D contour line, landmarks and segments 1501 as depicted in FIG. 15A. FIG. 15B illustrates a 2D projection with finger segments obtained by the method 500 from the 3D point cloud object-of-interest 1203.
[000157] In accordance with a further broad aspect of the present inventive concepts pertaining to a transaction between a user and a vendor, a measurement is obtained from the 3D object-of-interest 1203. In reference to FIG. 16, the method 500 applies the at least one 2D finger segment obtained in (v) to the 3D object-of-interest 1203 to obtain a 3D object-of-interest 1601. In this example, the 3D object-of-interest 1601 is a human finger. The method 500 further applies a skeleton 1602 to the 3D object-of-interest 1601.
[000158] The method 500 continues with reference to FIG. 17, wherein at least one 2D segment 1701 is obtained from the 3D object-of-interest 1601 and the skeleton 1602 as depicted in FIG. 17A. The method 500 further continues, wherein a median-fitting circle 1702 is obtained from the 2D segment 1701 as depicted in FIG. 17B. The method 500 is further utilized to refine the median-fitting circle 1702 to obtain an exterior contour-fitting circle 1703 as depicted in FIG. 17C. In the current example, the exterior contour-fitting circle 1703 is labeled the best-fit match referenced to the at least one finger obtained from the 3D object-of-interest 1203.

[000159] The example continues, wherein user body information is utilized in an electronic purchasing transaction between a user and a vendor on a mobile communication device for an item relating to the user's physical characteristics. FIG. 18 illustrates an embodiment of a user interface for the user purchase of an article of jewelry. In this example, the article of jewelry may be a ring. FIGs. 18A-B depict a user interface 1801 containing a 3D representation of the user human hand and wrist obtained from the 3D object-of-interest 1203. The user may select the finger segment 1802, which may be highlighted to identify the selected finger. The at least one finger segment 1802 corresponds to the at least one 3D object-of-interest 1601 obtained from the 3D object-of-interest 1203.
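As a minimal sketch of the circle-fitting step described above with reference to FIG. 17, an algebraic least-squares (Kasa) fit of a circle to the 2D cross-section points of the selected finger segment yields a center and radius from which a ring measurement can be derived; the refinement to the exterior contour-fitting circle 1703 is not shown.

```python
# Illustrative sketch: least-squares (Kasa) circle fit to 2D cross-section points.
import numpy as np

def fit_circle_2d(points_2d):
    x, y = points_2d[:, 0], points_2d[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)   # solve A [cx, cy, c]^T = b
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return (cx, cy), radius
```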
[000160] Subsequent to the user selection of a finger-of-interest, the user may be presented with an embodiment of a user interface displaying the at least one best-fit match measure for the finger-of-interest. The best-fit measure corresponds to the exterior contour-fitting circle 1703 obtained from the 3D object-of-interest 1203. In the example depicted in FIG. 19, the best-fit measure for the finger-of-interest is presented in a plurality of common measurement metrics 1901. In the current example, a dropdown window is available for the user to select a specified ring model for purchase. In some examples, a quality operation 1903 may be available for user assessment of the best-fit measure displayed in 1901.
[000161] It will be understood that the features and examples above are not meant to limit the scope of the present disclosure to a single implementation, as other implementations are possible by way of interchange of some or all of the described or illustrated elements. Moreover, where certain elements of the present disclosure can be partially or fully implemented using known components, only those portions of such known components that are necessary for an understanding of the present disclosure are described, and detailed descriptions of other portions of such known components are omitted so as not to obscure the disclosure. In the present specification, an implementation showing a singular component should not necessarily be limited to other implementations including a plurality of the same component, and vice-versa, unless explicitly stated otherwise herein. Moreover, applicants do not intend for any term in the specification or claims to be ascribed an uncommon or special meaning unless explicitly set forth as such. Further, the present disclosure encompasses present and future known equivalents to the known components referred to herein by way of illustration.

[000162] The foregoing description of the specific implementations so fully reveals the general nature of the disclosure that others can, by applying knowledge within the skill of the relevant art(s) (including the contents of any documents cited and incorporated by reference herein), readily modify and/or adapt for various applications such specific implementations, without undue experimentation and without departing from the general concept of the present disclosure. Such adaptations and modifications are therefore intended to be within the meaning and range of equivalents of the disclosed implementations, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance presented herein, in combination with the knowledge of one skilled in the relevant art(s).
[000163] While the above-described implementations have been described and shown with reference to particular steps performed in a particular order, it will be understood that these steps may be combined, sub-divided, or re-ordered without departing from the teachings of the present technology. The steps may be executed in parallel or in series. Accordingly, the order and grouping of the steps is not a limitation of the present technology.
[000164] Moreover, it should be understood that the various embodiments disclosed by the present disclosure, have been presented by way of example, and not by limitation. It would be apparent to one skilled in the relevant art(s) that various changes in form and detail could be made therein without departing from the spirit and scope of the disclosure. Thus, the present disclosure should not be limited by any of the above-described implementations but should be defined only in accordance with the following claims and their equivalents.

Claims

WHAT IS CLAIMED IS:
1. A computer-implemented method for iterative analysis processing of a stored three dimensional (3D) representation of an object, the method comprising: accessing the stored 3D representation; applying at least a first operation to the accessed 3D representation to obtain a modified 3D representation; applying at least a second operation to the modified 3D representation to obtain at least one characteristic of the modified 3D representation; and applying the at least one characteristic obtained from the at least one modified 3D representation to the at least one 3D representation to obtain feature data of the object from the 3D representation.
2. The method of claim 1, wherein the at least one 3D representation comprises at least one of: partial 3D point clouds; 3D templates; 3D models; 3D meshes; synthetic 3D models; non-synthetic 3D models; synthetic 3D objects; non-synthetic 3D objects; generated 3D models; 3D scans; voxels; continuous functions; and computer-aided design (CAD) files.
3. The method of claim 1, wherein the at least one characteristic of the at least one modified 3D representation is applied to a second 3D representation to obtain the feature data of the object from the second 3D representation.
4. The method of claim 3, wherein the second 3D representation is a modified version of the first 3D representation.
5. The method of claim 1, further comprising applying the at least a first operation and/or the at least a second operation to a plurality of 3D representations to obtain the at least one modified 3D representation.
6. The method of claim 5, further comprising applying the at least one characteristic obtained from the at least one modified 3D representation to the at least one 3D representation to obtain the feature data of the object from the at least one 3D representation.
7. The method of claims 1-5, further comprising applying the at least one characteristic obtained from the at least one modified 3D representation to obtain a plurality of characteristics from the at least one 3D representation.
8. The method of claims 1-5, further comprising applying a plurality of operations to the characteristic obtained from the at least one modified 3D representation to obtain a plurality of characteristics from the at least one 3D representation.
9. The method of claims 1-5, further comprising applying a plurality of operations to the characteristic obtained from the at least one modified 3D representation to obtain the at least one modified 3D representation from the at least one 3D representation.
10. The method of claim 1, wherein the at least a first operation applied to the at least one 3D representation comprises at least one of: down-sampling; segmentation; detecting planar surfaces; removing vertices; obtaining a bounding box; 2D plane projection; obtaining a 3D mesh; reposing; principal component analysis; closest-point registration of vertices; aligning to a template; scaling; slicing; clustering; region growing; filtering; obtaining a 3D skeleton; obtaining a 2D skeleton; obtaining a 3D spline; obtaining a 2D spline; geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
11. The method of claim 1, wherein the at least a second operation applied to the at least one modified 3D representation comprises at least one of: down-sampling; segmentation; detecting planar surfaces; removing vertices; obtaining a bounding box; 2D plane projection; obtaining a 3D mesh; reposing; principal component analysis; closest-point registration of vertices; aligning to a template; scaling; slicing; clustering; region growing; filtering; obtaining a 3D skeleton; obtaining a 2D skeleton; obtaining a 3D spline; obtaining a 2D spline; geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
12. The method of claim 1, wherein the at least one characteristic obtained from the at least one modified 3D representation comprises at least one of: feature data comprising information concerning color, depth, heat, two dimensions (2D), three dimensions (3D), a continuous function, 3D descriptors, spatial distribution, geometric attributes, scale invariant features, shape descriptors, motion, transformations, key points, instances and/or functions.
13. A system for executing iterative analysis processing of a stored three dimensional (3D) representation of an object comprising: a memory configured to store iterative analysis processing instructions; a processor communicatively coupled to the memory and configured to execute the stored instructions initiated by a request to perform the iterative analysis processing of the stored 3D representation; and a communications interface communicatively coupled to the processor and configured to establish communications with a networked entity storing the 3D representation, wherein, in response to receipt of the request, the processor executes the instructions to: access the stored 3D representation; apply at least a first operation to the accessed 3D representation to obtain a modified 3D representation; apply at least a second operation to the modified 3D representation to obtain at least one characteristic of the modified 3D representation; and apply the at least one characteristic obtained from the at least one modified 3D representation to the at least one 3D representation to obtain feature data of the object from the 3D representation.
14. The system of claim 13, wherein the at least one 3D representation comprises at least one of: partial 3D point clouds; 3D templates; 3D models; 3D meshes; synthetic 3D models; non-synthetic 3D models; synthetic 3D objects; non-synthetic 3D objects; generated 3D models; 3D scans; voxels; continuous functions; and computer-aided design (CAD) files.
15. The system of claim 13, wherein the at least a first operation and at least a second operation comprise at least one of: down-sampling; segmentation; detecting planar surfaces; removing vertices; obtaining a bounding box; 2D plane projection; obtaining a 3D mesh; reposing; principal component analysis; closest-point registration of vertices; aligning to a template; scaling; slicing; clustering; region growing; filtering; obtaining a 3D skeleton; obtaining a 2D skeleton; obtaining a 3D spline; obtaining a 2D spline; geometric transformation, mathematical operations, constraining the planar surfaces and/or statistical outlier removal (SOR).
16. The system of claim 13, wherein the at least one characteristic obtained from the at least one modified 3D representation comprises at least one of: feature data comprising information concerning color, depth, heat, two dimensions (2D), three dimensions (3D), a continuous function, 3D descriptors, spatial distribution, geometric attributes, scale invariant features, shape descriptors, motion, transformations, key points, instances and/or functions.
17. The system of claim 13, wherein the at least one characteristic of the at least one modified 3D representation is applied to a second 3D representation to obtain the feature data of the object from the second 3D representation.
18. The system of claim 17, wherein the second 3D representation is a modified version of the first 3D representation.
19. The system of claim 13, further comprising applying the at least a first operation and/or the at least a second operation to a plurality of 3D representations to obtain the at least one modified 3D representation.
20. A non-transitory computer-readable medium comprising computer-executable instructions that cause a system to execute the method according to any one of claims 1-12.
PCT/IB2023/058888 2022-09-08 2023-09-07 Iterative analysis and processing of 3d representations of objects WO2024052862A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP22194722.9 2022-09-08
EP22194722 2022-09-08

Publications (1)

Publication Number Publication Date
WO2024052862A1 true WO2024052862A1 (en) 2024-03-14

Family

ID=83271656

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2023/058888 WO2024052862A1 (en) 2022-09-08 2023-09-07 Iterative analysis and processing of 3d representations of objects

Country Status (1)

Country Link
WO (1) WO2024052862A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160247017A1 (en) * 2010-06-08 2016-08-25 Raj Sareen Method and system for body scanning and display of biometric data
WO2020240497A1 (en) 2019-05-31 2020-12-03 Applications Mobiles Overview Inc. System and method of generating a 3d representation of an object
US20220237880A1 (en) * 2019-05-31 2022-07-28 Applications Mobiles Overview Inc. System and method of generating a 3d representation of an object
WO2022137134A1 (en) * 2020-12-24 2022-06-30 Applications Mobiles Overview Inc. Method and system for automatic characterization of a three-dimensional (3d) point cloud


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23783021

Country of ref document: EP

Kind code of ref document: A1