GB2625398A - Object identification - Google Patents
- Publication number
- GB2625398A (application GB2305006.5 / GB202305006A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- camera
- images
- mount
- tokenised
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N1/32101—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
- H04N1/32128—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title attached to the image data, e.g. file header, transmitted message header, information on the same page or in the same computer file as the image
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B17/00—Details of cameras or camera bodies; Accessories therefor
- G03B17/56—Accessories
- G03B17/561—Support related camera accessories
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/95—Pattern authentication; Markers therefor; Forgery detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N1/32101—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
- H04N1/32144—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
- H04N1/32149—Methods relating to embedding, encoding, decoding, detection or retrieval operations
- H04N1/32267—Methods relating to embedding, encoding, decoding, detection or retrieval operations combined with processing of the image
- H04N1/32283—Hashing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2201/00—Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
- H04N2201/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N2201/3201—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
- H04N2201/3225—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document
- H04N2201/3226—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document of identification information or the like, e.g. ID code, index, title, part of an image, reduced-size image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2201/00—Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
- H04N2201/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N2201/3201—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
- H04N2201/3274—Storage or retrieval of prestored additional information
Abstract
Scanning an object (e.g. artwork) comprises obtaining images 2510 from a first camera (200, fig. 2B); identifying 2520 an area within the images containing features (e.g. using image processing); using a second camera (210, fig. 2B), e.g. a signature camera, having a smaller (narrower) field of view to obtain 2530 an image of the identified area; and converting 2540 the second camera image into a unique tokenised representation, wherein the positions of the cameras are identified when the images are acquired. The object may be illuminated via illuminators (220, 230, fig. 2B) and a visual mark may be provided. Data from the images and camera coordinates may be hashed to generate a fingerprint ID. The features may need to meet one or more criteria, e.g. relating to contrast, size, number, or edges. The object may be rescanned by moving the cameras to the recorded positions, converting the second camera image into a further tokenised representation, and confirming a match between the original and further tokenised representations. This may confirm the object is the same and authentic. The cameras and illuminators may be mounted (via mount 140, fig. 1A) to a motion stage (120, fig. 1A) and a controller may move the stage relative to the object.
Description
Object Identification
Field of the Invention
The present invention relates to a system and method for scanning and identifying objects, and in particular, valuable objects such as artworks.
Background of the Invention
There can be many different reasons why it is important to be able to determine the identity of a particular object. For example, it may be possible for a skilled artist to duplicate an original painting using identical materials. Whilst an art expert may be able to differentiate between original artwork and forgeries or duplicates, this can require significant time and effort and is not always definitive, especially when experts disagree.
Furthermore, the expense (in time and resources) may only be worthwhile for particularly valuable works of art. Therefore, less valuable pieces may be more susceptible to duplication and fraud.
Where artwork is transported or loaned then it can be important for a gallery to ensure that it has received the legitimate item. Therefore, there can be reluctance in lending artworks to lesser known or less trusted institutions without significant security measures in place, such as security guards and physical measures. Similar problems arise when works of art are bought and sold. For example, buyers and their agents may incur the risk of purchasing an elaborate fake rather than the true item.
US2018/0365519 describes a system and method for generating a unique signature of a document or manufactured object in the form of a numeric representation. The signature is used to identify the object at another time. The signature is generated from unique features of the object. However, it can be difficult to locate the features on the object in order to accurately regenerate the numeric representation and confirm the identity of the object.
Therefore, there is required a method and system that overcomes these problems.
Summary of the Invention
A device is provided for scanning objects. The objects can be works of art (e.g., paintings, drawings, and sculptures) but any unique object can be scanned and characterised. The device acquires images of a significant part of the object and close-up or higher resolution images of a small portion of the object. These smaller portions can contain features and the features can be automatically identified based on their visual or optical characteristics. This may be achieved by two separate cameras with different fields of view and/or focal lengths or a single camera that can be configured (e.g., mechanically) to obtain the two types of images.
One or more lamps or light sources illuminate the object when the wide-angle images are acquired (covering substantially the whole object, or a large section of it should the object be larger than the device), preferably without aberrations or shadows. A further lamp or light source (or the same one in a different physical configuration) provides a different illumination to the object when the narrow-angle images are obtained. The narrow-angle images are of parts of the object included in the wide-angle images.
The camera or cameras are moved so that images (from both cameras) can be acquired from different parts of the object. A controller coordinates the movement and image acquisition so that the position of the camera or cameras (in three-dimensional space) can be determined, recorded and repeated when each image is acquired. Preferably, this can be relative to a pre-determined datum point on the object (e.g., an existing mark or small permanent feature that is preferably recognisable to the human eye, or at a measured location within the object).
The output from the object scanner may be sent to an external server. For example, these data (images and location information) may be transmitted over the internet. The external server can generate a unique fingerprint identifier from the image data and store the fingerprint identifier. The fingerprint identifier may be a unique tokenised or numerical representation of the image or images obtained by the object scanner. The fingerprint identifier may be stored together with information identifying the object. The resulting information or token may be digitally signed. The digitally signed information may be added to a blockchain and retrieved at a later time to identify the object by taking further images using the object scanner at the same positions (e.g., relative to the datum point) and generating a further fingerprint to be compared to the original. For example, the fingerprint or fingerprints may be recaptured and compared with the original fingerprint(s) within the external server. If they match then the object can be positively identified.
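By way of illustration only, the following minimal C# sketch shows how a fingerprint identifier could be digitally signed before being stored (for example on a blockchain) and verified later. The fingerprint value, key handling and use of ECDSA are assumptions for illustration; the present disclosure does not prescribe a particular signing scheme.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Illustrative sketch only: sign a fingerprint identifier so the stored record can be
// checked for tampering later. The fingerprint value and key management are placeholders.
class FingerprintSigner
{
    static void Main()
    {
        string fingerprintId = "3F2C9A17...";                 // hypothetical tokenised representation
        byte[] payload = Encoding.UTF8.GetBytes(fingerprintId);

        using var key = ECDsa.Create(ECCurve.NamedCurves.nistP256);
        byte[] signature = key.SignData(payload, HashAlgorithmName.SHA256);

        // The signed record (fingerprint, signature and public key) is what would be stored;
        // verification at a later date repeats the check below.
        bool valid = key.VerifyData(payload, signature, HashAlgorithmName.SHA256);
        Console.WriteLine($"Signature valid: {valid}");
    }
}
```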
In accordance with a first aspect there is provided a system for scanning an object, the system comprising: a first camera; a second camera having a field of view smaller than the first camera; a first illuminator configured to provide illumination for the first camera; a second illuminator configured to provide illumination for the second camera; a mount arranged to support the first camera, the second camera, the first illuminator and the second illuminator, wherein the first camera and the second camera are fixed relative to each other by the mount; a motion stage configured to move the mount relative to the object; and a controller configured to obtain images from the first and second cameras and move the mount using the motion stage. Therefore, both images of the whole object (or a significant part of the object) can be obtained as well as close-up images of a portion of the object. The position of the cameras when the images are acquired may be determined and stored so that similar images may be acquired (e.g., automatically) at a later time. A datum or identifiable location on the object may be determined. This can be used to automatically place the cameras at the same position at a later time.
Preferably, the system may further comprise a visual mark, laser line, optical mark or spot generator configured to illuminate at least a portion of an object within fields of view of the first and second cameras. The visual mark may be used to provide a guide to an operator to locate a datum point on the object or define a boundary of an area or volume to be scanned. The visual mark does not need to be recorded within any images and can be turned off when images are acquired. An image of the datum point and surround area may be acquired. The visual mark generator may form an image of crosshairs, dot, circle, star, or another shape. The fields of view of the first and second cameras may include an image of the visual mark. The visual mark may be non-destructive.
Preferably, the mount may be further arranged to support the visual mark generator. Therefore, the visual mark generator can be fixed relative to the cameras and moved with them. The visual mark generator (e.g., a laser line generator) may provide a visible (or infrared) non-destructive mark that moves with the cameras. The visual mark generator can be used to align the mount (and so the first and the second cameras) with a prominent feature on the object, such as a nose of a human figure, when setting the datum point (i.e., x, y, z position of the mount as controlled by the motion stage). The datum point may be a spot or location on the object that contains a permanent feature (e.g., brush stroke, crease, imperfection, paint spot, hole, etc.). The motion stage may be operated so that the visual mark aligns with a feature or mark on the object. The motion stage may contain sensors or stepper motors so that its location can be accurately known and repeatably set. Image processing techniques may be used to automatically align the visual mark with the feature on the object by using the motion stage. Therefore, it may only be necessary for an operator to approximately align the visual mark with the feature (e.g., a datum point) and the system can more accurately align them by moving the visual mark generator attached to the mount.
Optionally, the first illuminator may comprise a first rectangular or square light source within a first rectangular or square cross-sectional lightguide, wherein an optical axis of the first rectangular cross-sectional lightguide is at an angle greater than 10 degrees from an optical axis of the first camera. Therefore, illumination can be provided to the object without generating shadows or other aberrations (that may be confused with features of the object) within the acquired images. Other shape cross-sections for the lightguide may be used (e.g., circle, parabola, etc.).
Optionally, the first illuminator may further comprise a second rectangular light source within a second rectangular cross-sectional lightguide, wherein an optical axis of the second rectangular cross-sectional lightguide is at an angle greater than 10 degrees from an optical axis of the first camera and the optical axis of the first and second rectangular cross-sectional lightguides are non-parallel. This provides more even illumination. Other shape cross-sections for the lightguide may be used (e.g., circle, parabola, etc.).
Preferably, the first camera may be arranged between the first and the second light sources. The light sources and first camera may be vertically or horizontally arranged, preferably with the first camera equidistant from both light sources.
Optionally, the second illuminator may be a ring illuminator arranged around the second camera. This provides even illumination to the entire object or a large proportion of the object.
Preferably, the system may further comprise at least one communications interface configured to transmit data corresponding to the obtained images to an external computer system. This may be a modem, Wi-Fi, ethernet, network or another communications interface. Therefore, the object scanner can be limited or restricted to obtaining images and location information (of the cameras/mount) and the external computer system can process these data. This improves security as the object scanner can be operated by a third party. The images may be transmitted directly to the external computer by means of a secure data connection to improve security, without passing through any separate device.
Optionally, the mount may further comprise an arm pivotably attached to the motion stage. The arm may also form part of the motion stage and provide support for an axial or stepper motor. Therefore, the arm may be stowed for travel and storage.
Preferably, the system may further comprise a chassis, wherein the motion stage is fixed to the chassis. The chassis may support the other components and act as a stable base during image acquisition. The chassis may be metal (e.g., aluminium).
Optionally, the system may further comprise a plurality of foldable legs attached to the chassis. Preferably, there may be four legs and they may be self-levelling to provide a stable base when images are acquired.
Optionally, the system may further comprise a plurality of caster wheels attached to the chassis. There may be two wheels or caster wheels attached at one end of the chassis and angled so that the chassis may be tilted at an angle (e.g., 10-45 degrees to the ground) when the wheels engage with the ground. There may be a handle or handles located at an end opposite the wheels to pull the object scanner along the ground, for example.
Optionally, the system may further comprise a removable cover. This may be a flexible cover or preferably a rigid cover to provide protection during storage and transit.
Optionally, the system may further comprise a contact detector attached to the mount and configured to stop the motion stage when contact is detected. This prevents the mount and/or attached cameras and illuminators from hitting the scanned object. There may also be a distance sensor on the mount (e.g., laser, ultrasonic, etc.) that can be used to automatically maintain a predetermined or minimum distance to the object.
According to a second aspect, there is provided a method for scanning an object, the method comprising the steps of: obtaining one or more images from a first camera; identifying at least one area within the one or more images containing a plurality of features; using a second camera having a field of view smaller than the first camera to obtain an image of the at least one identified area; converting the image obtained by the second camera into a unique token, numerical or encoded representation, wherein positions of the first camera and the second camera are identified when the images are acquired. Therefore, objects can be repeatably and reliably identified. The tokenised or numerical representation may take the form of numbers and/or characters or other digital token. Binary, decimal, hexadecimal or other base number systems may be used. Therefore, the tokenised or numerical representation may itself take the form of numbers, characters, or purely computer readable data. The unique numerical, encoded or tokenised representation uses the image as a starting point and may use encryption or hashing techniques to generate the tokenised, numerical or encoded representation. The method may be repeated (e.g., at a later time or date) by positioning the cameras at the same position (e.g., by using a datum point), recreating the tokenised representation by acquiring new images (of the same part of the object containing the features) and comparing the original tokenised representation with the newly (and independently) generated tokenised representation. If there is a match (or if the match is within certain parameters or tolerance) then the object can be determined as being the same and authentic.
Optionally, the method may further comprise the step of associating the unique tokenised or numerical representation with the object. This may be achieved by recording a name, owner, artist, gallery, or other identifying information of the object together with the unique tokenised or numerical representation.
Optionally, the method may further comprise the steps of: before obtaining the one or more images from the first camera, illuminating the object with a visual mark at or near a feature of the object when the visual mark and the feature of the object are within the field of view of the second camera; and recording a position of the first camera and the second camera when the visual mark is at or near the feature of the object. The visual mark may be generated by a laser (visible or infrared) or other non-destructive light source. The position of the camera may be determined using an x, y, z motion stage to move and locate the camera. The visual mark may also be used to determine and set boundary positions (i.e., the extent on the object that the mount can be moved and so a range within which the cameras can obtain images).
Preferably, the visual mark may be generated by a visual mark generator that is fixed to a mount that supports the first camera and the second camera. The mount may fix the visual mark generator to the first and second cameras. The visual mark generator may be arranged so that it generates a visual mark (e.g., crosshairs or a dot) at the same position within the field of view of either or both cameras (and moves over the object as the cameras move to a different position). This process may be used to locate the feature on the object as a datum point. The position of the cameras (i.e., mount and visual mark generator) can be recorded when the visual mark is aligned with the feature. These positional data can be used as a datum point. Any further movement of the cameras can be recorded as steps (e.g., stepper motor increments) or distances from this datum point. Therefore, the position of the camera when the images containing the plurality of features are acquired can be known, recorded, and reproduced relative to the datum point or points.
Optionally, the method may further comprise the step of transmitting data corresponding to the images obtained by the second camera to an external computer system, wherein the step of converting the image obtained by the second camera into a unique numeric representation is carried out within the external computer system. Therefore, only the steps of acquiring the images (and determining position information) are carried out using a local device (object scanner). Processing these images (and position information) is carried out within a remote external computer system. This improves security.
Optionally, the method may further comprise the step of: hashing data of one or more images of one or more features obtained by the second camera and coordinates of the second camera when the one or more images were obtained to generate a hash value. This provides an efficient method for storing unique data.
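As an informal sketch only (the actual signature algorithm may be that of US2018/0365519 or another process), the hashing step described above could combine the image bytes with the recorded camera coordinates as follows; the file name and coordinate format are assumptions for illustration.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

// Sketch only: combine the bytes of a signature-camera image with the x, y, z coordinates
// of the mount when the image was captured, and hash the result to a fixed-length value.
class FingerprintHash
{
    static string HashImageWithPosition(byte[] imageBytes, double x, double y, double z)
    {
        byte[] position = Encoding.UTF8.GetBytes($"{x:F3},{y:F3},{z:F3}");
        byte[] combined = new byte[imageBytes.Length + position.Length];
        Buffer.BlockCopy(imageBytes, 0, combined, 0, imageBytes.Length);
        Buffer.BlockCopy(position, 0, combined, imageBytes.Length, position.Length);
        return Convert.ToHexString(SHA256.HashData(combined));
    }

    static void Main()
    {
        byte[] image = File.ReadAllBytes("roi_001.png");       // hypothetical region-of-interest image
        Console.WriteLine(HashImageWithPosition(image, 123.250, 456.750, 80.000));
    }
}
```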
Optionally, the method may further comprise the step of saving the hash value together with data identifying the object. This may be stored within a public or private blockchain. Therefore, unique objects can be repeatably and reliably identified.
Optionally, the method may further comprise the step of confirming that the plurality of features meets one or more criteria. For features to be capable of providing the basis for repeatably and reliably generating unique tokenised or numerical representations, they need to meet a set of criteria. For example, their contrast, size, number, edges, etc. may be compared with the set of criteria.
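Purely as an illustrative sketch (the thresholds below are assumptions, not values taken from this document), a candidate region could be tested against such criteria as follows:

```csharp
using System;

// Illustrative sketch only: a candidate region of interest (ROI) is accepted when simple
// measures of contrast, size, feature count and edge count all exceed assumed minimums.
record RoiCandidate(double Contrast, double AreaMm2, int FeatureCount, int EdgeCount);

class RoiFilter
{
    static bool MeetsCriteria(RoiCandidate roi) =>
        roi.Contrast >= 0.2 &&       // assumed minimum normalised contrast
        roi.AreaMm2 >= 4.0 &&        // assumed minimum physical area
        roi.FeatureCount >= 5 &&     // assumed minimum number of distinct features
        roi.EdgeCount >= 20;         // assumed minimum edge count

    static void Main()
    {
        var roi = new RoiCandidate(Contrast: 0.35, AreaMm2: 9.0, FeatureCount: 12, EdgeCount: 40);
        Console.WriteLine(MeetsCriteria(roi) ? "ROI accepted" : "ROI rejected");
    }
}
```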
Preferably, the method may further comprise repeating the steps of: moving the second camera to the same position used to obtain the image from the second camera; using the second camera to obtain a further image of the at least one identified area at the same position; converting the image obtained by the second camera into a further tokenised representation; and confirming a match between the further tokenised representation and the original tokenised representation. The further tokenised representation can be generated according to any or all of the above method steps.
Preferably, the system may further comprise an external computer system (e.g., accessible over a network such as the internet) and means for carrying out the steps of any of the above methods.
The methods described above may be implemented as a computer program comprising program instructions to operate a computer. The computer program may be stored on a computer-readable medium, including a non-transitory computer-readable medium.
The computer system may include a processor or processors (e.g. local, virtual or cloud-based) such as a Central Processing Unit (CPU), and/or a single or a collection of Graphics Processing Units (GPUs). The processor may execute logic in the form of a software program or programs. The computer system may include a memory including volatile and non-volatile storage medium. A computer-readable medium may be included to store the logic or program instructions. The different parts of the system may be connected using a network (e.g., wireless networks and wired networks). The computer system may include one or more interfaces. The computer system may contain a suitable operating system such as UNIX, Windows (RTM) or Linux, for example.
It should be noted that any feature described above may be used with any particular aspect or embodiment of the invention.
Brief description of the Figures
The present invention may be put into practice in a number of ways and embodiments will now be described by way of example only and with reference to the accompanying drawings, in which:

Fig. 1A shows a perspective view of a system including an object scanner having a mount including a first camera and a second camera, an external computer system, and a tablet computer;
Fig. 1B shows a perspective view of the object scanner of Figure 1A in a floor mounted configuration;
Fig. 1C shows a perspective view of the object scanner of Figure 1A in a raised configuration;
Fig. 2A shows a perspective view of the mount of Figure 1A;
Fig. 2B shows a front view of the mount of Figure 1A;
Fig. 2C shows a further perspective view of the mount of Figure 1A including a cross-sectional line X-X;
Fig. 2D shows a cross-sectional view of the mount of Figure 1A along the line of Figure 2C;
Fig. 3 shows a front view of the object scanner of Figure 1B with the mount in a first configuration;
Fig. 4 shows a front view of the object scanner of Figure 1B with the mount in a second configuration;
Fig. 5A shows a perspective view of the object scanner of Figure 1A in a stowed configuration including a cover;
Fig. 5B shows an underside view of the object scanner of Figure 1A in a stowed configuration;
Fig. 5C shows a perspective view of controls of the object scanner of Figure 1A used to move the mount;
Fig. 6 shows screenshots of a mobile app for operating the object scanner of Figure 1A;
Fig. 7A shows a method of operating the object scanner of Figure 1A;
Fig. 8A shows an image of an object to be scanned;
Fig. 8B shows a schematic or simulated view of the object of Figure 8A being scanned by the object scanner of Figure 1;
Figs. 9A and 9B show example images generated of an object using different illumination techniques;
Fig. 10 shows a sequence diagram of a method for operating the object scanner of Figure 1;
Fig. 11 shows a sequence diagram of a further method for operating the object scanner of Figure 1;
Fig. 12 shows a sequence diagram of a further method for operating the object scanner of Figure 1;
Fig. 13 shows a flowchart of a method for operating the object scanner of Figure 1;
Fig. 14 shows a flowchart of a further method for operating the object scanner of Figure 1;
Fig. 15 shows a flowchart of a further method for operating the object scanner of Figure 1, including steps of connecting the object scanner of Figure 1 to a network;
Fig. 16 shows a flowchart of a further method for operating the object scanner of Figure 1, including steps of the tablet computer obtaining job data from the external computer system;
Fig. 17 shows a flowchart of a further method for operating the object scanner of Figure 1, including steps for capturing a datum;
Fig. 18 shows a flowchart of a further method for operating the object scanner of Figure 1, including steps for setting bounds;
Fig. 19 shows a flowchart of a further method for operating the object scanner of Figure 1, including steps for scanning an object;
Fig. 20 shows a flowchart of a further method for operating the object scanner of Figure 1, including steps for capturing regions of interest on the object;
Fig. 21 shows a flowchart of a further method for operating the object scanner of Figure 1, including steps for capturing a datum;
Fig. 22 shows a flowchart of a further method for operating the object scanner of Figure 1, including steps for capturing regions of interest;
Fig. 23A shows a wide-angle image of an object being scanned;
Fig. 23B shows a set of narrow-angle images of the object being scanned;
Fig. 23C shows a magnified image of the object being scanned;
Fig. 24 shows a table of data being processed to generate a tokenised representation of the object;
Fig. 25 shows a flowchart of a high-level method for operating the object scanner of Figure 1; and
Fig. 26 shows a flowchart of a high-level method for generating a tokenised representation of the object.
It should be noted that the figures are illustrated for simplicity and are not necessarily drawn to scale. Like features are provided with the same reference numerals.
Detailed description of the preferred embodiments
A system 10 for scanning objects (in particular artworks) includes an object scanner 100 and an external computer 20 or server (e.g., Artclear server) that communicates with the object scanner 100 over a network such as the internet. The communications between the object scanner 100 and the external computer 20 may be encrypted for security purposes.
Figure 1A shows a schematic diagram of the system 10, including the object scanner 100 and the external computer 20. The external computer 20 receives data from the object scanner 100 and processes these data to generate a tokenised or numerical representation of the object. Further scans of the same object will generate the same or a similar tokenised or numerical representation and so this can be used to determine that the same object is being scanned at a later date. This provides secure and repeatable authentication of the scanned object. The original and further tokenised or numerical representations can be compared. If they are within a predetermined similarity or tolerance then the authenticity of the object can be confirmed.
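As a simple illustration of a tolerance-based comparison (assuming the tokenised representation is a fixed-length bit string, and with an arbitrary threshold), the match could be decided as follows; the real matching rule is defined by the signature algorithm itself and is not shown here.

```csharp
using System;

// Sketch of the comparison step: two fixed-length tokens match if they differ in no more
// than an assumed number of bits (Hamming distance). Threshold and data are illustrative.
class TokenMatcher
{
    static int HammingDistance(byte[] a, byte[] b)
    {
        if (a.Length != b.Length) throw new ArgumentException("Tokens must be the same length");
        int distance = 0;
        for (int i = 0; i < a.Length; i++)
            distance += System.Numerics.BitOperations.PopCount((uint)(a[i] ^ b[i]));
        return distance;
    }

    static bool IsMatch(byte[] original, byte[] rescan, int maxBitsDifferent = 12)
        => HammingDistance(original, rescan) <= maxBitsDifferent;

    static void Main()
    {
        byte[] original = { 0xA5, 0x3C, 0x7E, 0x01 };
        byte[] rescan   = { 0xA5, 0x3C, 0x7A, 0x01 };   // one bit flipped
        Console.WriteLine(IsMatch(original, rescan) ? "Object authenticated" : "No match");
    }
}
```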
The object scanner 100 includes a body or chassis 110, which provides a stable fixing point for other components. The chassis 110 provides stability and structure to the object scanner 100. A motion stage 120 is securely fixed to the chassis 110. The motion stage 120 is arranged to move an arm 130 (a component of the motion stage 120) across a surface parallel to a surface of the chassis 110. In use, the arm 130 is perpendicular to the chassis and the motion stage 120 but can be folded at a hinge or pivot 160 so that an axis of the arm 130 becomes parallel and adjacent to the motion stage 120 for transportation and storage.
Attached to the arm 130 is a mount 140 that supports cameras, a visual mark generator (e.g., laser line generator), and a distance sensor (e.g., ultrasonic or radar). These components will be described in further detail. In any case, because the mount 140 is securely fixed to the arm 130 (but can also move up and down the arm 130), movement of the motion stage 120 also moves the cameras and laser line generator at the same time and by the same amount. The motion stage 120 may include digitally controlled stepper motors and/or sensors to detect and reliably repeat the same movements and distance changes.
Four foldable legs 170 are pivotably fixed to the chassis 110 by hinges or another arrangement. These hinges are located at one end of each leg 170 adjacent the chassis 110. At a distal end of each leg 170 to the hinge is an adjustable foot that can be extended along an axis of each leg 170 to accommodate for uneven surfaces upon which the object scanner 100 is placed. A pair of castor wheels 180 is located at one end of the chassis 110 and may be used to wheel the object scanner along a surface when the object scanner is in a stowed or transportation configuration.
The stowed configuration of the object scanner 100 (see Figure 5A) involves folding each leg 170 so that they are adjacent the chassis 110. The arm 130 is also folded along pivot 160 so that it is flat against the motion stage 120. The arm 130 is stowed when it is moved by the motion stage 120 to one end (to the left of Figure 1A) so that the arm 130 does not extend beyond an edge of the chassis 110 when folded. The object scanner 100 may be operated wirelessly using a tablet computer 30 or another portable device. The tablet computer 30 does not store any data or images of the object but may be used to transfer information to the external computer 20. Operation by the tablet computer 30 may be indirect and mediated by the external computer 20. The tablet computer 30 may receive wide-angle images but does not receive the narrow-angle images or tokenised or numerical representations of the objects (signature data).
Figures 1B and 1C show the object scanner 100 when in a floor mounted configuration and in a raised configuration, respectively. When in the floor mounted configuration, the legs 170 are folded and locked in place. When in a raised configuration, the legs 170 are extended away from the chassis 110 and may also be locked in place. Release latches 195 release the legs 170 from either configuration.
Figures 2A-2D show different views of the mount 140 and the components contained within the mount 140. Figure 2A shows a perspective view of the mount 140. The mount acts as a case or support for a wide-angle camera 200 and a narrow-angle camera 210. In this context, the field of view of the wide-angle camera 200 is greater than the field of view of the narrow-angle camera 210. The wide-angle camera 200 may be described as a first camera and the narrow-angle camera 210 may be described as the second camera. The second camera may also be known as a signature camera as its images are used to generate tokenised representations of the object (a signature). In these figures, the first illuminator 220 is shown above and below a lens of the first camera 200 but may be configured either horizontally or vertically either side of the lens of the first camera 200. The first illuminator 220 is configured to provide illumination when an image is being acquired by the first camera 200. The first illuminator 220 is provided adjacent to the first camera 200. A second illuminator 230 takes the form of a ring illuminator and is configured around a lens of the second camera 210. The second illuminator 230 is configured to provide illumination when an image is being acquired by the second camera 210 (e.g., a smaller area of the object).
A contact detector 240 or low force contact switch is affixed to the mount 140. In this example configuration, the contact detector 240 is located adjacent the second camera 210. When an object is placed in front of the first camera 200 and the second camera 210, the contact detector 240 is arranged so that, as the motion stage 120 moves the mount 140 towards the object, the contact detector will make contact with the object before any part of the mount 140 or cameras impacts the object. The contact detector 240 is configured so that it prevents movement of the motion stage 120 once any force is applied to the contact detector 240. This may be achieved by physically interrupting power or using software to trigger arresting any movement. This prevents the mount 140 and/or either camera from making contact with the object with a force capable of damaging it.
Figure 2C shows a further perspective view of the mount including a dashed line X-X shown through the first camera 200 and the first illuminator 220.
Figure 2D shows a cross-sectional view through the mount 140 along the line X-X of Figure 2C. Figure 2D shows an optical axis of the first camera 200 and a cross-section through the first illuminator 220.
As described previously, the first illuminator 220 comprises two separate components either side of the first camera 200. In this example implementation, the two components of the first illuminator 220 are above and below the axis of the first camera 200 but may be in different configurations. The first illuminator 220 comprises two light sources 240, which in this example are rectangular light emitting diodes (LEDs) or bar lights. Each LED 240 is at a distal end of a rectangular or square cross-sectional light guide 250 (an empty tube in this example). Each light guide 250 is arranged to direct light emitted by the LED 240 towards the optical axis of the camera 200. The light from each light guide 250 is directed to intersect along the optical axis of the first camera 200 between a sensor 260 of the first camera 200 and an object to be scanned (not shown in this figure). Therefore, this lighting arrangement reduces or eliminates shadows and other artefacts from images acquired by the first camera 200. An angle between an axis of each light guide 250 and the optical axis 270 of the first camera 200 may be greater than 10 degrees and preferably 45 degrees +/- 5 degrees. The angle of incidence of the LEDs 240 is such that the direction of light, channelled by the light guide 250, does not reflect into the camera at the required working distance. Furthermore, the light guide 250 prevents light from the light source 240 reflecting directly off the object and into the camera, as can be seen in Fig 2C.
Figure 3 shows a front view of the object scanner 100 with the mount at a lower end of the arm 130 (closest to the chassis 110). A motor (not shown in this figure) may drive the mount 140 along the arm 130. The motor and arm may form part of the motion stage 120 (y-axis) with a separate motor and guide arranged to translate the arm across the chassis (x-axis). A further motorised component (not shown in the figure) may extend the mount 140 away from the arm (z-axis). A controller or electronic circuit (not shown in this figure) may be used to control the motion stage 120, including the motor within the arm 130 and attached to the mount 140. Sensors, stepper motors, or other means may be used to determine the position of the mount 140 and/or arm 130 so that a point in space of the mount 140 (and so each of the first and second cameras) may be recorded and repeated accurately. Therefore, each camera may be placed in the same position relative to an object to be scanned. Appropriate reference points may be used to achieve this. Furthermore, the position of each camera (i.e., X, Y and Z coordinates) may be determined and recorded whenever an image is acquired from each camera. This is because the controller coordinates both the position of the mount 140 and arm 130 and the acquisition of images from the cameras.
Figure 4 shows a further front view of the object scanner 100 with the mount (and first and second cameras) at the top of the arm 130.
Figure 5A shows a stowed configuration of the object scanner 100. This includes the folded legs 170 and the arm 130 after it has been pivoted around hinge 160 so that it lies flat against, and within, the chassis 110. A cover 500 encloses most of the components of the object scanner 100, leaving only a portion of the chassis 110 and the castor wheels 180 exposed. Lifting handles 190 facilitate multiple person lifts of the whole device.
Poppers (dots) on the cover 500 are the means of fixing the cover to the base. Also in this configuration, the mount (camera/vision head) can be removed from arm 130 and stowed within a foam lined aperture in the chassis 110.
The object scanner 100 provides functionality to capture images for registering and authenticating objects such as artwork. This is performed by the cameras in the mount 140 or vision head. The mount 140 can be controlled by the operator in certain modes to move it into the correct position.
The operator can use physical switches within the object scanner 100 to achieve this movement. In this example, no motion is directly controlled from the tablet computer 30 other than providing an indication that the external computer system 20 can start automated sequences on the object scanner 100. Movement may be made using the motion stage 120, with movements in the x-axis (e.g., horizontal, left and right) achieved across the chassis 110 and the y-axis (e.g., vertical, up and down) along the arm 130. In order to focus the cameras, the mount 140 can move in the z-axis (perpendicular to the arm 130) to be closer or farther away from the object. The z-axis control can be automated, providing auto focus for the cameras.
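A minimal sketch of the automated z-axis focusing idea is shown below; the working distance, step size and hardware interfaces (ReadDistanceMm, MoveZ) are hypothetical placeholders rather than the actual control software.

```csharp
using System;

// Hypothetical sketch of z-axis auto-focus: step the mount towards or away from the object
// until the distance sensor reads the lens working distance, within a tolerance.
class AutoFocus
{
    const double TargetDistanceMm = 110.0;   // assumed working distance of the lens
    const double ToleranceMm = 0.2;
    const double StepMm = 0.1;

    static double _distanceMm = 112.3;                           // simulated sensor reading
    static double ReadDistanceMm() => _distanceMm;
    static void MoveZ(double deltaMm) => _distanceMm -= deltaMm; // simulated stage move (positive = towards object)

    static void Focus(int maxSteps = 500)
    {
        for (int i = 0; i < maxSteps; i++)
        {
            double error = ReadDistanceMm() - TargetDistanceMm;
            if (Math.Abs(error) <= ToleranceMm) return;          // in focus
            MoveZ(error > 0 ? +StepMm : -StepMm);                // step towards the target distance
        }
        throw new TimeoutException("Auto-focus did not converge");
    }

    static void Main()
    {
        Focus();
        Console.WriteLine($"Focused at {ReadDistanceMm():F1} mm");
    }
}
```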
Transport or caster wheels 180 and lifting handles 190 are used to assist in transporting the object scanner 100. The object scanner 100 can be set up in two configurations depending on the object size and position (see Figures 1B and 1C).
The mount 140 or vision head has two cameras, as previously described. The first (wide-angle) camera 200 is used to identify areas of the object to be inspected, and the second (signature) camera 210 scans the object with preferably 0.5mm resolution, to capture images of areas or regions of interest (ROI).
The mount 140 or vision head has built-in lighting to provide uniform lighting. It uses a distance sensor and image processing to automatically focus the cameras and adjust the lighting. This lighting conforms to museum standards for UV and IR. The laser 270 projects a cross hair or other visual mark for use during datum acquisition, setting boundary positions, and to assist the operator in aligning the mount 140 or vision head to the correct position during signaturing (scanning) and authenticating objects.
The low force contact switch (contact detector 240) is mounted on the mount 140 or vision head to prevent further contact of the mount 140 or vision head with the object.
Additionally, there is a distance sensor 260 to enable movement in the z-axis to set the correct focal distance for the cameras.
A mobile application loaded on the tablet computer 30 connects wirelessly to the object scanner 100 using Wi-Fi or Bluetooth (RTM). Bluetooth (RTM) may be limited to entering passwords (e.g., a Wi-Fi password). The object scanner 100 should have no direct communication with the tablet computer 30 during its operation to improve security. A datum point may be a recognisable feature point on the object. The operator can move or jog back to the same point during an authentication test. For example, the datum may be a centre of a pupil of the left eye on a portrait. This information may be stored as text to assist an operator to find the datum point at least roughly. The laser 270 may provide a visual mark on the object and the operator can manipulate the position of the visual mark until it lines up with the datum point. It is not necessary for the visual mark to perfectly line up with the datum point. An approximation will be sufficient as long as they are contained within the same field of view of the first camera 200. Image processing techniques may be used to correct for inaccuracies. The motion stage 120 may be operated automatically to compensate and align more accurately the visual mark with the datum feature on the object during authentication, as well as automatically setting the distance from the object (z-axis) for correct focus using the distance sensor 260. The position of the mount 140 may be recorded (e.g., using x, y, z coordinates of the motion stage 120) when a signal is received by the operator pressing a 'Set' button to indicate that the visual mark is aligned with the datum point.
Bounds may define an outer periphery region of the object or artwork within which the scanner will operate. Figure 5C shows controls or switches within the object scanner used to move the mount 140 and set the datum and bounds points ("set" button). Position information is transmitted directly to the external computer 20. The bounds may be set by the operator using the controls. Bounds may include a sub section of a large piece of artwork, or the entirety of a small piece. The bounds (e.g., three or more points depending on object shape) should not include the edge of a canvas, any picture mount or picture frame. Setting the bounds too small may reduce the number of regions of interest that can be found and can reduce the quality of the generated fingerprints. It may also stop the scanner from reaching certain regions of interest. This can also mean that the object cannot be reverified if the bounds are set too small at that time. This may be achieved by moving or jogging the position of the mount 140 (and visual mark provided by the laser 270) to each bound position or corner and pressing a 'Set' button on the object scanner controls for each position.
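For illustration, the datum and bounds capture could be modelled as follows; the data structure, coordinate values and method names are assumptions rather than the actual implementation.

```csharp
using System;
using System.Collections.Generic;

// Sketch of an assumed data model: the mount position reported by the motion stage is
// recorded when 'Set' is pressed, once for the datum and once per bound point. Later
// camera positions can then be expressed relative to the datum so the scan can be
// reproduced during authentication. All values are illustrative.
record MountPosition(double X, double Y, double Z);

class ScanSetup
{
    public MountPosition? Datum { get; private set; }
    public List<MountPosition> Bounds { get; } = new();

    public void SetDatum(MountPosition p) => Datum = p;
    public void AddBound(MountPosition p) => Bounds.Add(p);

    // A region-of-interest position expressed as an offset from the datum point.
    public MountPosition RelativeToDatum(MountPosition p)
    {
        var d = Datum ?? throw new InvalidOperationException("Datum not set");
        return new MountPosition(p.X - d.X, p.Y - d.Y, p.Z - d.Z);
    }
}

class Example
{
    static void Main()
    {
        var setup = new ScanSetup();
        setup.SetDatum(new MountPosition(250.0, 410.0, 95.0));   // operator presses 'Set' at the datum
        setup.AddBound(new MountPosition(100.0, 100.0, 95.0));   // then at each bound position
        setup.AddBound(new MountPosition(400.0, 100.0, 95.0));
        setup.AddBound(new MountPosition(400.0, 600.0, 95.0));

        MountPosition roi = setup.RelativeToDatum(new MountPosition(300.0, 450.0, 95.0));
        Console.WriteLine($"ROI offset from datum: ({roi.X}, {roi.Y}, {roi.Z}) mm");
    }
}
```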
Upon instruction using the app on the tablet computer 30, the object scanner 100 operates automatically, first passing over the whole surface of the boundary region, then focussing on regions of interest (ROIs). A composite image generated by tiling individual images from the first (wide-angle) camera 200 will appear on a screen of the tablet computer 30 as the scan progresses but is preferably not stored. Figure 6 shows screenshots of the app on the tablet computer 30 as the scan progresses.
Whilst the object scanner obtains images and the tablet computer 30 provides feedback to the operator, the external computer 20 generates the tokenised or numerical representation of the object. The signature images are not sent to the tablet computer 30.
Functional properties of the hardware and software used to scan objects include:
- Server connection via the internet;
- Repeatable drive accuracy;
- Set a datum on an image;
- Set a boundary for scanning and authenticating;
- Boundary must apply to both physical movement and finding features;
- Find features;
- Regions of interest are found accurately;
- Regions of interest are found repeatably;
- Identified regions of interest are suitable;
- Ability to get good images with the signature camera;
- Repeatable and dependable auto focus routine;
- Able to match the datum back again;
- Able to match back to within a margin of error caused by the initial human position match;
- Able to re-image signature regions;
- Able to re-position to within a margin of error of the original region of interest and datums;
- Able to re-focus on the region of interest such that the region of interest can be compared to the original image; and
- Error handling.
In an example implementation the object scanner 100 provides:

Wide Area Vision (first camera 200):
- Using a single camera;
- Incremental scanning of artwork within manually specified bounds to provide an overview image of the artwork;
- Potential signature sites are automatically identified within the scan images;
- Camera to be calibrated during build; calibration of the workspace is not required when in use;
- Camera selection to include suitable lighting selection; and
- Global shutter camera (captures the entire image in one shorter exposure, reducing motion blur and environmental lighting fluctuation).

Signature Camera (second camera 210):
- Appropriate lens and camera choice to facilitate signature software operation. The camera and lens combination ensures that the optics provide a 1:1 ratio between the field of view at the working distance of the lens and the actual sensor size, so that each pixel is 3.45 µm (see the sketch after this list);
- Utilising a telecentric lens to avoid perspective distortion;
- Dedicated local lighting, calibratable to provide uniform brightness across different systems; and
- Camera to be calibrated during build; calibration of the workspace is not required when in use.
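For illustration, the field of view implied by the 1:1 optics follows directly from the pixel pitch; the sensor resolution used below is an assumed example, as the signature camera's pixel count is not stated here.

```csharp
using System;

// With a 1:1 telecentric lens the field of view equals the sensor size, so each pixel
// images a 3.45 um square of the object. The sensor resolution below is an assumption.
class SignatureOptics
{
    static void Main()
    {
        const double pixelPitchUm = 3.45;
        const int widthPx = 2448, heightPx = 2048;   // hypothetical sensor resolution

        double fovWidthMm = widthPx * pixelPitchUm / 1000.0;
        double fovHeightMm = heightPx * pixelPitchUm / 1000.0;
        Console.WriteLine($"Field of view approx. {fovWidthMm:F1} mm x {fovHeightMm:F1} mm");
        // approx. 8.4 mm x 7.1 mm for this assumed sensor
    }
}
```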
The software components are shown in Figure 7A. The system 10 comprises three physical devices where software will be deployed. These are the external computer 20 (or Artclear server in Figure 7A), the tablet computer 30 (operator tablet in Figure 7A) and the object scanner 100 (Artclear device in Figure 7A).
The external computer 20 hosts software components. The external computer 20 may be a server that uses cloud-based infrastructure which can be scalable.
The tablet computer 30 is only used to access an app that fetches and displays information from the external computer system 20 and requests and confirms operations.
This is the user's only interface to the object scanner 100, which requests and confirms sequences (the joystick being another hardware interface to the object scanner 100 for the operator). To ensure as much security as possible the tablet computer 30 may be run in Kiosk mode, which will only allow the user to open the one application without a password.
However, the application must then be logged into so that access is controlled and granted via the external computer system 20. The UI on the tablet connects through a web API to show current machine status and allow the user to operate the machine. However, this is controlled by the external computer 20 via the tablet computer 30.
The object scanner 100 contains camera equipment and the motorised translation stage (motion stage 120) to drive the mount 140 to correct locations at appropriate times. The software splits between Twincat real time software for the control of hardware IO and motion, and .Net-based Windows software for image handling and network communication to the external computer 20.
Authentication security of the data is used to improve security. This uses paths for authentication for both the user through the tablet computer 30 and for the external computer 20. This can include token-based authentication.
For the object scanner 100, token-based authentication or certificate-based authentication may be used. In certificate-based authentication a Certificate Authority (CA) issues certificates, which are installed securely on the external computer 20 and generate a client certificate for each object scanner 100. The certificates are password protected (a password is used to generate the client certificate and then again to install the certificate on the device). If a malicious actor wanted to spoof an object scanner 100, they would need to be able to get both the password used to generate the certificate and the certificate itself. If it seems that an object scanner 100 has been compromised in some way the client certificate may be revoked on the external computer 20 until the object scanner 100 can be inspected and a new certificate installed.
One or more services on the external computer 20 may be configured to only allow clients with valid certificates to connect. This can be configured by subdomain, so the API for the object scanner 100 (Machine API in Figure 7A) may be mapped to a separate subdomain.
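A minimal sketch of the device side of such certificate-based authentication, using standard .NET types, is shown below; the certificate file, password source and URL are placeholders rather than actual values.

```csharp
using System;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;

// Sketch: the object scanner presents its password-protected client certificate when
// connecting to the Machine API. Certificate path, password source and URL are placeholders.
class MachineApiClient
{
    static async Task Main()
    {
        var handler = new HttpClientHandler();
        handler.ClientCertificates.Add(
            new X509Certificate2("scanner-client.pfx", Environment.GetEnvironmentVariable("CERT_PASSWORD")));

        using var client = new HttpClient(handler);
        // The server subdomain only accepts connections presenting a valid client certificate.
        HttpResponseMessage response = await client.GetAsync("https://machine.example.com/api/status");
        Console.WriteLine($"Server responded: {(int)response.StatusCode}");
    }
}
```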
A control service is a main point of contact for the object scanner 100. This handles all processes and states of connected object scanners 100 (there may be many object scanners in the system 10). It will be connected to the User Interface API through a message queue to allow for the session state to be updated and for the user interaction to propagate to the object scanner 100.
An operator API is a web API that serves data from a database to a human machine interface (HMI) and accepts commands from the operator. The user service is the service that handles the requests.
A vision service contains functions to perform the following actions:
- Return an offset (x, y) from a given image to a stored image (in mm);
- Tile a set of images for a given artwork to form a whole image for preview; and
- Analyse an image for features and decide on regions of interest in the image based on a set of criteria.
A signature service is wrapped around the signaturing software, which takes images from the object scanner 100. Image metrics (focus etc.) may be written to a database.
Image metrics can be used to decide if the signature (tokenised or numerical representation of the object) needs to be re-taken and to analyse performance under different conditions. The signature algorithm may be that described in US2018/0365519 or another process. In particular, Figure 2 of US2018/0365519 and its description at paragraphs [0014]-[0027], and Figure 3 (paragraphs [0028]-[0030]), describe how the signature is generated. Figure 4 and paragraphs [0031]-[0035] of US2018/0365519 describe how the object can be authenticated by generating a further signature and comparing this to the original. These portions of US2018/0365519 and the entire document are incorporated by reference.
For simplicity, logged messages may be sent as JSON strings containing the originating service or device, a reason for the log, and a message. Other formats may be used.
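As a purely illustrative sketch (the field names are assumptions, not the actual schema), such a log message could be produced in C# as follows:

using System;
using System.Text.Json;

record LogMessage(string Source, string Reason, string Message);

class LogExample
{
    static void Main()
    {
        var entry = new LogMessage(
            Source: "vision-service",
            Reason: "focus-check",
            Message: "Image 42 below focus threshold; re-capture scheduled");

        // Serialise to a JSON string for transmission or storage.
        string json = JsonSerializer.Serialize(entry);
        Console.WriteLine(json);
        // {"Source":"vision-service","Reason":"focus-check","Message":"Image 42 below focus threshold; re-capture scheduled"}
    }
}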
The motion system on the object scanner 100 may be programmed in the Twincat 3 environment for real-time control of hardware devices. Each motor in the motion stage 120 may have an integrated controller that communicates back to a programmable logic controller (PLC) via Ethercat or Ethernet, for example. The Twincat system may be used to set target positions and monitor motor positions to perform actions at appropriate times.
To communicate between the real-time control environment and other services on the PLC (running in Windows, for example) the Twincat ADS (Automation Device Specification) protocol may be used. This allows, for example, data about the positions of the motors to be transferred to the remote command interface and passed on to the external computer 20 (e.g., through the tablet computer 30).
A local control service may be a software component, programmed in C# (.Net), covering the following functions: capturing images from the wide-angle (macro) camera; capturing images from the signature camera; compiling position data and images after a scan for transfer to the external computer 20 or server; performing focus calculations on the signature camera; monitoring the device status; and error detection and recovery.
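The specific focus calculation is not prescribed here; as one illustrative sketch, a common focus metric (the variance of a Laplacian-filtered image) could be computed over a greyscale image buffer as follows. The buffer layout and any threshold applied to the result are assumptions.

using System;

static class FocusMetric
{
    // image[y, x] holds greyscale pixel values (0-255).
    // A higher return value indicates a sharper (better focused) image.
    public static double VarianceOfLaplacian(byte[,] image)
    {
        int height = image.GetLength(0);
        int width = image.GetLength(1);
        double sum = 0, sumOfSquares = 0;
        int count = 0;

        // 4-neighbour Laplacian, skipping the border pixels.
        for (int y = 1; y < height - 1; y++)
        {
            for (int x = 1; x < width - 1; x++)
            {
                double value = -4.0 * image[y, x]
                    + image[y - 1, x] + image[y + 1, x]
                    + image[y, x - 1] + image[y, x + 1];
                sum += value;
                sumOfSquares += value * value;
                count++;
            }
        }

        double mean = sum / count;
        return sumOfSquares / count - mean * mean;
    }
}

The local control service could, for example, re-capture an image whenever such a metric falls below a calibrated threshold.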
A remote command interface may be a software component, for example programmed in C# (.Net), that exclusively handles the communication with the external computer 20 on the object scanner 100. It may convert any datatypes to those suitable for communication and cleanse the data to make sure there have not been any errors (e.g., null or empty fields).
The command interface may also contain local error handling of failed network messages; it may make re-attempts and discard data (which could be 'sensitive') after a certain amount of time.
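A minimal sketch of this retry-then-discard behaviour is given below; the transport, retry policy and maximum message age are illustrative assumptions rather than the values used by the actual interface.

using System;
using System.Collections.Generic;

class PendingMessage
{
    public string Payload { get; init; } = "";
    public DateTime CreatedUtc { get; init; } = DateTime.UtcNow;
    public int Attempts { get; set; }
}

class OutboundQueue
{
    static readonly TimeSpan MaxAge = TimeSpan.FromMinutes(30);
    readonly Queue<PendingMessage> _pending = new();

    public void Enqueue(string payload) =>
        _pending.Enqueue(new PendingMessage { Payload = payload });

    // Called periodically; trySend is the network call to the external computer.
    public void Flush(Func<string, bool> trySend)
    {
        int count = _pending.Count;
        for (int i = 0; i < count; i++)
        {
            var message = _pending.Dequeue();

            // Discard anything that has been waiting too long.
            if (DateTime.UtcNow - message.CreatedUtc > MaxAge)
                continue;

            message.Attempts++;
            if (!trySend(message.Payload))
                _pending.Enqueue(message); // Re-attempt on the next flush.
        }
    }
}

Discarding stale messages bounds how long potentially sensitive data can remain on the scanner if the network is unavailable.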
Example properties of the wide-angle vision system (the first camera 200) include:
Camera: approximately 2000x1000 pixel resolution, GigE; Lens: F5-F3.5; and Lighting: 40-60 mm x 10-20 mm white bar lights.
The wide-angle camera may be integrated into the mount 140 or vision head and scans across the object to find regions of interest. The digital I/O on the camera is used to trigger and record the capture end positions so that the exact image location can be reconstructed.
Initial camera calculations were performed assuming a lens with a focal length (F) of 4 mm; this is generally the smallest focal length (and therefore widest field of view) that can be achieved without greatly increasing the amount of pincushion distortion or without using heavily specialised lenses.
These camera sensors may be selected so that motion blur is less than one pixel at the assumed speed of 100 mm/s and the assumed working distance of 200 mm. These assumptions result in 140 mm and 160 mm working distances and provide a compact design of camera head and mount 140.
Because of the design of the imaging process, scanning while moving generates only a small amount of blur with the selected components, and there is sufficient light to obtain good image quality at a shutter speed of 0.5 ms.
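The blur budget can be checked with a simple back-of-the-envelope calculation, sketched below. The sensor width and resolution used here are assumptions based on the approximate figures given above, not a specification of the actual camera.

using System;

class MotionBlurCheck
{
    static void Main()
    {
        double focalLengthMm = 4.0;        // lens focal length
        double workingDistanceMm = 200.0;  // assumed working distance
        double sensorWidthMm = 6.2;        // approx. 1/2.3" sensor width (assumption)
        int horizontalPixels = 2000;       // approx. horizontal resolution
        double speedMmPerS = 100.0;        // scan speed
        double exposureS = 0.0005;         // 0.5 ms shutter

        // Field of view at the working distance (thin-lens approximation).
        double fovWidthMm = sensorWidthMm * workingDistanceMm / focalLengthMm;
        double mmPerPixel = fovWidthMm / horizontalPixels;

        // Distance travelled during the exposure, converted to pixels.
        double blurPixels = speedMmPerS * exposureS / mmPerPixel;

        Console.WriteLine($"FOV width: {fovWidthMm:F0} mm, {mmPerPixel:F3} mm/pixel");
        Console.WriteLine($"Motion blur: {blurPixels:F2} pixels (target < 1 pixel)");
    }
}

With these assumed numbers the object moves about 0.05 mm during the 0.5 ms exposure, which is well below one pixel at the resulting pixel footprint.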
A ring light for use with the wide-angle camera (first camera 200) causes specular reflection when imaging through glass, which may make it unsuitable for artworks. Bar lights integrated into the head around the working area allow the image to be completely lit without causing specular reflections. A simulation was modelled based on the actual camera values (4 mm focal length, 1/2.3" sensor) and working distance to make it as close as possible to the real situation of the object scanner 100. Area lights were used to simulate the bar lights in the correct location. Figure 8A shows an imaging target 800, which was used with a layer of simulated glass placed between the imaging target and a simulated camera. The optical simulation is shown in Figure 8B, illustrating the lighting system (first illuminator 220) with light paths highlighted (left of Figure 8B) and the lighting simulation with a glass layer over the artwork (right of Figure 8B). It is noted that the specular reflections produced in the glass by the area lights occur off-angle, outside the imaged area. The simulation further shows that the lighting concept reduces specular reflections in the images (even where glass is present).
For the bar lights to remain compact, they are preferably approximately 50x20 mm.
An example signature camera vision system (second camera 210) may include the following properties: Camera: 1/2-2/3" sensor, approximately 2000x2000 pixel resolution, approximately 3-5 µm pixel size, GigE; Lens: 1x magnification, telecentric, 40-100 mm working distance; and Lighting: 30-50 mm ID/60-90 mm OD white ring light.
GigE provides a more stable connection. The lens may have a 1:1 ratio between sensor size and field of view at the working distance, and a telecentric lens provides an orthographic (perspective-free) view.
The lighting solution for the signature camera vision system may be calibrated to provide a uniform level of lighting across all produced scan devices, while not producing large levels of specular reflection on reflective artwork and being able to light through all possible types of glass. Optional lighting may include the following: replacing the lens with a version that includes a prism for co-axial lighting together with a high-powered coaxial LED light source; and a PWM-adjustable light ring.
Both lighting solutions are adjustable using DIN-mount controllers and an analogue output.
Figures 9A and 9B show example test images using a carbon fibre test piece (left side of Figures 9A and 9B) and a framed print with standard picture glass (right side of Figures 9A and 9B).
An example HMI may be a mobile app on an Apple iPad device.
There are three main procedures that are undertaken by operators. These are: Connecting to the server or external computer 20 and selecting a job; Registering a New Artwork (object); and Authenticating an existing artwork.
Figures 10, 11 and 12 show sequence diagrams for each of these and focus on communications between the various software components (regardless of where they are deployed). In the diagrams the HMI (human machine interface) device is the tablet computer 30. The scanner device is the object scanner 100.
Method 1000 of Figure 10 is the initial connection and job selection. In these examples, the object is a piece of artwork, but any object can be scanned. Method 1100 of Figure 11 shows the steps for registering an object. Method 1200 of Figure 12 shows the steps for authenticating an object that was previously registered.
Figures 13 and 14 show flowcharts for methods for scanning an object and obtaining a tokenised or numerical representation (signaturing) 1300 and authenticating the object (1400). These flowcharts include data inputs that are required, and outputs generated by each process. Connecting the object scanner 100 to the server (external computer 20) and selecting a job are left out of these two diagrams.
Figures 15 to 22 show in more detail sub-processes of the above methods as flowcharts. Method 1500 of Figure 15 shows the method steps for connecting the object scanner 100 (Scanner) to the external computer 20 (Server). Again, the HMI in this example is the tablet computer 30. Method 1600 of Figure 16 shows the method steps for starting a job. Method 1700 of Figure 17 shows the method steps for setting a datum. Method 1800 of Figure 18 shows the method steps for setting bounds on the object using the object scanner 100. Method 1900 of Figure 19 shows the method steps for scanning the artwork or object. Method 2000 of Figure 20 shows the method steps for creating the tokenised or numerical representation (signature) of the object. Method 2100 of Figure 21 shows the method steps for reacquiring the datum on the object so that the object can be authenticated and have its signature compared to a stored signature. Method 2200 of Figure 22 shows the method steps for authenticating the object.
Figures 23A-C show images of the object as captured by the object scanner 100.
Figure 23A shows an image of an example object obtained using the first (wide-angle) camera 200. Figure 23B shows a set of images of the same example object obtained using the second (narrow-angle) camera 210. Multiple images are required to cover the entire object (or at least regions of interest including features) as the second camera 210 has a smaller field of view. Figure 23C shows a close-up image of a feature within the object. The feature may be within a region of interest and used to generate the hashes and signature of the object.
Figure 24 shows a table illustrating the generation of the tokenised or numerical representation, or fingerprint ID, of the object (e.g., artwork). The scan information includes the locations (as measured by the motion stage 120 of the object scanner 100). The scan information also includes images (signature images) and their files as acquired using the second camera 210. These data are hashed (once for every region of interest that contains features). All of these hashes are then hashed together to form the tokenised or numerical representation of the object. The fingerprint ID is generated from the tokenised or numerical representation of the object by adding the datum, image and artwork details to form a full signature. The scan information may also include signatures generated by the HP software (e.g., signature = output of the HP algorithm; and the fingerprint ID = hash of these + images + datum + coordinates).
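As an illustrative sketch only, the hashing steps described above could be realised with SHA-256 as follows; the exact byte ordering and field encoding are assumptions and not a specification of the actual system.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static class Fingerprint
{
    static string Sha256Hex(byte[] data) =>
        Convert.ToHexString(SHA256.HashData(data)).ToLowerInvariant();

    // One hash per region of interest: the image bytes plus the coordinates
    // at which the image was acquired.
    public static string HashRegion(byte[] roiImage, double x, double y) =>
        Sha256Hex(roiImage.Concat(Encoding.UTF8.GetBytes($"{x:F3},{y:F3}")).ToArray());

    // Combine the datum, all ROI hashes, the images and metadata into one ID.
    public static string FingerprintId(
        string datumHash,
        IEnumerable<string> roiHashes,
        byte[] datumImage,
        byte[] bigPicture,
        string metadata)
    {
        var builder = new StringBuilder();
        builder.Append(datumHash);
        foreach (var hash in roiHashes) builder.Append(hash);
        builder.Append(Convert.ToHexString(datumImage));
        builder.Append(Convert.ToHexString(bigPicture));
        builder.Append(metadata);
        return Sha256Hex(Encoding.UTF8.GetBytes(builder.ToString()));
    }
}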
In this example implementation the fingerprint may take the following form: fingerprint = datum hash + coordinates of ROI relative to datum + contents (fingerprint file) + bytes (datum image) + bytes (big picture) + ROI hashes + metadata. The fingerprint ID may be hashed using SHA-256.
For example:
{ "artwork": 666, "filename": "gh.tar.gz", "roi": ( "datum": "114109613026c3066fe-fdef-4e30-al 72-3855e5fac722", ''roi": "114109613026c3066fe-fdef-4e30-al 72-3855e5fac722": "da00804611914cf63393c1f4a31d3358353014b8bd5d1530ced73e82e8818cd3", "1141450813a41795e3-651e-4e75-82f6-f4e29e32a893'": "Oa653becd5322fca2b165c1bc54d24e9863891ad3f720f8be6fd1451546939bc", "1141627442f0457303-0a2a-4e52-87613495eb407ab01": "ce6075d2680d0586fcffd049637ff73a4ec2d2b6efb8a180224e388a613af534'', "1141639190ee077534-7763-4920-bb64-07421e6274ad": "c2d7a63ed92bc5d970e64f3c7e4a29c12aa3139d6ac62d8a5e09126c9d404797'", "11416547860f bd959b-3189-4362-b6b9-b8970ef216bc": "65c43ad589b2a7a8a5f87243ded4cba4ffa82320809b5b0506e8327ba2e83fe7", "11417975796c2c6f9d-e674-48fa-alcf-3f2b37505c42": -27 - "2121106b29835060c49927606045e64831e79f77424326f85eb9077afa1a84da'", "1141850387b7aa4ec9-200d-4f95-a32d-4164e22d61c2": "2fad552d148d9dcddda816f91ca2419a885d9dc5bcedfae799802076b51c2a6d'", "114188296a05b2aed-65a2-46e5-8101-de 1 8f064676b": "48cbedb7a338b8fd50e516c891483d715cb4fe7lal dd4871af 1324c6465f5ed3", "11420463899d30df0f-9673-4553-b59e-9e2e503de969n: "1178106e1aac1 f8500dcec42490d67d3acb8d65dd66eb671fa3245da356db581" }, "fingerprint id": "3bfba9d7ea4a0d0d4c33b67f9303decdcab29616e31ccdd292b92fa25ab2bc7f} } Figure 25 shows a flowchart of a high-level method 2500 for scanning an object. At step 2510 the first camera 200 is used to obtain a wide-angle image of the object. Areas within this wide-angle image are identified that contain suitable features. This may be achieved automatically using image processing techniques such as finding areas with high contrast changes, edge finding, Al, and/or finding point marks or shapes, for example. At step 2530, the areas of the object that contain the features are imaged by the second camera 210. For example, the areas of regions of interest may be approximately 5x5 mm (or 5x7.5 mm) areas on the object. The positions of the cameras when the images are acquired are recorded (e.g., x, y, z coordinates) using sensors or motor steps within stepper motors used by the motion stage 120. At step 2540, the images of the features are converted into the tokenised or numerical representation, which is used to generate the fingerprint ID. The location of each ROI may be recorded relative to the initial datum point or its coordinates.
Figure 26 shows a further flowchart illustrating different aspects of the method 2600. The scan is completed at step 2605. The output data from the object scanner 100 are sent to the external computer 20 or server at step 2610 (e.g., directly or through the tablet computer 30). The fingerprint ID is generated (as described above) at step 2615. The images used to generate the fingerprint ID are stored at step 2620. This may be within local storage or externally. A further process to generate the signature is initiated by a user issuing a command on the tablet computer 30 at step 2625. This can cause the external computer 20 to generate a token at step 2630. The fingerprint and details of the object (e.g., artist, date, etc.) are collected at step 2635. An authorised signatory is sent a message at step 2640. Using the fingerprint ID that is prepared in the background enables an operator to start a token sign-off process on a portal, wherein the person signs in using an assigned wallet. The authorised signatory accesses the system (e.g., using their own computer or terminal) at step 2645 and provides approval at step 2650. The artist, object owner or custodian digitally signs the fingerprint ID using a private key at step 2655 (using their own computer or terminal). Checks are made regarding issuing the token at step 2660.
A gallery or third party signs the resultant token using their wallet ID and this is published to a blockchain at step 2665. This can be a public blockchain. A further sign-off by a system administrator (Artclear) can complete the process at step 2670. Once the token holder requests a user (authenticator or someone else) to sign the token, Artclear or another administrator carries out a data check on the user's pre-stored know your client (KYC) information to verify the user's authenticity and to enable the token holder to continue its sign-off.
An artist or owner of the object may authorise a third party (e.g., the gallery) to sign on their behalf. Alternatively, the artist or owner can also be the authorised signatory.
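For illustration only, digitally signing the fingerprint ID with a private key could look like the following C# sketch using ECDSA; the curve choice is an assumption, and key management, wallet integration and the blockchain publication step are outside the scope of this sketch.

using System;
using System.Security.Cryptography;
using System.Text;

class SignOffExample
{
    static void Main()
    {
        string fingerprintId = "3bfba9d7ea4a0d0d4c33b67f9303decdcab29616e31ccdd292b92fa25ab2bc7f";
        byte[] data = Encoding.UTF8.GetBytes(fingerprintId);

        // The signatory holds the private key; only the public key needs to be shared.
        using var key = ECDsa.Create(ECCurve.NamedCurves.nistP256);
        byte[] signature = key.SignData(data, HashAlgorithmName.SHA256);

        // Anyone holding the public key can verify that this fingerprint ID
        // was approved by the authorised signatory.
        bool valid = key.VerifyData(data, signature, HashAlgorithmName.SHA256);
        Console.WriteLine($"Signature valid: {valid}");
    }
}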
The object scanner 100 is limited to carrying out parts of the process for acquiring images and determining the location of the mount 140 (i.e., the cameras and laser pointer) when images are acquired. The object scanner 100 does not store these data. The object scanner 100 does not carry out any steps for generating the tokenised or numerical representation of the object; these steps are only carried out externally to the object scanner 100 and tablet computer 30.
The process may be repeated at a later time to generate a further tokenised or numerical representation of the object. This may start with the operator realigning the feature or datum point on the object with the visual mark. This provides the system with the information necessary to move the mount 140 to the same positions where the original image(s) were obtained. At these same positions, new images are acquired and the tokenised or numerical representation of the object is generated (at the external computer 20). A comparison may be made between the stored tokenised or numerical representation of the object and the newly generated one. If these match, the object can be authenticated as being the same.
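A minimal sketch of this final comparison step is shown below, assuming both representations are hex-encoded hashes; the actual matching criteria used by the system are not specified here.

using System;
using System.Security.Cryptography;

static class AuthenticationCheck
{
    // Both arguments are hex-encoded hash values; returns true on an exact match.
    public static bool Matches(string storedHex, string freshHex)
    {
        byte[] stored = Convert.FromHexString(storedHex);
        byte[] fresh = Convert.FromHexString(freshHex);
        return CryptographicOperations.FixedTimeEquals(stored, fresh);
    }
}

A fixed-time comparison is used here as a conservative design choice, so that the comparison itself leaks no timing information about how closely the two values agree.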
As used throughout, including in the claims, unless the context indicates otherwise, singular forms of the terms herein are to be construed as including the plural form and vice versa. For instance, unless the context indicates otherwise, a singular reference herein including in the claims, such as "a" or "an" (such as a camera) means "one or more" (for instance, one or more cameras). Throughout the description and claims of this disclosure, the words "comprise", "including", "having" and "contain" and variations of the words, for example "comprising" and "comprises" or similar, mean "including but not limited to", and are not intended to (and do not) exclude other components. Also, the use of "or" is inclusive, such that the phrase "A or B" is true when "A" is true, "B" is true, or both "A" and "B" are true.
The use of any and all examples, or exemplary language ("for instance", "such as", "for example" and like language) provided herein, is intended merely to better illustrate the disclosure and does not indicate a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure.
The terms "first" and "second" may be reversed without changing the scope of the disclosure. That is, an element termed a "first" element may instead be termed a "second" element and an element termed a "second" element may instead be considered a "first" element.
Any steps described in this specification may be performed in any order or simultaneously unless stated or the context requires otherwise. Moreover, where a step is described as being performed after a step, this does not preclude intervening steps being performed.
It is also to be understood that, for any given component or embodiment described throughout, any of the possible candidates or alternatives listed for that component may generally be used individually or in combination with one another, unless implicitly or explicitly understood or stated otherwise. It will be understood that any list of such candidates or alternatives is merely illustrative, not limiting, unless implicitly or explicitly understood or stated otherwise.
Unless otherwise described, all technical and scientific terms used throughout have a meaning as is commonly understood by one of ordinary skill in the art to which the various embodiments described herein belong.
As will be appreciated by the skilled person, details of the above embodiment may be varied without departing from the scope of the present invention, as defined by the appended claims.
For example, different cameras, motion stages and illuminators may be used.
Many combinations, modifications, or alterations to the features of the above embodiments will be readily apparent to the skilled person and are intended to form part of the invention. Any of the features described specifically relating to one embodiment or example may be used in any other embodiment by making the appropriate changes.
Claims (24)
- CLAIMS: 1. A system for scanning an object, the system comprising: a first camera; a second camera having a field of view smaller than the first camera; a first illuminator configured to provide illumination for the first camera; a second illuminator configured to provide illumination for the second camera; a mount arranged to support the first camera, the second camera, the first illuminator and the second illuminator, wherein the first camera and the second camera are fixed relative to each other by the mount; a motion stage configured to move the mount relative to the object; and a controller configured to obtain images from the first and second cameras and move the mount using the motion stage.
- 2. The system of claim 1 further comprising a visual mark generator configured to illuminate at least a portion of an object within fields of view of the first and second cameras.
- 3. The system of claim 2, wherein the mount is further arranged to support the visual mark generator.
- 4. The system according to any previous claim, wherein the first illuminator comprises a first rectangular light source within a first rectangular cross-sectional lightguide, wherein an optical axis of the first rectangular cross-sectional lightguide is at an angle greater than 10 degrees from an optical axis of the first camera.
- 5. The system of claim 4, wherein the first illuminator further comprises a second rectangular light source within a second rectangular cross-sectional lightguide, wherein an optical axis of the second rectangular cross-sectional lightguide is at an angle greater than 10 degrees from an optical axis of the first camera and the optical axis of the first and second rectangular cross-sectional lightguides are non-parallel.
- 6. The system of claim 5, wherein the first camera is arranged between the first and the second light sources.
- 7. The system according to any previous claim, wherein the second illuminator is a ring illuminator arranged around the second camera.
- 8. The system according to any previous claim further comprising at least one communications interface configured to transmit data corresponding to the obtained images to an external computer system.
- 9. The system according to any previous claim, wherein the mount further comprises an arm pivotably attached to the motion stage.
- 10. The system according to any previous claim further comprising a chassis, wherein the motion stage is fixed to the chassis.
- 11. The system of claim 10 further comprising a plurality of foldable legs attached to the chassis.
- 12. The system of claim 10 or claim 11, further comprising a plurality of caster wheels attached to the chassis.
- 13. The system according to any previous claim further comprising a removable cover.
- 14. The system according to any previous claim further comprising a contact detector attached to the mount and configured to stop the motion stage when contact is detected.
- 15. A method for scanning an object, the method comprising the steps of: obtaining one or more images from a first camera; identifying at least one area within the one or more images containing a plurality of features; using a second camera having a field of view smaller than the first camera to obtain an image of the at least one identified area; converting the image obtained by the second camera into a unique tokenised representation, wherein positions of the first camera and the second camera are identified when the images are acquired.
- 16. The method of claim 15 further comprising the step of associating the unique tokenised representation with the object.
- 17. The method according to claim 15 or claim 16 further comprising the steps of: before obtaining the one or more images from the first camera, illuminating the object with a visual mark at or near a feature of the object when the visual mark and the feature of the object are within the field of view of the second camera; and recording a position of the first camera and the second camera when the visual mark is at or near the feature of the object.
- 18. The method of claim 17, wherein the visual mark is generated by a visual mark generator that is fixed to a mount that supports the first camera and the second camera.
- 19. The method according to any of claims 15 to 18, further comprising the step of transmitting data corresponding to the images obtained by the second camera to an external computer system, wherein the step of converting the image obtained by the second camera into a unique numeric representation is carried out within the external computer system.
- 20. The method according to any of claims 15 to 19 further comprising the step of: hashing data of one or more images of one or more features obtained by the second camera and coordinates of the second camera when the one or more images were obtained to generate a hash value.
- 21. The method of claim 20 further comprising the step of saving the hash value together with data identifying the object.
- 22. The method according to any of claims 15 to 21 further comprising the step of confirming that the plurality of features meet one or more criteria.
- 23. The method according to any of claims 15 to 22 further comprising the steps of: repeating the steps of: moving the second camera to the same position used to obtain the image from the second camera; using the second camera to obtain a further image of the at least one identified area at the same position; converting the image obtained by the second camera into a further tokenised representation; and confirming a match between the further tokenised representation and the tokenised representation.
- 24. The system according to any of claims 1 to 14 further comprising an external computer system and means for carrying out the steps of any of claims 15 to 22.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2305006.5A GB2625398A (en) | 2023-04-04 | 2023-04-04 | Object identification |
PCT/GB2024/050797 WO2024209182A1 (en) | 2023-04-04 | 2024-03-25 | Object identification |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2305006.5A GB2625398A (en) | 2023-04-04 | 2023-04-04 | Object identification |
Publications (2)
Publication Number | Publication Date |
---|---|
GB202305006D0 GB202305006D0 (en) | 2023-05-17 |
GB2625398A true GB2625398A (en) | 2024-06-19 |
Family
ID=86316318
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2305006.5A Pending GB2625398A (en) | 2023-04-04 | 2023-04-04 | Object identification |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2625398A (en) |
WO (1) | WO2024209182A1 (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012110966A1 (en) * | 2011-02-15 | 2012-08-23 | Surgix Ltd. | Methods, apparatuses, assemblies, circuits and systems for assessing, estimating and/or determining relative positions, alignments, orientations and angles of rotation of a portion of a bone and between two or more portions of a bone or bones |
-
2023
- 2023-04-04 GB GB2305006.5A patent/GB2625398A/en active Pending
-
2024
- 2024-03-25 WO PCT/GB2024/050797 patent/WO2024209182A1/en unknown
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060175549A1 (en) * | 2005-02-09 | 2006-08-10 | Miller John L | High and low resolution camera systems and methods |
JP2008099279A (en) * | 2006-10-06 | 2008-04-24 | Vitec Group Plc | Camera control system |
WO2009142332A1 (en) * | 2008-05-23 | 2009-11-26 | Advas Co., Ltd. | Hybrid video camera system |
US20150371087A1 (en) * | 2011-09-15 | 2015-12-24 | Raf Technology, Inc. | Object identification and inventory management |
US20180365519A1 (en) * | 2016-04-07 | 2018-12-20 | Hewlett-Packard Development Company, L.P. | Signature authentications based on features |
CN106446874A (en) * | 2016-10-28 | 2017-02-22 | 王友炎 | Authentic artwork identification instrument and identification method |
US20210110469A1 (en) * | 2019-10-15 | 2021-04-15 | Alitheon, Inc. | Digital hypothecation database system |
US20220301195A1 (en) * | 2020-05-12 | 2022-09-22 | Proprio, Inc. | Methods and systems for imaging a scene, such as a medical scene, and tracking objects within the scene |
CN112085009A (en) * | 2020-10-22 | 2020-12-15 | 肯维捷斯(武汉)科技有限公司 | Micro texture image acquisition anti-counterfeiting device and method |
KR20230081799A (en) * | 2021-11-29 | 2023-06-08 | 한국생산기술연구원 | Composite Scanning Apparatus for acquiring Large Art Work Data |
Also Published As
Publication number | Publication date |
---|---|
GB202305006D0 (en) | 2023-05-17 |
WO2024209182A1 (en) | 2024-10-10 |