WO2023046510A1 - System and method for determining a characteristic of a surface of an object, methods for training a machine-learning model and programmable hardware - Google Patents
- Publication number
- WO2023046510A1 (PCT/EP2022/075256)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- event
- measurement
- vision sensor
- characteristic
- machine
- Prior art date
Classifications
- G—PHYSICS; G01—MEASURING; TESTING; G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/17—Systems in which incident light is modified in accordance with the properties of the material investigated
- G01N21/55—Specular reflectivity
- G01N21/57—Measuring gloss
- G01N2021/575—Photogoniometering
- G01N21/25—Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
- G01N21/31—Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
- G01N2201/00—Features of devices classified in G01N21/00
- G01N2201/12—Circuits of general importance; Signal processing
- G01N2201/129—Using chemometrical methods
- G01N2201/1296—Using chemometrical methods using neural networks
Definitions
- the present disclosure relates to surface characterization.
- examples relate to a system and a method for determining a characteristic of a surface of an object, methods for training a machine-learning model and a programmable hardware.
- Goniometer-based systems are conventionally used for determining various characteristics of object or material surfaces.
- material/object surface characterization through standard goniometer technology is very slow and inefficient.
- the present disclosure provides a system for determining a characteristic of a surface of an object.
- the system comprises a light source configured to illuminate the surface during a measurement. Additionally, the system comprises an event-based vision sensor configured to capture the surface during the measurement. Output data of the event-based vision sensor encode an event stream indicating one or more change in brightness measured by the event-based vision sensor during the measurement.
- the system further comprises a positioning system configured to relatively rotate at least two of the surface, the event-based vision sensor and the light source with respect to each other about a rotation axis during the measurement.
- the system comprises processing circuitry coupled to the event-based vision sensor and configured to determine the characteristic of the surface based on the event stream for the measurement and based on data indicating another characteristic of the surface.
- the present disclosure provides a method for determining a characteristic of a surface of an object.
- the method comprises illuminating the surface by a light source during a measurement. Additionally, the method comprises capturing the surface by an event-based vision sensor during the measurement. Output data of the event-based vision sensor encode an event stream indicating one or more change in brightness measured by the event-based vision sensor during the measurement.
- the method further comprises relatively rotating at least two of the surface, the event-based vision sensor and the light source with respect to each other about a rotation axis during the measurement.
- the method comprises determining the characteristic of the surface based on the event stream for the measurement and based on data indicating another characteristic of the surface.
- the present disclosure provides a method for training a machine-learning model.
- the machine-learning model is for determining a reflectance of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement.
- the method comprises inputting data indicating a digital model of the object to the machine-learning model. Additionally, the method comprises inputting training data indicating a predetermined number of changes in brightness for a predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis.
- the light source is used for illuminating the surface during the measurement.
- the method further comprises determining a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a target reflectance for the training data.
- the method comprises updating the weights of the machine-learning model based on the gradient of the loss function.
- the present disclosure provides another method for training a machine-learning model.
- the machine-learning model is for determining an orientation of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement.
- the method comprises inputting data indicating a reflectance of the surface. Additionally, the method comprises inputting training data indicating a predetermined number of changes in brightness for a predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis.
- the light source is used for illuminating the surface during the measurement.
- the method further comprises determining a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a digital model of the object.
- the method comprises updating the weights of the machine-learning model based on the gradient of the loss function.
- the present disclosure provides a programmable hardware comprising circuitry configured to perform one of the above methods for training a machine-learning model.
- the present disclosure provides a non-transitory machine-readable medium having stored thereon a program having a program code for performing one of the above methods for training a machine-learning model, when the program is executed on a processor or a programmable hardware.
- Fig. 1 illustrates an example of a system for determining a characteristic of a surface of an object
- Fig. 2 illustrates an exemplary characterization of a vehicle surface
- Fig. 3 illustrates a flowchart of an example of a method for determining a characteristic of a surface of an object
- Fig. 4 illustrates a flowchart of an example of a method for training a machine-learning model
- Fig. 5 illustrates an exemplary data flow in the method illustrated in Fig. 4
- Fig. 6 illustrates a flowchart of an example of another method for training a machine-learning model
- Fig. 7 illustrates an exemplary data flow in the method illustrated in Fig. 6.
- Fig. 1 illustrates a system 100 for determining a characteristic of a surface of an object 101.
- the object 101 may be any physical object (body) and may be defined as a collection of matter within a defined contiguous boundary in three-dimensional space.
- the surface of the object 101 is the object 101's exterior or upper boundary.
- the system 100 comprises a light source 110.
- the light source 110 is configured to illuminate the surface of the object 101 during a measurement. This is exemplarily illustrated in Fig. 1 as the light source 110 emits light 111 toward the object 101.
- the light 111 may be of any suitable wavelength.
- the light source 110 may be configured to illuminate the surface of the object 101 during the measurement with at least one of ultraviolet light (wavelength from approx. 100 nm to approx. 380 nm), visible light (wavelength from approx. 380 nm to approx. 780 nm) and infrared light (wavelength from approx. 780 nm to approx. 1 mm).
- the wavelength of the light 111 may be adjustable in order to examine a wavelength dependency of the surface characteristics of the object 101.
- the light 111 emitted by the light source 110 exhibits a known (e.g. predefined or adjusted) opening angle and a known (e.g. predefined or adjusted) brightness.
- the light source 110 may illuminate a (e.g. wide and) contiguous area of the object 101’s surface or one or more individual section (e.g. one or more point) of the object 101’s surface.
- the light 111 may be emitted by the light source 110 such that a light pattern is projected onto the object 101's surface.
- the light source 110 may comprise various components such as one or more light emitter, electronic circuitry and optics (e.g. one or more lenses for adjusting a shape or the opening angle of the light 111, one or more monochromator for adjusting the wavelength of the light 111, one or more optical filter for adjusting the wavelength of the light 111, etc.).
- the one or more light emitter may, e.g., be Light-Emitting Diodes (LEDs) and/or laser diodes (e.g. one or more Vertical-Cavity Surface-Emitting Lasers, VCSELs).
- a plurality of light emitters emitting light at different wavelengths may be provided and selectively activated to adjust the wavelength of the light 111 emitted by the light source 110.
- the light source 110 may alternatively comprise more, less or other components than those exemplary components described above.
- the light 111 emitted toward the object 101 by the light source 110 is reflected by the surface of the object 101.
- the reflection of the light 111 by the surface of the object 101 depends on various characteristics of the surface. For example, the reflection of the light 111 depends on a reflectance of the surface or an orientation of the surface in the three-dimensional space.
- For detecting the reflections of the light 111 by the surface of the object 101, the system 100 comprises an event-based vision sensor 120 such as a dynamic vision sensor (also known as an event camera, neuromorphic camera or silicon retina) that responds to local changes in brightness.
- the event-based vision sensor 120 does not capture image frames using a shutter like a conventional image sensor does. Instead, the photo-sensitive sensor elements or pixels of the event-based vision sensor 120 operate independently and asynchronously, detecting changes in brightness as they occur, and staying silent otherwise. Similar to what is described above for the light source 110, the event-based vision sensor 120 may be sensitive to light of different wavelengths. For example, the event-based vision sensor 120 may be sensitive for at least one of ultraviolet light, visible light and infrared light.
- the detection (of an occurrence) of a change in brightness by the event-based vision sensor 120 is called an “event”.
- the events are output by the event-based vision sensor 120 as an event stream (i.e. a stream of events).
- the event-based vision sensor 120 may provide a high temporal resolution and a high (wide) dynamic range, and may avoid under-/overexposure and motion blur compared to frame-based image sensors.
- the event-based vision sensor 120 is configured to capture the surface of the object 101 during the measurement. Depending on the surface characteristics of the object 101, the event-based vision sensor 120 detects one or more event indicating detected changes in brightness during the measurement. Accordingly, output data 121 of the event-based vision sensor 120 encode an event stream indicating the one or more change in brightness measured (detected) by the event-based vision sensor 120 during the measurement.
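- For illustration only (the disclosure does not prescribe a data format for the output data 121), the event stream can be pictured as a sequence of per-pixel records, each carrying a pixel position, a timestamp and the polarity of the detected change in brightness; the field names in the following sketch are assumptions:

```python
# Minimal sketch of an event-stream representation; the field names and types are
# illustrative assumptions, not a format prescribed by the disclosure.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Event:
    x: int             # pixel column
    y: int             # pixel row
    timestamp_us: int  # time of the detected brightness change in microseconds
    polarity: int      # +1 for a brightness increase, -1 for a decrease

def events_in_measurement(stream: Iterable[Event], t_start_us: int, t_end_us: int) -> List[Event]:
    """Collect the events that were measured during one measurement window."""
    return [e for e in stream if t_start_us <= e.timestamp_us < t_end_us]
```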
- the system 100 additionally comprises a positioning system 130 configured to relatively rotate at least two of the surface of the object 101, the event-based vision sensor 120 and the light source 110 with respect to each other about a rotation axis z during the measurement.
- the positioning system 130 is a rotatable plate on which the object 101 is placed. As indicated by the arrow 131, the object 101 rotates about the rotation axis z in case the rotatable plate is rotated.
- the positioning system 130 is configured to rotate the object 101 about the rotation axis z during the measurement. Accordingly, the surface of the object 101 is rotated with respect to the event-based vision sensor 120 and the light source 110 about the rotation axis z during the measurement.
- the present disclosure is not limited thereto.
- at least one of the event-based vision sensor 120 and the light source 110 may additionally be rotated about the rotation axis z during the measurement.
- only the event-based vision sensor 120 and/or the light source 110 may rotate about the rotation axis z during the measurement, while the object 101 and, hence, the surface of the object 101 does not rotate about the rotation axis z during the measurement.
- the event-based vision sensor 120 and/or the light source 110 may be mounted on one or more arm (e.g. a robot arm) that is/are rotatable about the rotation axis z.
- the positioning system may be configured to rotate at least one of the event-based vision sensor 120 and the light source 110 about the rotation axis z during the measurement. Another detailed example of the positioning system will be described later with respect to Fig. 2.
- Due to the relative rotation of at least two of the surface of the object 101, the event-based vision sensor 120 and the light source 110 with respect to each other about the rotation axis z during the measurement, the surface of the object 101 is illuminated from different angles of a hemisphere around the object 101 by the light source 110 and/or the surface of the object 101 is captured from different angles of the hemisphere around the object 101 by the event-based vision sensor 120 during the measurement.
- the events, i.e. the changes in brightness measured by the event-based vision sensor 120 during the measurement, relate (correspond) to the reflection characteristics of the object 101's surface and, hence, allow the surface of the object to be characterized.
- the system 100 further comprises processing circuitry 140.
- the processing circuitry 140 is coupled to the event-based vision sensor 120 and configured to receive the output data 121 of the event-based vision sensor 120. Additionally, the processing circuitry 140 is configured to receive further data 102 indicating another characteristic of the object 101's surface.
- the processing circuitry 140 may be a single dedicated processor, a single shared processor, or a plurality of individual processors (some or all of which may be shared), digital signal processor (DSP) hardware, an application-specific integrated circuit (ASIC), a neuromorphic processor or a field-programmable gate array (FPGA).
- the processing circuitry 140 may optionally be coupled to, e.g., read only memory (ROM) for storing software, random access memory (RAM) and/or non-volatile memory.
- the processing circuitry 140 is configured to determine the characteristic of the object 101's surface based on the event stream for the measurement included in the output data 121 and based on the data 102 indicating the other characteristic of the object 101's surface.
- the data 102 indicating the other characteristic of the object 101 ’s surface may, e.g., be received from another entity such as memory coupled to the processing circuitry 140 or another processing circuitry external to the system 100. In other examples, the data 102 may be previously generated (e.g. be determined or be computed) by the processing circuitry 140.
- the system 100 may allow the characteristic of the object 101's surface to be determined at higher speed due to the high speed of the event-based vision sensor 120. Accordingly, the system 100 may allow (much) shorter processing times than conventional goniometer-based systems. Due to the asynchronous nature of the event-based vision sensor 120, the event-based vision sensor 120 may provide a higher angular resolution than conventional frame-based sensors. Further, the wide dynamic range of the event-based vision sensor 120 may avoid saturation for strong specular reflections from the object 101's surface and may allow operation outside a controlled environment (such as a black box used in conventional goniometer technology).
- the system 100 may be used in an open environment where the lighting conditions do not matter, as the event-based vision sensor 120 is only sensitive to temporal changes in brightness. Accordingly, the requirements for the background of the object 101 may be eased (e.g. no perfectly black background may be needed). Further, the system 100 may allow substantially the whole scene to be processed at once. That is, unlike in conventional goniometer technology, it is not necessary to use a point light source for illuminating individual points of the object 101's surface. Rather, the whole surface of the object 101 may be illuminated. In other words, the object 101's surface may be analyzed in parallel (at once) to a larger extent than with conventional goniometer technology.
- the system 100 may be understood as an event-based gonioreflectometer.
- the characteristic of the object 101’s surface may, e.g., be a reflectance of the surface.
- the reflectance of the object 101’s surface denotes the surface’s effectiveness in reflecting radiant energy such as the light 111.
- the reflectance of the object 101's surface denotes the fraction of incident electromagnetic power (e.g. the light 111) that is reflected at the boundary of the object 101.
- the events, i.e. the changes in brightness encoded in the output data 121 of the event-based vision sensor 120, relate (correspond) to the changes in reflectance of the object 101's surface.
- the other characteristic of the object 101’s surface indicated by the data 102 may be an orientation of (e.g. one or more point or one or more area on) the surface in the three-dimensional space.
- the orientation of the surface denotes the direction in which the surface is pointed (or the direction in which a point on the surface is pointed). More specifically, orientation refers to the imaginary rotation that is needed to move the object 101's surface from a reference placement to its current placement.
- the orientation of the object 101's surface together with the location of the object 101's surface fully describe how the object 101 is placed in space.
- the orientation of the object 101’s surface may be given by the surface normal, i.e. a vector that is perpendicular to the tangent plane of the surface at a certain point P on the object 101’s surface.
- the orientation of the object 101's surface may, e.g., be determined by the processing circuitry 140 based on a digital model of the object 101 and optionally further data indicating a location and/or orientation of the object 101 in space.
- the processing circuitry 140 may receive data indicating the digital model of the object 101.
- the digital model may, e.g., be a Computer-Aided Design (CAD) model of the object 101 such as a CAD surface model. Together with the information about the location and/or orientation of the object 101 in space, the digital model of the object 101 allows to determine the orientation of each surface of the object 101 in space.
- the processing circuitry 140 may further use data indicating the shape of the surface. For example, the shape of the surface may be inferred (taken) from the digital model of the object 101.
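- As a rough illustration of how a digital model and the object's pose can yield per-point surface orientations, the following sketch computes a triangle's normal in the model frame and rotates it into the measurement frame; the triangulated-mesh representation and the rotation-matrix pose are assumptions made for illustration, not requirements of the disclosure:

```python
# Sketch: deriving a surface normal from a triangulated digital model (e.g. one exported
# from a CAD surface model) and the object's orientation in space. The mesh and pose
# representations are assumptions made for illustration.
import numpy as np

def face_normal(v0, v1, v2):
    """Unit normal of a triangle given its three vertices in the model frame."""
    n = np.cross(np.asarray(v1, float) - np.asarray(v0, float),
                 np.asarray(v2, float) - np.asarray(v0, float))
    return n / np.linalg.norm(n)

def world_normal(model_normal, rotation_3x3):
    """Rotate a model-frame normal into the measurement frame using the object's pose."""
    return np.asarray(rotation_3x3, float) @ np.asarray(model_normal, float)

# Example: a triangle lying in the x-y plane, with the object rotated 90 degrees about x.
n_model = face_normal([0, 0, 0], [1, 0, 0], [0, 1, 0])             # -> [0, 0, 1]
rx_90 = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
print(world_normal(n_model, rx_90))                                # -> [0, -1, 0]
```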
- the orientation of (e.g. a point or an area on) the object 101's surface may be determined by the processing circuitry 140 instead of the reflectance of the object 101's surface.
- the characteristic of the object 101’s surface may be the orientation of (e.g. a point or an area on) the surface, and the other characteristic of the object 101’s surface indicated by the data 102 may be the reflectance of the surface.
- the processing circuitry 140 may determine the characteristic of the object 101’s surface by mapping (linking) the number of events in the output data 121 of the event-based vision sensor 120, which are caused by the relative rotation of at least two of the surface of the object 101, the event-based vision sensor 120 and the light source 110 with respect to each other, to a certain characteristic such as a certain reflectance or a certain orientation of the surface.
- the processing circuitry 140 may be configured to determine a number of changes in brightness that satisfy a quality criterion from the event stream for the measurement. In other words, the processing circuitry 140 may determine the number of events in the event stream for the measurement that satisfy the quality criterion.
- the quality criterion is a standard on which it is judged whether an event in the event stream is taken into account for the characterization of the object 101 ’s surface.
- the quality criterion may be a threshold or a combination of thresholds.
- the number of events indicating that the brightness changed by more than a threshold value may be determined.
- the number of events indicating that the brightness changed by less than a threshold value may be determined.
- the number of events indicating that the brightness changed by more than a first threshold value and less than a second threshold value may be determined.
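- A minimal sketch of such a counting step is given below; representing each change in brightness by a signed magnitude, and the concrete threshold values, are assumptions made only for illustration:

```python
# Sketch of determining the number of brightness changes that satisfy a quality criterion.
# Representing a change as a signed magnitude is an assumption for illustration.
from typing import Iterable, Optional

def count_changes(changes: Iterable[float],
                  min_change: float,
                  max_change: Optional[float] = None) -> int:
    """Count changes whose magnitude exceeds min_change and, if max_change is given,
    stays below max_change (the combined-threshold variant of the quality criterion)."""
    count = 0
    for delta in changes:
        magnitude = abs(delta)
        if magnitude > min_change and (max_change is None or magnitude < max_change):
            count += 1
    return count

# Example: three of the five changes exceed 0.1 while staying below 0.5.
print(count_changes([0.05, 0.2, -0.3, 0.6, 0.15], min_change=0.1, max_change=0.5))  # -> 3
```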
- the processing circuitry 140 may be configured to select a set of possible values for the characteristic of the surface based on the other characteristic of the surface, which is indicated by the data 102. Each value of the selected set of possible values for the characteristic of the surface is associated (linked) to a certain (possible) number of detected changes in brightness.
- For example, in case the characteristic of the surface is the orientation of the object 101's surface and the other characteristic of the surface is the reflectance of the object 101's surface, different sets of possible orientations of the object 101's surface may be provided for different reflectance values.
- the one set of possible values that corresponds (is associated) to the reflectance indicated by the data 102 is selected from the different sets of possible orientations of the object 101's surface.
- the selected set of possible orientations of the object 101’s surface comprises different possible orientations (e.g. different possible surface normal) of (e.g. a point or an area on) the object 101’s surface each associated to a certain (possible) number of detected changes in brightness.
- Similarly, in case the characteristic of the surface is the reflectance of the object 101's surface and the other characteristic of the surface is the orientation of (e.g. one or more point or one or more area on) the object 101's surface, different sets of possible values for the reflectance of the object 101's surface may be provided for different orientations (e.g. different surface normals).
- the one set of possible values that corresponds (is associated) to the orientation indicated by the data 102 is selected from the different sets of possible values for the reflectance of the object 101's surface.
- the selected set of possible values for the reflectance of the object 101’s surface comprises different possible values for the reflectance of the object 101’s surface each associated (linked) to a certain (possible) number of detected changes in brightness.
- the processing circuitry 140 may be configured to select, based on the determined number of changes in brightness (i.e. the number of detected events that satisfy the quality criterion), one possible value from the selected set of possible values as the characteristic of the surface.
- each value of the selected set of possible values for the characteristic of the surface is associated (linked) to a certain number of detected changes in brightness. Accordingly, the one possible value for the characteristic of the surface that is associated (linked) to the determined number of changes in brightness (events) is selected as the characteristic of the surface from the selected set of possible values.
- In case the characteristic of the surface is the orientation of the object 101's surface and the other characteristic of the surface is the reflectance of the object 101's surface, the one orientation that is associated to the determined number of changes in brightness (events) is selected from the selected set of possible orientations of the object 101's surface.
- Likewise, in case the characteristic of the surface is the reflectance of the object 101's surface and the other characteristic of the surface is the orientation of the object 101's surface, the one possible value for the reflectance of the object 101's surface that is associated to the determined number of changes in brightness (events) is selected from the selected set of possible values for the reflectance of the object 101's surface.
- For further illustration, assume that the characteristic of the surface is the reflectance of the object 101's surface and the other characteristic of the surface is the orientation of the object 101's surface.
- In other words, the orientation of the object 101's surface (e.g. its surface normal) is known and the reflectance of the object 101's surface is to be determined.
- a set of possible values for the reflectance of the object 101's surface is selected that is associated to the specific orientation of the object 101's surface.
- the object 101 is, e.g., rotated by 30 degrees by the positioning system 130 to cause brightness changes that can be measured by the event-based vision sensor 120.
- the number of events indicating that the brightness changed by more than a threshold value A is determined from the output data 121 of the event-based vision sensor 120 for the measurement.
- If the determined number of events is a first value B, a first possible value C is selected from the selected set of possible values for the reflectance of the object 101's surface.
- the possible value C is associated to the number B of detected events that satisfy the quality criterion.
- If the determined number of events is a different second value D, a different second possible value E is selected from the selected set of possible values for the reflectance of the object 101's surface.
- the possible value E is associated to the number D of detected events that satisfy the quality criterion.
- In a second example, assume that the characteristic of the surface is the orientation of the object 101's surface (e.g. its surface normal) and the other characteristic of the surface is the reflectance of the object 101's surface.
- In other words, the reflectance of the object 101's surface is known and the orientation of the object 101's surface is to be determined.
- a set of possible orientations of the object 101's surface is selected that is associated to the specific reflectance of the object 101's surface.
- the object 101 is rotated by, e.g., 30 degrees by the positioning system 130 to cause brightness changes that can be measured by the event-based vision sensor 120.
- the number of events indicating that the brightness changed by more than a threshold value K is determined from the output data 121 of the event-based vision sensor 120 for the measurement.
- If the determined number of events is a first value L, a first possible orientation M is selected from the selected set of possible orientations of the object 101's surface.
- the possible orientation M is associated to the number L of detected events that satisfy the quality criterion.
- If the number of events indicating that the brightness changed by more than the threshold value K is a second value N (different from the first value L), a different second possible orientation O is selected from the selected set of possible orientations of the object 101's surface.
- the possible orientation O is associated to the number N of detected events that satisfy the quality criterion.
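- A minimal sketch of this selection logic is shown below; the table contents and the nearest-count matching rule are illustrative assumptions, since the disclosure leaves the concrete association between event counts and characteristic values open (in practice it could come from calibration measurements or a model):

```python
# Sketch of selecting a surface characteristic from a pre-established set of possible
# values, keyed by the other (known) characteristic. All numbers are made up for
# illustration.
from typing import Dict

# For each known orientation (labelled here by its surface-normal direction), a set of
# possible reflectance values, each associated with an expected number of events.
POSSIBLE_REFLECTANCES: Dict[str, Dict[int, float]] = {
    "normal_0deg":  {120: 0.2, 260: 0.5, 410: 0.8},
    "normal_45deg": {90: 0.2, 200: 0.5, 330: 0.8},
}

def select_reflectance(known_orientation: str, measured_event_count: int) -> float:
    """Pick the possible reflectance whose associated event count is closest to the measured one."""
    candidates = POSSIBLE_REFLECTANCES[known_orientation]
    closest_count = min(candidates, key=lambda c: abs(c - measured_event_count))
    return candidates[closest_count]

# Example: 255 events detected for a surface with a known 0-degree normal -> reflectance 0.5.
print(select_reflectance("normal_0deg", 255))
```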
- the processing circuitry 140 may, e.g., execute an algorithm for determining the characteristic of the object 101 ’s surface.
- the measured reflectance changes depend on the angle of reflection (which is determined by the surface normal at the illuminated point of the object 101 ’s surface) and material reflection properties (i.e. reflectance).
- the algorithm may allow the object reflectance or the shape of the object (which is defined by the surface normal at each point of the object's surface) to be inferred if either piece of complementary information is known (i.e. reflectance from shape/angle, or shape/angle from reflectance).
- the system 100 may allow, e.g., the surface normal or material properties such as the reflectance to be extracted, knowing one or the other, at higher speed compared to conventional goniometer techniques.
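- The following toy calculation illustrates why the event count produced by a known rotation carries information about both the reflectance and the surface normal; the Lambertian shading model and the fixed brightness threshold are assumptions made only for this sketch and are not mandated by the disclosure:

```python
# Toy model: a surface with a given reflectance and normal is rotated about the z axis,
# and an "event" is counted whenever the reflected brightness has changed by more than a
# threshold. The Lambertian shading and the threshold value are illustrative assumptions.
import numpy as np

def simulated_event_count(reflectance, surface_normal, light_dir, sweep_deg,
                          brightness_threshold=0.01):
    normal = np.asarray(surface_normal, float)
    normal /= np.linalg.norm(normal)
    light = np.asarray(light_dir, float)
    light /= np.linalg.norm(light)
    events, last = 0, None
    for angle in np.deg2rad(sweep_deg):
        c, s = np.cos(angle), np.sin(angle)
        rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])        # rotation about z
        brightness = reflectance * max(float((rz @ normal) @ light), 0.0)  # Lambertian shading
        if last is None:
            last = brightness
            continue
        # Count one event per threshold-sized change in brightness since the last event.
        while abs(brightness - last) > brightness_threshold:
            events += 1
            last += np.sign(brightness - last) * brightness_threshold
    return events

sweep = np.linspace(0.0, 30.0, 300)  # e.g. a 30-degree rotation by the positioning system
# Same rotation and orientation, different reflectance -> different event counts:
print(simulated_event_count(0.2, [1, 0, 1], [1, 1, 1], sweep))
print(simulated_event_count(0.9, [1, 0, 1], [1, 1, 1], sweep))
```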
- the processing circuitry 140 may, e.g., be configured to determine the characteristic of the surface using a static rule-based model. The static rule-based model is based on a fixed (i.e. static) set of rules that specifies the mathematical model for mapping the detected number of events to the characteristic of the object 101's surface.
- the set of rules is coded by one or more human beings.
- the processing circuitry 140 may be configured to determine the characteristic of the object 101’s surface using a trained machine-learning model.
- the machine-learning model is a data structure and/or set of rules representing a statistical model that the processing circuitry 140 uses to perform the above tasks without using explicit instructions, instead relying on models and inference.
- the data structure and/or set of rules represents learned knowledge (e.g. based on training performed by a machine-learning algorithm). For example, in machine-learning, instead of a rule-based transformation of data, a transformation of data may be used, that is inferred from an analysis of historical and/or training data.
- the event stream output by the event-based vision sensor 120 is analyzed using the machine-learning model (i.e. a data structure and/or set of rules representing the model).
- the machine-learning model is trained by a machine-learning algorithm.
- the term "machine-learning algorithm" denotes a set of instructions that are used to create, train or use a machine-learning model.
- the machine-learning model may be trained using training and/or historical event stream data as input and training content information (e.g. labels indicating the characteristic of the object 101’s surface) as output.
- the machine-learning model "learns" to recognize the content of the event stream, so that the content of the event stream that is not included in the training data can be recognized using the machine-learning model.
- By training the machine-learning model using training data (e.g. an event stream or a given number of changes in brightness for a specific rotation) and a desired output, the machine-learning model "learns" a transformation between the training data and the output, which can be used to provide an output based on non-training data provided to the machine-learning model.
- the machine-learning model may be trained using training input data (e.g. an event stream or a given number of changes in brightness for a specific rotation).
- the machine-learning model may be trained using a training method called "supervised learning".
- In supervised learning, the machine-learning model is trained using a plurality of training samples, wherein each sample may comprise a plurality of input data values and a plurality of desired output values, i.e., each training sample is associated with a desired output value.
- the machine-learning model "learns" which output value to provide based on an input sample that is similar to the samples provided during the training.
- a training sample may comprise a predetermined event stream or a given number of changes in brightness for a specific rotation as input data and one or more labels as desired output data.
- the labels indicate the characteristic of the object 101’s surface (e.g. a certain reflectance or a certain orientation).
- semi-supervised learning may be used.
- In semi-supervised learning, some of the training samples lack a corresponding desired output value.
- Supervised learning may be based on a supervised learning algorithm (e.g. a classification algorithm or a similarity learning algorithm).
- Classification algorithms may be used as the desired outputs of the trained machine-learning model are restricted to a limited set of values (categorical variables), i.e., the input is classified to one of the limited set of values (reflectance of the surface, orientation of the surface).
- Similarity learning algorithms are similar to classification algorithms but are based on learning from examples using a similarity function that measures how similar or related two objects are.
- unsupervised learning may be used to train the machine-learning model.
- In unsupervised learning, (only) input data are supplied and an unsupervised learning algorithm is used to find structure in the input data, such as training and/or historical data encoding a known event stream (e.g. by grouping or clustering the input data, finding commonalities in the data).
- Clustering is the assignment of input data comprising a plurality of input values into subsets (clusters) so that input values within the same cluster are similar according to one or more (pre-defined) similarity criteria, while being dissimilar to input values that are included in other clusters.
- unsupervised learning may be used to train the machine-learning model to detect the reflectance or the orientation of the object 101's surface.
- the input data for the unsupervised learning may be training or historical data (e.g. a previously measured event stream for a specific rotation or a given number of changes in brightness for a specific rotation).
- Reinforcement learning is a third group of machine-learning algorithms.
- reinforcement learning may be used to train the machine-learning model.
- In reinforcement learning, one or more software actors (called "software agents") are trained to take actions in an environment. Based on the taken actions, a reward is calculated.
- Reinforcement learning is based on training the one or more software agents to choose the actions such that the cumulative reward is increased, leading to software agents that become better at the task they are given (as evidenced by increasing rewards). For example, reinforcement learning may be used to train the machine-learning model to determine the reflectance or the orientation of the object 101’s surface.
- Feature learning may be used.
- the machine-learning model may at least partially be trained using feature learning, and/or the machine-learning algorithm may comprise a feature learning component.
- Feature learning algorithms which may be called representation learning algorithms, may preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions.
- Feature learning may be based on principal components analysis or cluster analysis, for example.
- the machine-learning algorithm may use a decision tree as a predictive model.
- the machine-learning model may be based on a decision tree.
- In a decision tree, observations about an item (e.g. an event in the event stream) may be represented by the branches of the decision tree, and an output value corresponding to the item may be represented by the leaves of the decision tree.
- Decision trees support discrete values and continuous values as output values. If discrete values are used, the decision tree may be denoted a classification tree; if continuous values are used, the decision tree may be denoted a regression tree.
- the machine-learning algorithm may use a decision tree for determining the reflectance or the orientation of the object 101’s surface.
- Association rules are a further technique that may be used in machine-learning algorithms.
- the machine-learning model may be based on one or more association rules.
- Association rules are created by identifying relationships between variables in large amounts of data.
- the machine-learning algorithm may identify and/or utilize one or more relational rules that represent the knowledge that is derived from the data.
- the rules may, e.g., be used to store, manipulate or apply the knowledge.
- the machine-learning model may be an Artificial Neural Network (ANN).
- ANNs are systems that are inspired by biological neural networks, such as can be found in a retina or a brain.
- ANNs comprise a plurality of interconnected nodes and a plurality of connections, so-called edges, between the nodes.
- the nodes may comprise input nodes that receive input values (e.g. an event stream, an event or a number of changes in brightness), hidden nodes that are (only) connected to other nodes, and output nodes that provide output values (e.g. the characteristic of the object 101's surface).
- Each node may represent an artificial neuron.
- Each edge may transmit information from one node to another.
- the output of a node may be defined as a (non-linear) function of its inputs (e.g. of the sum of its inputs).
- the inputs of a node may be used in the function based on a "weight" of the edge or of the node that provides the input.
- the weight of nodes and/or of edges may be adjusted in the learning process.
- the training of an ANN may comprise adjusting the weights of the nodes and/or edges of the ANN, i.e., to achieve a desired output for a given input.
- the machine-learning model may be a support vector machine, a random forest model or a gradient boosting model.
- Support vector machines (i.e. support vector networks) are supervised learning models with associated learning algorithms that may be used to analyze data (e.g. in classification or regression analysis).
- Support vector machines may be trained by providing an input with a plurality of training input values (e.g. events of different event streams or different numbers of changes in brightness for a specific rotation) that belong to one of two categories (e.g. two different reflectance values or two different orientations of the object 101's surface).
- the support vector machine may be trained to assign a new input value to one of the two categories.
- the machine-learning model may be a Bayesian network, which is a probabilistic directed acyclic graphical model.
- a Bayesian network may represent a set of random variables and their conditional dependencies using a directed acyclic graph.
- the machine-learning model may be based on a genetic algorithm, which is a search algorithm and heuristic technique that mimics the process of natural selection.
- the machine-learning model may be a combination of the above examples.
- a frame stream indicating (image) frames captured during the measurement may additionally be used.
- the frame stream may, e.g., allow the inference of the object 101's shape from the output data 121 of the event-based vision sensor 120 to be improved (fine-tuned).
- techniques such as "Structure from Motion" may be used to improve (fine-tune) the inference of the object 101's shape from the event stream provided by the event-based vision sensor 120 using the frame stream.
- the event-based vision sensor 120 may be a hybrid sensor and additionally capture the object 101 frame-based such that the output data 121 of the event-based vision sensor further encode the frame stream indicating the frames captured by the event-based vision sensor 120 during the measurement.
- the event-based vision sensor 120 may comprise a first plurality of pixels for capturing the changes in brightness and a second plurality of pixels for capturing the (image) frames of the object 101 ’s surface.
- the event-based vision sensor 120 may comprise pixels that are able to concurrently capture changes in brightness and capture the frames of the object 101's surface.
- the event-based vision sensor 120 may be a Dynamic and Active Pixel Vision Sensor (DAVIS) that is able to concurrently capture events and frames.
- the system 100 may additionally comprise a separate frame-based vision sensor 150 that is coupled to the processing circuitry 140.
- the frame-based vision sensor 150 is configured to capture the object 101's surface during the measurement. Accordingly, output data 151 of the frame-based vision sensor 150 encode a frame stream indicating the frames captured by the frame-based vision sensor 150 during the measurement.
- the processing circuitry 140 receives the frame stream either from the event-based vision sensor 120 or the frame-based vision sensor 150 and processes it accordingly.
- the above description was given for a single measurement. However, it is to be noted that multiple measurements may be performed. In particular, various measurements may be performed for different tilt angles of the object 101’s surface, the event-based vision sensor 120 or the light source 110 with respect to the rotation axis z.
- the characteristic of the object 101's surface may be sensitive to the tilt angle such that performing measurements with different tilt angles may allow the object 101's surface and, hence, the object 101 to be characterized more completely.
- the event-based vision sensor 120 may be configured to capture the object 101's surface during another (e.g. a second) measurement. Accordingly, the output data 121 of the event-based vision sensor 120 may encode another event stream indicating one or more change in brightness measured by the event-based vision sensor 120 during the other measurement.
- the light source 110 may be configured to illuminate the object 101's surface during the other measurement.
- the positioning system 130 may be configured to change a tilt angle of one of the object 101’s surface (i.e. the object 101), the event-based vision sensor 120 and the light source 110 with respect to the rotation axis z prior to the other measurement such that different tilt angles are used for the measurement and the other measurement.
- For example, the rotatable plate used as the positioning system 130 in the example of Fig. 1 may be tilted prior to the other measurement such that different tilt angles of the object 101's surface (i.e. of the object 101) are used for the measurement and the other measurement.
- the positioning system 130 may be configured to relatively rotate at least two of the object 101's surface, the event-based vision sensor 120 and the light source 110 with respect to each other about the rotation axis z during the other measurement.
- the processing circuitry 140 may be further configured to determine the characteristic of the object 101's surface for the changed tilt angle (e.g. reflectance or orientation of the surface) based on the other event stream for the other measurement and based on the data 102 indicating the other characteristic of the surface (e.g. orientation or reflectance of the surface).
- Fig. 2 illustrates another system 200 for determining a characteristic of a surface of an object 201, which is a vehicle in the example of Fig. 2.
- the system 200 differs from the above described system 100 with respect to the implementation of the positioning system. Other than that, the systems are identical.
- the positioning system 230 comprises an arm 231 and a joint 232.
- the event-based vision sensor 120 and the light source 110 are both mounted (fixed) to the arm 231 of the positioning system 230.
- the arm 231 is held by the joint 232 such that the arm 231 and, hence, the event-based vision sensor 120 and the light source 110 can rotate (e.g. swing) relative to the vehicle 201.
- the rotation axis is not illustrated in Fig. 2 for reasons of simplicity. However, it will be understood by those skilled in the art that the rotation axis passes through the joint 232 in the example of Fig. 2.
- the positioning system 230 may, e.g., be implemented by means of a robotic arm.
- the light source 110 may be mounted on an entity different from the arm 231 (e.g. movable or immovable with respect to the vehicle 201). Accordingly, the positioning system 230 may move the event-based vision sensor 120 and the light source 110 around the vehicle 201 and allow to inspect the surface of the vehicle 201.
- the processing circuitry 140 is coupled to the event-based vision sensor 120 and determines a characteristic of the vehicle 201’s surface according to the above described principles. For example, a reflectance may be determined for the vehicle 201’s surface or orientations of the individual parts of the vehicle 201’s surface may be determined. By comparing the determined characteristics of the vehicle 201’s surface to reference characteristics, potential imperfections of the vehicle 201 may be determined.
- the system 200 may allow the characteristic of the vehicle 201's surface to be determined with high speed such that the system 200 may allow material/surface inspection with high throughput.
- the vehicle 201 is merely an example for an object that can be inspected. In general, the system 200 may be used for any mass produced object.
- Fig. 3 illustrates a flowchart of a method 300 for determining a characteristic of a surface of an object.
- the method 300 comprises illuminating 302 the surface by a light source during a measurement. Additionally, the method 300 comprises capturing 304 the surface by an event-based vision sensor during the measurement. Output data of the event-based vision sensor encode an event stream indicating one or more change in brightness measured by the event-based vision sensor during the measurement.
- the method 300 further comprises relatively rotating 306 at least two of the surface, the event-based vision sensor and the light source with respect to each other about a rotation axis during the measurement.
- the method 300 comprises determining 308 the characteristic of the surface based on the event stream for the measurement and based on data indicating another characteristic of the surface.
- the method 300 may allow to determine the characteristic of the object’s surface with higher speed due to the high speed of the event-based vision sensor. Accordingly, the method 300 may allow shorter processing times than in methods using conventional goniometer-based systems. Additionally, the requirements for the background of the object may be eased. Further advantages of the method 300 are described above in connection with the systems 100 and 200. More details and aspects of the method 300 are explained in connection with the proposed technique or one or more examples described above (e.g. Fig. 1 and Fig. 2). The method 300 may comprise one or more additional optional features corresponding to one or more aspects of the proposed technique or one or more examples described above.
- the method 300 may optionally further comprise changing 310 a tilt angle of one of the surface, the event-based vision sensor and the light source with respect to the rotation axis prior to performing another measurement, in which the method steps 302 to 308 are repeated for the changed tilt angle to determine the characteristic of the surface for the changed tilt angle.
- In the following, two methods for training a machine-learning model are described. Both methods are based on backpropagation and may be used for training a machine-learning model such that the respective model is able to determine a characteristic of a surface according to the above described principles.
- Fig. 4 illustrates a flowchart of a method 400 for training a machine-learning model.
- the machine-learning model is for determining an orientation of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement.
- the method 400 will be described in the following further with reference to Fig. 5 which illustrates the data flow 500 in the method 400.
- the method 400 comprises inputting 402 data indicating a reflectance of the surface to the machine-learning model.
- As illustrated in Fig. 5, the machine-learning model 510 (e.g. a neural network) receives data indicating a known or predetermined reflectance of the surface.
- the method 400 comprises inputting 404 training data to the machine-learning model.
- the training data indicate a predetermined number of changes in brightness for a predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis.
- the light source is used for illuminating the surface during the measurement.
- the training data may be obtained in a reference measurement (e.g. using the system 100) or be obtained by simulation.
- the number of changes in brightness indicated by the training data may be the number of events in the event stream output by the event-based vision sensor for the reference measurement.
- the number of changes in brightness may be the number of events in the event stream output by the event-based vision sensor for the reference measurement which indicate that the brightness changed by more than a threshold value.
- a high-fidelity simulator may be used for obtaining the data by way of simulation.
- the machine-learning model 510 receives data 502 indicating the predetermined number of changes in brightness for the predetermined relative rotation.
- the data 502 may, e.g., be given on a per-pixel basis, similar to the output of the event-based vision sensor during the measurement, which indicates the changes in brightness measured by its individual pixels.
- the method 400 comprises determining 406 a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a digital model (e.g. a CAD model) of the object. Further, the method 400 comprises updating 408 the weights of the machine-learning model based on the gradient of the loss function.
- an output 511 of the machine-learning model 510, i.e. the orientation of the object's surface determined by the machine-learning model 510, is input to a loss function and optimization network 520.
- the determined (estimated) orientation of the object’s surface may be given as a surface normal.
- the output 511 may be given on a per-pixel basis.
- data 503 indicating the digital model of the object are received by the loss function and optimization network 520.
- the loss function and optimization network 520 determines the gradient of the loss function for the weights of the machine-learning model 510 and updates the weights of the machine-learning model 510 based on the gradient of the loss function.
- the loss function (also known as cost function or error function) is a function that maps an event or values of one or more variables such as the output 511 and the data 503 onto a real number intuitively representing some "cost” associated with the event.
- An optimization problem seeks to minimize the loss function.
- the "cost" is the discrepancy between the orientation of the object's surface output by the machine-learning model 510 and a predetermined orientation of the object's surface given by the data 503.
- the method 400 tries to make the orientation of the object's surface output by the machine-learning model 510 equal to the predetermined orientation of the object's surface given by the data 503.
- the updated weights 521 of the machine-learning model 510 are fed back to the machine-learning model 510 in order to update the machine-learning model 510.
- the weights of the machine-learning model 510 are adjusted by the loss function and optimization network 520.
- the method steps 404 to 408 may be repeated one or more times in order to iteratively train the machine-learning model 510 and minimize the loss function.
- the method 400 may allow to obtain a trained machine-learning model for determining an orientation of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement.
- the above described processing circuitry 140 may use a machine-learning model trained according to the method 400 for determining an orientation of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement.
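Purely as a non-limiting sketch, a training loop of this kind could be written in Python with PyTorch as follows; the per-pixel input layout, the network shape, the mean-squared-error loss and the Adam optimizer are assumptions made for illustration and are not part of the disclosure. The method 600 described below follows the same pattern, with a target reflectance taking the place of the target surface normals.

```python
import torch
import torch.nn as nn

# Placeholder training data (502): one event count per pixel for the
# predetermined rotation, plus the known reflectance input to the model.
event_counts = torch.rand(1024, 1)
reflectance = torch.full((1024, 1), 0.4)
# Placeholder targets (503): per-pixel surface normals derived from a CAD model.
target_normals = torch.nn.functional.normalize(torch.rand(1024, 3), dim=1)

model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))  # model 510
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # discrepancy between estimated and predetermined orientation

for step in range(100):                       # steps 404 to 408 repeated iteratively
    optimizer.zero_grad()
    inputs = torch.cat([event_counts, reflectance], dim=1)
    estimated_normals = model(inputs)         # output 511 (per-pixel orientation)
    loss = loss_fn(estimated_normals, target_normals)
    loss.backward()                           # gradient of the loss w.r.t. the weights (406)
    optimizer.step()                          # weight update based on the gradient (408)
```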
- Fig. 6 illustrates a flowchart of another method 600 for training a machine-learning model.
- the machine-learning model is for determining a reflectance of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement.
- the method 600 will be described in the following further with reference to Fig. 7 which illustrates the data flow 700 in the method 600.
- the method 600 comprises inputting 602 data indicating a digital model of the object to the machine-learning model.
- the machine-learning model 710 (e.g. a neural network) receives data 701 indicating a known or predetermined digital model of the object (e.g. a CAD surface model)
- the data indicating the digital model of the object allow to infer an orientation of the object for the training of the machine-learning model - similar to what is described above.
- the method 600 comprises inputting 604 training data to the machine-learning model.
- the training data indicate a predetermined number of changes in brightness for a predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis.
- the light source is used for illuminating the surface during the measurement.
- the training data may be obtained in a reference measurement (e.g. using the system 100) or be obtained by simulation.
- the number of changes in brightness indicated by the training data may be the number of events in the event stream output by the event-based vision sensor for the reference measurement.
- the number of changes in brightness may be the number of events in the event stream output by the event-based vision sensor for the reference measurement which indicate that the brightness changed by more than a threshold value.
- a high-fidelity simulator may be used for obtaining the data by way of simulation.
- the machine-learning model 710 receives data 702 indicating the predetermined number of changes in brightness for the predetermined relative rotation.
- the data 702 may, e.g., be given on a per-pixel basis, similar to the output of the event-based vision sensor during the measurement, which indicates the changes in brightness measured by its individual pixels.
- the method 600 comprises determining 606 a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a target (desired, known) reflectance for the training data. Further, the method 600 comprises updating 608 the weights of the machine-learning model based on the gradient of the loss function.
- an output 711 of the machine-learning model 710, i.e., the reflectance value for the object’s surface determined by the machine-learning model 710, is input to a loss function and optimization network 720. Also the output 711 may be given on a per-pixel basis. Further, data 703 indicating the target reflectance of the object’s surface are received by the loss function and optimization network 720. The target reflectance is a predetermined (given) reflectance of the object for the training of the machine-learning model.
- the loss function and optimization network 720 determines the gradient of the loss function for the weights of the machine-learning model 710 and updates the weights of the machine-learning model 710 based on the gradient of the loss function.
- in the example of Fig. 7, the “cost” is the discrepancy between the reflectance of the object’s surface output by the machine-learning model 710 and the target reflectance of the object’s surface given by the data 703.
- the method 600 tries to match the reflectance of the object’s surface output by the machine-learning model 710 to the target reflectance of the object’s surface given by the data 703.
- the updated weights 721 of the machine-learning model 710 are fed back to the machine-learning model 710 in order to update the machine-learning model 710.
- the weights of the machine-learning model 710 are adjusted by the loss function and optimization network 720.
- the method steps 604 to 608 may be repeated one or more times in order to iteratively train the machine-learning model 710 and minimize the loss function.
- the above described processing circuitry 140 may use a machine-learning model trained according to the method 600 for determining a reflectance of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement.
- Examples may further be or relate to a (computer) program including a program code to execute one or more of the above methods when the program is executed on a computer, processor or other programmable hardware component.
- steps, operations or processes of different ones of the methods described above may also be executed by programmed computers, processors or other programmable hardware components.
- Examples may also cover program storage devices, such as digital data storage media, which are machine-, processor- or computer-readable and encode and/or contain machine-executable, processor-executable or computer-executable programs and instructions.
- Program storage devices may include or be digital storage devices, magnetic storage media such as magnetic disks and magnetic tapes, hard disk drives, or optically readable digital data storage media, for example.
- Other examples may also include computers, processors, control units, (field) programmable logic arrays ((F)PLAs), FPGAs, graphics processor units (GPUs), ASICs, integrated circuits (ICs) or system-on-a-chip (SoC) systems programmed to execute the steps of the methods described above.
- a system for determining a characteristic of a surface of an object comprising: a light source configured to illuminate the surface during a measurement; an event-based vision sensor configured to capture the surface during the measurement, wherein output data of the event-based vision sensor encode an event stream indicating one or more change in brightness measured by the event-based vision sensor during the measurement; a positioning system configured to relatively rotate at least two of the surface, the event-based vision sensor and the light source with respect to each other about a rotation axis during the measurement; and processing circuitry coupled to the event-based vision sensor and configured to determine the characteristic of the surface based on the event stream for the measurement and based on data indicating another characteristic of the surface.
- processing circuitry is further configured to: receive data indicating a digital model of the object; and determine the orientation of the surface based on the digital model of the object.
- processing circuitry is configured to determine the characteristic of the surface by: determining a number of changes in brightness that satisfy a quality criterion from the event stream for the measurement; selecting a set of possible values for the characteristic of the surface based on the other characteristic of the surface; and selecting, based on the determined number of changes in brightness, one possible value from the selected set of possible values as the characteristic of the surface.
- the event-based vision sensor is configured to capture the surface during another measurement, wherein the output data of the event-based vision sensor encode another event stream indicating one or more change in brightness measured by the event-based vision sensor during the other measurement;
- the light source is configured to illuminate the surface during the other measurement; the positioning system is configured to: change a tilt angle of one of the surface, the event-based vision sensor and the light source with respect to the rotation axis prior to the other measurement; and relatively rotate at least two of the surface, the event-based vision sensor and the light source with respect to each other about the rotation axis during the other measurement;
- the processing circuitry is configured to determine the characteristic of the surface for the changed tilt angle based on the other event stream for the other measurement and based on the data indicating the other characteristic of the surface.
- a method for determining a characteristic of a surface of an object comprising: illuminating the surface by a light source during a measurement; capturing the surface by an event-based vision sensor during the measurement, wherein output data of the event-based vision sensor encode an event stream indicating one or more change in brightness measured by the event-based vision sensor during the measurement; relatively rotating at least two of the surface, the event-based vision sensor and the light source with respect to each other about a rotation axis during the measurement; and determining the characteristic of the surface based on the event stream for the measurement and based on data indicating another characteristic of the surface.
- a method for training a machine-learning model wherein the machine-learning model is for determining a reflectance of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement, the method comprising: inputting, to the machine-learning model, data indicating a digital model of the object; inputting, to the machine-learning model, training data indicating a predetermined number of changes in brightness for a predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis, the light source being used for illuminating the surface during the measurement; determining a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a target reflectance for the training data; and updating the weights of the machine-learning model based on the gradient of the loss function.
- a method for training a machine-learning model wherein the machine-learning model is for determining an orientation of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement, the method comprising: inputting, to the machine-learning model, data indicating a reflectance of the surface; inputting, to the machine-learning model, training data indicating a predetermined number of changes in brightness for a predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis, the light source being used for illuminating the surface during the measurement; determining a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a digital model of the object; and updating the weights of the machine-learning model based on the gradient of the loss function.
- a programmable hardware comprising circuitry configured to perform the method according to (17) or (18).
- a non-transitory machine-readable medium having stored thereon a program having a program code for performing the method according to (17) or (18), when the program is executed on a processor or a programmable hardware.
- (21) A program having a program code for performing the method according to (17) or (18), when the program is executed on a processor or a programmable hardware.
- aspects described in relation to a device or system should also be understood as a description of the corresponding method.
- a block, device or functional aspect of the device or system may correspond to a feature, such as a method step, of the corresponding method.
- aspects described in relation to a method shall also be understood as a description of a corresponding block, a corresponding element, a property or a functional feature of a corresponding device or a corresponding system.
Abstract
A system for determining a characteristic of a surface of an object is provided. The system includes a light source configured to illuminate the surface during a measurement. Additionally, the system includes an event-based vision sensor configured to capture the surface during the measurement. Output data of the event-based vision sensor encode an event stream indicating one or more change in brightness measured by the event-based vision sensor during the measurement. The system further includes a positioning system configured to relatively rotate at least two of the surface, the event-based vision sensor and the light source with respect to each other about a rotation axis during the measurement. In addition, the system includes processing circuitry coupled to the event-based vision sensor and configured to determine the characteristic of the surface based on the event stream for the measurement and based on data indicating another characteristic of the surface.
Description
SYSTEM AND METHOD FOR DETERMINING A CHARACTERISTIC OF A SURFACE OF AN OBJECT, METHODS FOR TRAINING A MACHINE-LEARNING
MODEL AND PROGRAMMABLE HARDWARE
Field
The present disclosure relates to surface characterization. In particular, examples relate to a system and a method for determining a characteristic of a surface of an object, methods for training a machine-learning model and a programmable hardware.
Background
Goniometer-based systems are conventionally used for determining various characteristics of object or material surfaces. However, material/object surface characterization through standard goniometer technology is very slow and inefficient.
Hence, there may be a demand for improved surface characterization.
Summary
This demand is met by apparatuses and methods in accordance with the independent claims. Advantageous embodiments are addressed by the dependent claims.
According to a first aspect, the present disclosure provides a system for determining a characteristic of a surface of an object. The system comprises a light source configured to illuminate the surface during a measurement. Additionally, the system comprises an event-based vision sensor configured to capture the surface during the measurement. Output data of the event-based vision sensor encode an event stream indicating one or more change in brightness measured by the event-based vision sensor during the measurement. The system further comprises a positioning system configured to relatively rotate at least two of the surface, the event-based vision sensor and the light source with respect to each other about a rotation axis during the measurement. In addition, the system comprises processing circuitry coupled to the event-based vision sensor and configured to determine the characteristic of the surface based on the
event stream for the measurement and based on data indicating another characteristic of the surface.
According to a second aspect, the present disclosure provides a method for determining a characteristic of a surface of an object. The method comprises illuminating the surface by a light source during a measurement. Additionally, the method comprises capturing the surface by an event-based vision sensor during the measurement. Output data of the event-based vision sensor encode an event stream indicating one or more change in brightness measured by the event-based vision sensor during the measurement. The method further comprises relatively rotating at least two of the surface, the event-based vision sensor and the light source with respect to each other about a rotation axis during the measurement. In addition, the method comprises determining the characteristic of the surface based on the event stream for the measurement and based on data indicating another characteristic of the surface.
According to a third aspect, the present disclosure provides a method for training a machine-learning model. The machine-learning model is for determining a reflectance of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement. The method comprises inputting data indicating a digital model of the object to the machine-learning model. Additionally, the method comprises inputting training data indicating a predetermined number of changes in brightness for a predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis. The light source is used for illuminating the surface during the measurement. The method further comprises determining a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a target reflectance for the training data. In addition, the method comprises updating the weights of the machine-learning model based on the gradient of the loss function.
According to a fourth aspect, the present disclosure provides another method for training a machine-learning model. The machine-learning model is for determining an orientation of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement. The method comprises inputting data indicating a reflectance of the surface to the machine-learning model. Additionally, the method comprises inputting training data indicating a predetermined number of changes in brightness for a
predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis. The light source is used for illuminating the surface during the measurement. The method further comprises determining a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a digital model of the object. In addition, the method comprises updating the weights of the machine-learning model based on the gradient of the loss function.
According to a fifth aspect, the present disclosure provides a programmable hardware comprising circuitry configured to perform one of the above methods for training a machine-learning model.
According to a sixth aspect, the present disclosure provides a non-transitory machine-readable medium having stored thereon a program having a program code for performing one of the above methods for training a machine-learning model, when the program is executed on a processor or a programmable hardware.
Brief description of the Figures
Some examples of apparatuses and/or methods will be described in the following by way of example only, and with reference to the accompanying figures, in which
Fig. 1 illustrates an example of a system for determining a characteristic of a surface of an object;
Fig. 2 illustrates an exemplary characterization of a vehicle surface;
Fig. 3 illustrates a flowchart of an example of a method for determining a characteristic of a surface of an object;
Fig. 4 illustrates a flowchart of an example of a method for training a machine-learning model;
Fig. 5 illustrates an exemplary data flow in the method illustrated in Fig. 4;
Fig. 6 illustrates a flowchart of an example of another method for training a machine-learning model; and
Fig. 7 illustrates an exemplary data flow in the method illustrated in Fig. 6.
Detailed Description
Some examples are now described in more detail with reference to the enclosed figures. However, other possible examples are not limited to the features of these embodiments described in detail. Other examples may include modifications of the features as well as equivalents and alternatives to the features. Furthermore, the terminology used herein to describe certain examples should not be restrictive of further possible examples.
Throughout the description of the figures same or similar reference numerals refer to same or similar elements and/or features, which may be identical or implemented in a modified form while providing the same or a similar function. The thickness of lines, layers and/or areas in the figures may also be exaggerated for clarification.
When two elements A and B are combined using an “or”, this is to be understood as disclosing all possible combinations, i.e. only A, only B as well as A and B, unless expressly defined otherwise in the individual case. As an alternative wording for the same combinations, "at least one of A and B" or "A and/or B" may be used. This applies equivalently to combinations of more than two elements.
If a singular form, such as “a”, “an” and “the” is used and the use of only a single element is not defined as mandatory either explicitly or implicitly, further examples may also use several elements to implement the same function. If a function is described below as implemented using multiple elements, further examples may implement the same function using a single element or a single processing entity. It is further understood that the terms "include", "including", "comprise" and/or "comprising", when used, describe the presence of the specified features, integers, steps, operations, processes, elements, components and/or a group thereof, but do not exclude the presence or addition of one or more other features, integers, steps, operations, processes, elements, components and/or a group thereof.
Fig. 1 illustrates a system 100 for determining a characteristic of a surface of an object 101. The object 101 may be any physical object (body) and may be defined as a collection of matter within a defined contiguous boundary in three-dimensional space. The surface of the object 101 is the object 101’s exterior or upper boundary.
The system 100 comprises a light source 110. The light source 110 is configured to illuminate the surface of the object 101 during a measurement. This is exemplarily illustrated in Fig. 1 as the light source 110 emits light 111 toward the object 101. The light 111 may be of any suitable wavelength. For example, the light source 110 may be configured to illuminate the surface of the object 101 during the measurement with at least one of ultraviolet light (wavelength from approx. 100 nm to approx. 380 nm), visible light (wavelength from approx. 380 nm to approx. 780 nm) and infrared light (wavelength from approx. 780 nm to approx. 1 mm). According to examples, the wavelength of the light 111 may be adjustable in order to examine a wavelength dependency of the surface characteristics of the object 101. In addition to the known (e.g. predefined or adjusted) spectral distribution, the light 111 emitted by the light source 110 exhibits a known (e.g. predefined or adjusted) opening angle and a known (e.g. predefined or adjusted) brightness. The light source 110 may illuminate a (e.g. wide and) contiguous area of the object 101’s surface or one or more individual section (e.g. one or more point) of the object 101’s surface. For example, the light 111 may be emitted by the light source 110 such that a light pattern is projected onto the object 101’s surface.
The light source 110 may comprise various components such as one or more light emitter, electronic circuitry and optics (e.g. one or more lenses for adjusting a shape or the opening angle of the light 111, one or more monochromator for adjusting the wavelength of the light 111, one or more optical filter for adjusting the wavelength of the light 111, etc.). The one or more light emitter may, e.g., be Light-Emitting Diodes (LEDs) and/or laser diodes (e.g. one or more Vertical-Cavity Surface-Emitting Lasers, VCSELs). According to examples, a plurality of light emitters emitting light at different wavelengths may be provided and selectively activated to adjust the wavelength of the light 111 emitted by the light source 110. However, it is to be noted that the light source 110 may alternatively comprise more, fewer or other components than those exemplary components described above.
The light 111 emitted toward the object 101 by the light source 110 is reflected by the surface of the object 101. The reflection of the light 111 by the surface of the object 101 depends on
various characteristics of the surface. For example, the reflection of the light 111 depends on a reflectance of the surface or an orientation of the surface in the three-dimensional space.
For detecting the reflections of the light 111 by the surface of the object 101, the system 100 comprises an event-based vision sensor 120 such as a dynamic vision sensor (also known as event camera, neuromorphic camera or silicon retina) that responds to local changes in brightness. The event-based vision sensor 120 does not capture image frames using a shutter like a conventional image sensor does. Instead, the photo-sensitive sensor elements or pixels of the event-based vision sensor 120 operate independently and asynchronously, detecting changes in brightness as they occur, and staying silent otherwise. Similar to what is described above for the light source 110, the event-based vision sensor 120 may be sensitive to light of different wavelengths. For example, the event-based vision sensor 120 may be sensitive to at least one of ultraviolet light, visible light and infrared light. The detection (of an occurrence) of a change in brightness by the event-based vision sensor 120 is called an “event”. The events are output by the event-based vision sensor 120 as an event stream (i.e. a stream of events). Compared to frame-based image sensors, the event-based vision sensor 120 may provide a high temporal resolution and a high (wide) dynamic range, and may avoid under-/overexposure and motion blur.
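For illustration only, an event stream of this kind is commonly represented as a sequence of records carrying the pixel coordinates, a timestamp and the polarity of the brightness change; the concrete encoding below is an assumption made for readability and not the sensor's actual output format.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t_us: float     # timestamp in microseconds
    polarity: int   # +1: brightness increased, -1: brightness decreased

# A short, made-up excerpt of an event stream as it might be encoded in the output data 121.
event_stream = [
    Event(x=12, y=7, t_us=10.0, polarity=+1),
    Event(x=12, y=7, t_us=35.5, polarity=-1),
    Event(x=13, y=7, t_us=36.0, polarity=+1),
]
```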
The event-based vision sensor 120 is configured to capture the surface of the object 101 during the measurement. Depending on the surface characteristics of the object 101, the event-based vision sensor 120 detects one or more event indicating detected changes in brightness during the measurement. Accordingly, output data 121 of the event-based vision sensor 120 encode an event stream indicating the one or more change in brightness measured (detected) by the event-based vision sensor 120 during the measurement.
The system 100 additionally comprises a positioning system 130 configured to relatively rotate at least two of the surface of the object 101, the event-based vision sensor 120 and the light source 110 with respect to each other about a rotation axis z during the measurement. In the example of Fig. 1, the positioning system 130 is a rotatable plate on which the object 101 is placed. As indicated by the arrow 131, the object 101 rotates about the rotation axis z in case the rotatable plate is rotated. In other words, the positioning system 130 is configured to rotate the object 101 about the rotation axis z during the measurement. Accordingly, the surface of the object 101 is rotated with respect to the event-based vision sensor 120 and the light
source 110 about the rotation axis z during the measurement. However, the present disclosure is not limited thereto. For example, at least one of the event-based vision sensor 120 and the light source 110 may additionally be rotated about the rotation axis z during the measurement. In other examples, only the event-based vision sensor 120 and/or the light source 110 may rotate about the rotation axis z during the measurement, while the object 101 and, hence, the surface of the object 101 does not rotate about the rotation axis z during the measurement. For example, the event-based vision sensor 120 and/or the light source 110 may be mounted on one or more arm (e.g. a robot arm) that is/are rotatable about the rotation axis z. In other words, the positioning system may be configured to rotate at least one of the event-based vision sensor 120 and the light source 110 about the rotation axis z during the measurement. Another detailed example of the positioning system will be described later with respect to Fig. 2.
Due to the relative rotation of at least two of the surface of the object 101, the event-based vision sensor 120 and the light source 110 with respect to each other about the rotation axis z during the measurement, the surface of the object 101 is illuminated from different angles of a hemisphere around the object 101 by the light source 110 and/or the surface of the object
101 is captured from different angles of the hemisphere around the object 101 by the event-based vision sensor 120 during the measurement. The events, i.e., changes in brightness measured by the event-based vision sensor 120 during the measurement relate (correspond) to the reflection characteristics of the object 101’s surface and, hence, allow to characterize the surface of the object.
For determining the characteristic of the object 101’s surface, the system 100 further comprises processing circuitry 140. The processing circuitry 140 is coupled to the event-based vision sensor 120 and configured to receive the output data 121 of the event-based vision sensor 120. Additionally, the processing circuitry 140 is configured to receive further data
102 indicating another characteristic of the object 101’s surface. For example, the processing circuitry 140 may be a single dedicated processor, a single shared processor, or a plurality of individual processors, some of which or all of which may be shared, a digital signal processor (DSP) hardware, an application specific integrated circuit (ASIC), a neuromorphic processor or a field programmable gate array (FPGA). The processing circuitry 140 may optionally be coupled to, e.g., read only memory (ROM) for storing software, random access memory (RAM) and/or non-volatile memory. The processing circuitry 140 is configured to determine
the characteristic of the object 101’s surface based on the event stream for the measurement included in the output data 121 and based on the data 102 indicating the other characteristic of the object 101’s surface.
The data 102 indicating the other characteristic of the object 101’s surface may, e.g., be received from another entity such as memory coupled to the processing circuitry 140 or another processing circuitry external to the system 100. In other examples, the data 102 may be previously generated (e.g. be determined or be computed) by the processing circuitry 140.
Compared to conventional goniometer technology, the system 100 may allow to determine the characteristic of the object 101’s surface with higher speed due to the high speed of the event-based vision sensor 120. Accordingly, the system 100 may allow (much) shorter processing times than conventional goniometer-based systems. Due to the asynchronous nature of the event-based vision sensor 120, the event-based vision sensor 120 may provide a higher angular resolution than conventional frame-based sensors. Further, the wide dynamic range of the event-based vision sensor 120 may allow to avoid saturation for strong specular reflections from the object 101’s surface and operation outside a controlled environment (such as a black box used in conventional goniometer technology). For example, the system 100 may be used in an open environment where the lighting conditions do not matter as the event-based vision sensor 120 is only sensitive to temporal changes in brightness. Accordingly, the requirements for the background of the object 101 may be eased (e.g. no perfectly black background may be needed). Further, the system 100 may allow to process substantially the whole scene at once. That is, unlike in conventional goniometer technology, it is not necessary to use a point light source for illuminating individual points of the object 101’s surface. Rather, the whole surface of the object 101 may be illuminated. In other words, the object 101’s surface may be analyzed in parallel (at once) to a larger extent than with conventional goniometer technology. The system 100 may be understood as an event-based gonioreflectometer.
The characteristic of the object 101’s surface may, e.g., be a reflectance of the surface. The reflectance of the object 101’s surface denotes the surface’s effectiveness in reflecting radiant energy such as the light 111. In other words, the reflectance of the object 101’s surface denotes the fraction of incident electromagnetic power (e.g. the light 111) that is reflected at the boundary of the object 101. Accordingly, the events, i.e., the changes in brightness encoded in the output data 121 of the event-based vision sensor 120 relate (correspond) to the changes
in reflectance of the object 101’s surface. In case the characteristic of the object 101’s surface is the reflectance of the surface, the other characteristic of the object 101’s surface indicated by the data 102 may be an orientation of (e.g. one or more point or one or more area on) the surface in the three-dimensional space. The orientation of the surface denotes the direction in which the surface is pointed (or the direction in which a point on the surface is pointed). More specifically, orientation refers to the imaginary rotation that is needed to move the object 101’s surface from a reference placement to its current placement. The orientation of the object 101’s surface together with the location of the object 101’s surface fully describe how the object 101 is placed in space. For example, the orientation of the object 101’s surface may be given by the surface normal, i.e. a vector that is perpendicular to the tangent plane of the surface at a certain point P on the object 101’s surface.
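For reference, the reflectance and the surface normal may be written as follows, where Φ_r and Φ_i denote the reflected and incident radiant flux and t_u, t_v denote two non-parallel tangent vectors of the surface at the point P; this notation is introduced here for readability and is not part of the original disclosure:

```latex
\rho = \frac{\Phi_r}{\Phi_i}, \quad 0 \le \rho \le 1,
\qquad
\mathbf{n}(P) = \frac{\mathbf{t}_u \times \mathbf{t}_v}{\lVert \mathbf{t}_u \times \mathbf{t}_v \rVert}
```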
The orientation of the object 101’s surface may, e.g., be determined by the processing circuitry 140 based on a digital model of the object 101 and optionally further data indicating a location and/or orientation of the object 101 in space. For example, the processing circuitry 140 may receive data indicating the digital model of the object 101. The digital model may, e.g., be a Computer-Aided Design (CAD) model of the object 101 such as a CAD surface model. Together with the information about the location and/or orientation of the object 101 in space, the digital model of the object 101 allows to determine the orientation of each surface of the object 101 in space. Data indicating (i.e. information about) the location and/or orientation of the object 101 in space may, e.g., be (determined and) provided by the positioning system 130. For determining the reflectance of the object 101’s surface, the processing circuitry 140 may further use data indicating the shape of the surface. For example, the shape of the surface may be inferred (taken) from the digital model of the object 101.
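A minimal sketch of how a per-face surface normal could be computed from a tessellated digital model, assuming the model has already been transformed into the measurement coordinate system; the triangle data and the use of NumPy are illustrative assumptions:

```python
import numpy as np

def triangle_normal(p0, p1, p2):
    """Unit normal of one triangle of the tessellated surface model (vertices as 3-vectors)."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

# Hypothetical triangle of the digital model in the measurement coordinate system.
p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])
print(triangle_normal(p0, p1, p2))  # -> [0. 0. 1.]
```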
In other examples, the orientation of (e.g. a point or an area on) the object 101’s surface may be determined by the processing circuitry instead of the reflectance of the object 101’s surface. In other words, the characteristic of the object 101’s surface may be the orientation of (e.g. a point or an area on) the surface, and the other characteristic of the object 101’s surface indicated by the data 102 may be the reflectance of the surface.
The processing circuitry 140 may determine the characteristic of the object 101’s surface by mapping (linking) the number of events in the output data 121 of the event-based vision sensor 120, which are caused by the relative rotation of at least two of the surface of the object 101,
the event-based vision sensor 120 and the light source 110 with respect to each other, to a certain characteristic such as a certain reflectance or a certain orientation of the surface.
For determining the characteristic of the surface, the processing circuitry 140 may be configured to determine a number of changes in brightness that satisfy a quality criterion from the event stream for the measurement. In other words, the processing circuitry 140 may determine the number of events in the event stream for the measurement that satisfy the quality criterion. The quality criterion is a standard on which it is judged whether an event in the event stream is taken into account for the characterization of the object 101’s surface. For example, the quality criterion may be a threshold or a combination of thresholds. For example, the number of events indicating that the brightness changed by more than a threshold value may be determined. Similarly, the number of events indicating that the brightness changed by less than a threshold value may be determined. In other examples, the number of events indicating that the brightness changed by more than a first threshold value and less than a second threshold value may be determined.
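A sketch of such event counting is given below. It assumes that each event carries a signed brightness-change magnitude; many event-based vision sensors only report a polarity per event, in which case the criterion would instead correspond to the contrast threshold configured in the sensor. All names and values are illustrative:

```python
def count_qualifying_events(magnitudes, lower=None, upper=None):
    """Count events whose absolute brightness change satisfies the quality criterion."""
    count = 0
    for m in magnitudes:                       # one (assumed) magnitude per event
        if lower is not None and abs(m) <= lower:
            continue                           # changed by less than the first threshold
        if upper is not None and abs(m) >= upper:
            continue                           # changed by more than the second threshold
        count += 1
    return count

# e.g. events whose brightness changed by more than 0.2 but less than 1.0 (arbitrary units)
print(count_qualifying_events([0.1, 0.3, -0.5, 1.4], lower=0.2, upper=1.0))  # -> 2
```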
Further, for determining the characteristic of the surface, the processing circuitry 140 may be configured to select a set of possible values for the characteristic of the surface based on the other characteristic of the surface, which is indicated by the data 102. Each value of the selected set of possible values for the characteristic of the surface is associated (linked) to a certain (possible) number of detected changes in brightness.
For example, if the characteristic of the surface is the orientation of the object 101’s surface and the other characteristic of the surface is the reflectance of the object 101’s surface, different sets of possible orientations of the object 101’s surface may be provided for different reflectance values. Accordingly, the one set of possible values that corresponds (is associated) to the reflectance indicated by the data 102 is selected from the different sets of possible orientations of the object 101’s surface. The selected set of possible orientations of the object 101’s surface comprises different possible orientations (e.g. different possible surface normals) of (e.g. a point or an area on) the object 101’s surface each associated to a certain (possible) number of detected changes in brightness.
Similarly, if the characteristic of the surface is the reflectance of the object 101’s surface and the other characteristic of the surface is the orientation of (e.g. one or more point or one or
more area on) the object 101’s surface, different sets of possible values for the reflectance of the object 101’s surface may be provided for different orientations (e.g. different surface normals). Accordingly, the one set of possible values that corresponds (is associated) to the orientation indicated by the data 102 is selected from the different sets of possible values for the reflectance of the object 101’s surface. The selected set of possible values for the reflectance of the object 101’s surface comprises different possible values for the reflectance of the object 101’s surface each associated (linked) to a certain (possible) number of detected changes in brightness.
Further, for determining the characteristic of the object 101’s surface, the processing circuitry 140 may be configured to select, based on the determined number of changes in brightness (i.e. the number of detected events that satisfy the quality criterion), one possible value from the selected set of possible values as the characteristic of the surface. As stated above, each value of the selected set of possible values for the characteristic of the surface is associated (linked) to a certain number of detected changes in brightness. Accordingly, the one possible value for the characteristic of the surface that is associated (linked) to the determined number of changes in brightness (events) is selected as the characteristic of the surface from the selected set of possible values.
For example, if the characteristic of the surface is the orientation of the object 101’s surface and the other characteristic of the surface is the reflectance of the object 101’s surface, the one orientation that is associated to the determined number of changes in brightness (events) is selected from the selected set of possible orientations of the object 101’s surface. Similarly, if the characteristic of the surface is the reflectance of the object 101’s surface and the other characteristic of the surface is the orientation of the object 101’s surface, the one possible value for the reflectance of the object 101’s surface that is associated to the determined number of changes in brightness (events) is selected from the selected set of possible values for the reflectance of the object 101’s surface.
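The two selection steps may be illustrated by the following sketch for the case where the orientation is known and the reflectance is sought; the calibration table, its keys and the nearest-count matching rule are assumptions made purely for illustration:

```python
# Hypothetical calibration: for each known orientation (keyed by a coarse label),
# a set of possible reflectance values, each associated with the number of
# qualifying events expected for the predetermined rotation.
possible_values = {
    "normal_up":    {120: 0.2, 340: 0.5, 610: 0.9},
    "normal_45deg": {90: 0.2, 260: 0.5, 480: 0.9},
}

def select_characteristic(known_orientation, measured_event_count):
    candidates = possible_values[known_orientation]   # select the set via the other characteristic
    # pick the value whose associated event count is closest to the measured count
    best_count = min(candidates, key=lambda n: abs(n - measured_event_count))
    return candidates[best_count]

print(select_characteristic("normal_up", 355))  # -> 0.5
```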
The above described determination of the characteristic of the object 101’s surface will become more evident from the following examples.
In the first example, the characteristic of the surface is the reflectance of the object 101’s surface and the other characteristic of the surface is the orientation of the object 101’s surface.
In other words, the orientation of the object 101’s surface (e.g. its surface normal) is known and the reflectance of the object 101’s surface is to be determined.
As the initial orientation of the object 101’s surface is known, a set of possible values for the reflectance of the object 101’s surface is selected that is associated to the specific orientation of the object 101’s surface. During the measurement, the object 101 is, e.g., rotated by 30 degrees by the positioning system 130 to cause brightness changes that can be measured by the event-based vision sensor 120. The number of events indicating that the brightness changed by more than a threshold value A is determined from the output data 121 of the event-based vision sensor 120 for the measurement. In case the number of events indicating that the brightness changed by more than the threshold value A is a first value B, a first possible value C is selected from the selected set of possible values for the reflectance of the object 101’s surface. The possible value C is associated to the number B of detected events that satisfy the quality criterion. In case the number of events indicating that the brightness changed by more than the threshold value A is a second value D (different from the first value B), a different second possible value E is selected from the selected set of possible values for the reflectance of the object 101’s surface. The possible value E is associated to the number D of detected events that satisfy the quality criterion.
In the second example, the characteristic of the surface is the orientation of the object 101’s surface (e.g. its surface normal) and the other characteristic of the surface is the reflectance of the object 101’s surface. In other words, the reflectance of the object 101’s surface is known and the orientation of the object 101’s surface is to be determined.
As the reflectance of the object 101’s surface is known, a set of possible orientations of the object 101’s surface is selected that is associated to the specific reflectance of the object 101’s surface. During the measurement, the object 101 is rotated by, e.g., 30 degrees by the positioning system 130 to cause brightness changes that can be measured by the event-based vision sensor 120. The number of events indicating that the brightness changed by more than a threshold value K is determined from the output data 121 of the event-based vision sensor 120 for the measurement. In case the number of events indicating that the brightness changed by more than the threshold value K is a first value L, a first possible orientation M is selected from the selected set of possible orientations of the object 101’s surface. The possible orientation M is associated to the number L of detected events that satisfy the quality criterion. In case the
number of events indicating that the brightness changed by more than the threshold value K is a second value N (different from the first value L), a different second possible orientation O is selected from the selected set of possible orientations of the object 101’s surface. The possible orientation O is associated to the number N of detected events that satisfy the quality criterion.
The processing circuitry 140 may, e.g., execute an algorithm for determining the characteristic of the object 101’s surface. The measured reflectance changes depend on the angle of reflection (which is determined by the surface normal at the illuminated point of the object 101’s surface) and material reflection properties (i.e. reflectance). Similar to what is described above, the algorithm may allow to infer the object reflectance or the shape of the object (which is defined by the surface normal at each point of the object’s surface) if either of the complementary information is known (i.e. reflectance from shape/angle, or shape/angle from reflectance). The system 100 may allow to extract, e.g., the surface normal or material properties such as the reflectance knowing one or the other at higher speed compared to conventional goniometer techniques.
The processing circuitry 140 may, e.g., be configured to determine the characteristic of the surface using a static rule-based model. The static rule-based model is based on a fixed (i.e. static) set of rules that specifies the mathematical model for mapping the detected number of events to the characteristic of the object 101’s surface. The set of rules is coded by one or more human beings.
In alternative examples, the processing circuitry 140 may be configured to determine the characteristic of the object 101’s surface using a trained machine-learning model.
The machine-learning model is a data structure and/or set of rules representing a statistical model that the processing circuitry 140 uses to perform the above tasks without using explicit instructions, instead relying on models and inference. The data structure and/or set of rules represents learned knowledge (e.g. based on training performed by a machine-learning algorithm). For example, in machine-learning, instead of a rule-based transformation of data, a transformation of data may be used, that is inferred from an analysis of historical and/or training data. In the proposed technique, the event stream output by the event-based vision sensor
120 is analyzed using the machine-learning model (i.e. a data structure and/or set of rules representing the model).
The machine-learning model is trained by a machine-learning algorithm. The term "machine-learning algorithm" denotes a set of instructions that are used to create, train or use a machine-learning model. For the machine-learning model to analyze the content of the event stream output by the event-based vision sensor 120, the machine-learning model may be trained using training and/or historical event stream data as input and training content information (e.g. labels indicating the characteristic of the object 101’s surface) as output. By training the machine-learning model with a large set of training data and associated training content information (e.g. labels or annotations), the machine-learning model "learns" to recognize the content of the event stream, so that the content of the event stream that is not included in the training data can be recognized using the machine-learning model. By training the machine-learning model using training data (e.g. an event stream or a given number of changes in brightness for a specific rotation) and a desired output, the machine-learning model "learns" a transformation between the training data and the output, which can be used to provide an output based on non-training data provided to the machine-learning model.
The machine-learning model may be trained using training input data (e.g. an event stream or a given number of changes in brightness for a specific rotation). For example, the machine-learning model may be trained using a training method called "supervised learning". In supervised learning, the machine-learning model is trained using a plurality of training samples, wherein each sample may comprise a plurality of input data values, and a plurality of desired output values, i.e., each training sample is associated with a desired output value. By specifying both training samples and desired output values, the machine-learning model "learns" which output value to provide based on an input sample that is similar to the samples provided during the training. For example, a training sample may comprise a predetermined event stream or a given number of changes in brightness for a specific rotation as input data and one or more labels as desired output data. The labels indicate the characteristic of the object 101’s surface (e.g. a certain reflectance or a certain orientation).
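Purely as an illustration of how such labelled training samples could be structured (the field names and values below are made up and not part of the disclosure):

```python
# Each sample pairs the input measured (or simulated) for a known rotation with the
# desired output value used as the label during supervised learning.
training_samples = [
    {   # labelled with a known reflectance, e.g. for training a reflectance model
        "rotation_deg": 30,
        "event_counts_per_pixel": [3, 5, 4, 6],
        "label": {"reflectance": 0.25},
    },
    {   # labelled with a known surface normal, e.g. for training an orientation model
        "rotation_deg": 30,
        "event_counts_per_pixel": [9, 12, 11, 10],
        "label": {"surface_normal": (0.0, 0.0, 1.0)},
    },
]
```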
Apart from supervised learning, semi-supervised learning may be used. In semi-supervised learning, some of the training samples lack a corresponding desired output value. Supervised learning may be based on a supervised learning algorithm (e.g. a classification algorithm or a
similarity learning algorithm). Classification algorithms may be used as the desired outputs of the trained machine-learning model are restricted to a limited set of values (categorical variables), i.e., the input is classified to one of the limited set of values (reflectance of the surface, orientation of the surface). Similarity learning algorithms are similar to classification algorithms but are based on learning from examples using a similarity function that measures how similar or related two objects are.
Apart from supervised or semi-supervised learning, unsupervised learning may be used to train the machine-learning model. In unsupervised learning, (only) input data are supplied and an unsupervised learning algorithm is used to find structure in the input data such as training and/or historical data encoding a known event stream (e.g. by grouping or clustering the input data, finding commonalities in the data). Clustering is the assignment of input data comprising a plurality of input values into subsets (clusters) so that input values within the same cluster are similar according to one or more (pre-defined) similarity criteria, while being dissimilar to input values that are included in other clusters. For example, unsupervised learning may be used to train the machine-learning model to detect the reflectance or the orientation of the object 101’s surface. The input data for the unsupervised learning may be training or historical data (e.g. a previously measured event stream for a specific rotation or a given number of changes in brightness for a specific rotation).
Reinforcement learning is a third group of machine-learning algorithms. In other words, reinforcement learning may be used to train the machine-learning model. In reinforcement learning, one or more software actors (called "software agents") are trained to take actions in an environment. Based on the taken actions, a reward is calculated. Reinforcement learning is based on training the one or more software agents to choose the actions such that the cumulative reward is increased, leading to software agents that become better at the task they are given (as evidenced by increasing rewards). For example, reinforcement learning may be used to train the machine-learning model to determine the reflectance or the orientation of the object 101’s surface.
Furthermore, additional techniques may be applied to some of the machine-learning algorithms. For example, feature learning may be used. In other words, the machine-learning model may at least partially be trained using feature learning, and/or the machine-learning algorithm may comprise a feature learning component. Feature learning algorithms, which
may be called representation learning algorithms, may preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. Feature learning may be based on principal components analysis or cluster analysis, for example.
In some examples, the machine-learning algorithm may use a decision tree as a predictive model. In other words, the machine-learning model may be based on a decision tree. In a decision tree, observations about an item (e.g. an event in the event stream) may be represented by the branches of the decision tree, and an output value corresponding to the item may be represented by the leaves of the decision tree. Decision trees support discrete values and continuous values as output values. If discrete values are used, the decision tree may be denoted a classification tree, if continuous values are used, the decision tree may be denoted a regression tree. For example, the machine-learning algorithm may use a decision tree for determining the reflectance or the orientation of the object 101’s surface.
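A minimal sketch of such a regression tree using scikit-learn; the event counts and reflectance values are invented for illustration and do not stem from a real measurement:

```python
from sklearn.tree import DecisionTreeRegressor

# Hypothetical calibration data: number of qualifying events for a fixed rotation
# (input) and the known reflectance of the reference surface (desired output).
event_counts = [[120], [260], [340], [480], [610]]
reflectances = [0.2, 0.35, 0.5, 0.75, 0.9]

tree = DecisionTreeRegressor(max_depth=3).fit(event_counts, reflectances)
print(tree.predict([[350]]))  # -> a value close to 0.5
```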
Association rules are a further technique that may be used in machine-learning algorithms. In other words, the machine-learning model may be based on one or more association rules. Association rules are created by identifying relationships between variables in large amounts of data. The machine-learning algorithm may identify and/or utilize one or more relational rules that represent the knowledge that is derived from the data. The rules may, e.g., be used to store, manipulate or apply the knowledge.
For example, the machine-learning model may be an Artificial Neural Network (ANN). ANNs are systems that are inspired by biological neural networks, such as can be found in a retina or a brain. ANNs comprise a plurality of interconnected nodes and a plurality of connections, so-called edges, between the nodes. There are usually three types of nodes, input nodes that receive input values (e.g. an event stream, an event or a number of changes in brightness), hidden nodes that are (only) connected to other nodes, and output nodes that provide output values (e.g. the characteristic of the object 101’s surface). Each node may represent an artificial neuron. Each edge may transmit information from one node to another. The output of a node may be defined as a (non-linear) function of its inputs (e.g. of the sum of its inputs). The inputs of a node may be used in the function based on a "weight" of the edge or of the node that provides the input. The weight of nodes and/or of edges may be adjusted in the
learning process. In other words, the training of an ANN may comprise adjusting the weights of the nodes and/or edges of the ANN, i.e., to achieve a desired output for a given input.
Alternatively, the machine-learning model may be a support vector machine, a random forest model or a gradient boosting model. Support vector machines (i.e. support vector networks) are supervised learning models with associated learning algorithms that may be used to analyze data (e.g. in classification or regression analysis). Support vector machines may be trained by providing an input with a plurality of training input values (e.g. events of different event stream or different numbers of changes in brightness for a specific rotation) that belong to one of two categories (e.g. two different reflectance values or two different orientations of the object 101 ’s surface). The support vector machine may be trained to assign a new input value to one of the two categories. Alternatively, the machine-learning model may be a Bayesian network, which is a probabilistic directed acyclic graphical model. A Bayesian network may represent a set of random variables and their conditional dependencies using a directed acyclic graph. Alternatively, the machine-learning model may be based on a genetic algorithm, which is a search algorithm and heuristic technique that mimics the process of natural selection. In some examples, the machine-learning model may be a combination of the above examples.
Two more detailed examples of how to train a machine-learning model such that it is able to determine a characteristic of the object 101's surface will be described later with respect to Figs. 4 to 7.
In case the characteristic of the object 101's surface is the orientation of the surface and the other characteristic of the object 101's surface is the reflectance of the surface, a frame stream indicating (image) frames captured during the measurement may additionally be used. The frame stream may, e.g., allow to improve (fine-tune) the inference of the object 101's shape from the output data 121 of the event-based vision sensor 120. For example, techniques such as “Structure from Motion” may be used to improve (fine-tune) the inference of the object 101's shape from the event stream provided by the event-based vision sensor 120 using the frame stream. However, it is to be noted that other known techniques for combining the information of the event-based stream and the information of the frame-based stream may be used instead of the “Structure from Motion” technique.
In some examples, the event-based vision sensor 120 may be a hybrid sensor and additionally capture the object 101 in a frame-based manner such that the output data 121 of the event-based vision sensor further encode the frame stream indicating the frames captured by the event-based vision sensor 120 during the measurement. For example, the event-based vision sensor 120 may comprise a first plurality of pixels for capturing the changes in brightness and a second plurality of pixels for capturing the (image) frames of the object 101's surface. In other examples, the event-based vision sensor 120 may comprise pixels that are able to concurrently capture changes in brightness and capture the frames of the object 101's surface. For example, the event-based vision sensor 120 may be a Dynamic and Active Pixel Vision Sensor (DAVIS) that is able to concurrently capture events and frames.
In other examples, the system 100 may additionally comprise a separate frame-based vision sensor 150 that is coupled to the processing circuitry 140. As illustrated in Fig. 1, the frame-based vision sensor 150 is configured to capture the object 101's surface during the measurement. Accordingly, output data 151 of the frame-based vision sensor 150 encode a frame stream indicating the frames captured by the frame-based vision sensor 150 during the measurement.
Depending on the implementation of the system 100, the processing circuitry 140 receives the frame stream either from the event-based vision sensor 120 or the frame-based vision sensor 150 and processes it accordingly.
The above description was given for a single measurement. However, it is to be noted that multiple measurements may be performed. In particular, various measurements may be performed for different tilt angles of the object 101's surface, the event-based vision sensor 120 or the light source 110 with respect to the rotation axis z. The characteristic of the object 101's surface may be sensitive to the tilt angle such that performing measurements with different tilt angles may allow to more completely characterize the object 101's surface and, hence, the object 101.
For example, similarly to what is described above, the event-based vision sensor 120 may be configured to capture the object 101's surface during another (e.g. a second) measurement. Accordingly, the output data 121 of the event-based vision sensor 120 may encode another event stream indicating one or more change in brightness measured by the event-based vision sensor 120 during the other measurement. Further, similarly to what is described above, the light source 110 may be configured to illuminate the object 101's surface during the other measurement. The positioning system 130 may be configured to change a tilt angle of one of the object 101's surface (i.e. the object 101), the event-based vision sensor 120 and the light source 110 with respect to the rotation axis z prior to the other measurement such that different tilt angles are used for the measurement and the other measurement. For example, the rotatable plate used as the positioning system 130 in the example of Fig. 1 may be tilted prior to the other measurement such that different tilt angles of the object 101's surface (i.e. the object 101) are used for the measurement and the other measurement. Further, similarly to what is described above, the positioning system 130 may be configured to relatively rotate at least two of the object 101's surface, the event-based vision sensor 120 and the light source 110 with respect to each other about the rotation axis z during the other measurement. Based on the principles described above, the processing circuitry 140 may be further configured to determine the characteristic of the object 101's surface for the changed tilt angle (e.g. reflectance or orientation of the surface) based on the other event stream for the other measurement and based on the data 102 indicating the other characteristic of the surface (e.g. orientation or reflectance of the surface).
An exemplary application of the proposed surface characterization is illustrated in Fig. 2. Fig. 2 illustrates another system 200 for determining a characteristic of a surface of an object 201, which is a vehicle in the example of Fig. 2. The system 200 differs from the above described system 100 with respect to the implementation of the positioning system. Other than that, the systems are identical.
In the example of Fig. 2, the positioning system 230 comprises an arm 231 and a joint 232. The event-based vision sensor 120 and the light source 110 are both mounted (fixed) to the arm 231 of the positioning system 230. The arm 231 is held by the joint 232 such that the arm 231 and, hence, the event-based vision sensor 120 and the light source 110 can rotate (e.g. swing) relative to the vehicle 201. The rotation axis is not illustrated in Fig. 2 for reasons of simplicity. However, it will be understood by those skilled in the art that the rotation axis passes through the joint 232 in the example of Fig. 2. The positioning system 230 may, e.g., be implemented by means of a robotic arm. In other examples, the light source 110 may be mounted on an entity different from the arm 231 (e.g. movable or immovable with respect to the vehicle 201).
Accordingly, the positioning system 230 may move the event-based vision sensor 120 and the light source 110 around the vehicle 201 and allow to inspect the surface of the vehicle 201. The processing circuitry 140 is coupled to the event-based vision sensor 120 and determines a characteristic of the vehicle 201’s surface according to the above described principles. For example, a reflectance may be determined for the vehicle 201’s surface or orientations of the individual parts of the vehicle 201’s surface may be determined. By comparing the determined characteristics of the vehicle 201’s surface to reference characteristics, potential imperfections of the vehicle 201 may be determined.
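A minimal sketch of such a comparison is given below, assuming the determined characteristic is a per-pixel surface normal and the reference characteristic is derived from a CAD model; the helper function, the angular threshold and the array shapes are illustrative assumptions only.

```python
# Illustrative sketch (not from the disclosure): flagging surface regions whose
# determined orientation deviates too far from a reference normal map.
import numpy as np

def flag_imperfections(measured_normals, reference_normals, max_angle_deg=2.0):
    """Return a boolean mask of pixels whose unit normals deviate by more than max_angle_deg."""
    dot = np.clip(np.sum(measured_normals * reference_normals, axis=-1), -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(dot))
    return angle_deg > max_angle_deg

# measured/reference: H x W x 3 arrays of unit surface normals (toy 4 x 4 example)
measured = np.dstack([np.zeros((4, 4)), np.zeros((4, 4)), np.ones((4, 4))])
reference = measured.copy()
measured[2, 2] = (0.1, 0.0, np.sqrt(1 - 0.01))   # simulated dent at one pixel
print(flag_imperfections(measured, reference).sum(), "suspicious pixel(s)")
```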
Due to the high speed of the event-based vision sensor 120, the system 200 may allow to determine the characteristic of the vehicle 201’s surface with high speed such that the system 200 may allow material/surface inspection with high throughput. It is to be noted that the vehicle 201 is merely an example for an object that can be inspected. In general, the system 200 may be used for any mass produced object.
For further highlighting the surface characterization described above, Fig. 3 illustrates a flowchart of a method 300 for determining a characteristic of a surface of an object. The method 300 comprises illuminating 302 the surface by a light source during a measurement. Additionally, the method 300 comprises capturing 304 the surface by an event-based vision sensor during the measurement. Output data of the event-based vision sensor encode an event stream indicating one or more change in brightness measured by the event-based vision sensor during the measurement. The method 300 further comprises relatively rotating 306 at least two of the surface, the event-based vision sensor and the light source with respect to each other about a rotation axis during the measurement. In addition, the method 300 comprises determining 308 the characteristic of the surface based on the event stream for the measurement and based on data indicating another characteristic of the surface.
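A hypothetical orchestration of these four steps might look as follows; every hardware interface (light source, sensor, positioning system) and the estimator callback are placeholders invented for this sketch, not APIs defined by the disclosure.

```python
# Hypothetical driver for the four steps of method 300; all interfaces are placeholders.
def run_measurement(light, sensor, positioner, estimator, other_characteristic):
    light.on()                               # 302: illuminate the surface during the measurement
    sensor.start_event_capture()             # 304: capture the surface as an event stream
    positioner.rotate_about_z(degrees=360)   # 306: relative rotation about the rotation axis
    event_stream = sensor.stop_event_capture()
    light.off()
    # 308: determine the characteristic from the event stream and the other characteristic
    return estimator(event_stream, other_characteristic)
```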
The method 300 may allow to determine the characteristic of the object’s surface with higher speed due to the high speed of the event-based vision sensor. Accordingly, the method 300 may allow shorter processing times than in methods using conventional goniometer-based systems. Additionally, the requirements for the background of the object may be eased. Further advantages of the method 300 are described above in connection with the systems 100 and 200.
More details and aspects of the method 300 are explained in connection with the proposed technique or one or more examples described above (e.g. Fig. 1 and Fig. 2). The method 300 may comprise one or more additional optional features corresponding to one or more aspects of the proposed technique or one or more examples described above.
For example, as described above, different tilt angles of the surface, the event-based vision sensor or the light source with respect to the rotation axis may be used. Accordingly, the method 300 may optionally further comprise changing 310 a tilt angle of one of the surface, the event-based vision sensor and the light source with respect to the rotation axis prior to performing another measurement, in which the method steps 302 to 308 are repeated for the changed tilt angle to determine the characteristic of the surface for the changed tilt angle.
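Building on the placeholder interfaces sketched above, the optional tilt-angle sweep 310 could be illustrated as follows; the set of tilt angles and the positioner method are arbitrary example values.

```python
# Sketch of repeating the measurement for several tilt angles (step 310);
# run_measurement and the positioner interface are the placeholders from the earlier sketch.
def characterize_over_tilts(light, sensor, positioner, estimator,
                            other_characteristic, tilt_angles_deg=(0, 10, 20)):
    results = {}
    for tilt in tilt_angles_deg:
        positioner.set_tilt(degrees=tilt)    # change the tilt angle prior to the measurement
        results[tilt] = run_measurement(light, sensor, positioner,
                                        estimator, other_characteristic)
    return results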
In the following, two methods for training a machine-learning model will be described in detail with respect to Figs. 4 to 7. Both methods are based on backpropagation and may be used for training a machine-learning model such that the respective model is able to determine a characteristic of a surface according to the above described principles.
Fig. 4 illustrates a flowchart of a method 400 for training a machine-learning model. The machine-learning model is for determining an orientation of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement. The method 400 will be described in the following further with reference to Fig. 5 which illustrates the data flow 500 in the method 400.
The method 400 comprises inputting 402 data indicating a reflectance of the surface to the machine-learning model. As illustrated in Fig. 5, the machine-learning model 510 (e.g. a neural network) receives data 501 indicating a known or predetermined reflectance of the surface as input.
Further, the method 400 comprises inputting 404 training data to the machine-learning model. The training data indicate a predetermined number of changes in brightness for a predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis. The light source is used for illuminating the surface during the measurement. The training data may be obtained in a reference measurement (e.g. using the system 100) or be obtained by simulation. For example, the number of changes in brightness indicated by the training data may be the number of events in the event stream output by the event-based vision sensor for the reference measurement. In other examples, the number of changes in brightness may be the number of events in the event stream output by the event-based vision sensor for the reference measurement which indicate that the brightness changed by more than a threshold value. In still other examples, a high-fidelity simulator may be used for obtaining the data by way of simulation. As illustrated in Fig. 5, the machine-learning model 510 receives data 502 indicating the predetermined number of changes in brightness for the predetermined relative rotation. The data 502 may, e.g., be given on a per-pixel basis, similar to the output of the event-based vision sensor during the measurement, which indicates the changes in brightness measured by its individual pixels.
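One way such per-pixel training data could be assembled from an event stream is sketched below; the event tuple layout and the optional threshold are assumptions made for illustration, not a format specified by the disclosure.

```python
# Illustrative preprocessing (assumed, not specified in the disclosure): turning an
# event stream into the per-pixel number of changes in brightness encoded by data 502,
# optionally keeping only events whose brightness change exceeds a threshold.
import numpy as np

def per_pixel_event_counts(events, height, width, min_abs_change=None):
    """events: iterable of (x, y, timestamp, brightness_change) tuples."""
    counts = np.zeros((height, width), dtype=np.int32)
    for x, y, _, delta in events:
        if min_abs_change is None or abs(delta) > min_abs_change:
            counts[y, x] += 1
    return counts

events = [(3, 1, 0.001, +0.4), (3, 1, 0.002, -0.1), (5, 2, 0.004, +0.7)]
print(per_pixel_event_counts(events, height=4, width=8, min_abs_change=0.2))
```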
In addition, the method 400 comprises determining 406 a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a digital model (e.g. a CAD model) of the object. Further, the method 400 comprises updating 408 the weights of the machine-learning model based on the gradient of the loss function.
As illustrated in Fig. 5, an output 511 of the machine-learning model 510, i.e., the orientation of the object's surface determined by the machine-learning model 510, is input to a loss function and optimization network 520. For example, the determined (estimated) orientation of the object's surface may be given as a surface normal. Also the output 511 may be given on a per-pixel basis. Further, data 503 indicating the digital model of the object are received by the loss function and optimization network 520. The loss function and optimization network 520 determines the gradient of the loss function for the weights of the machine-learning model 510 and updates the weights of the machine-learning model 510 based on the gradient of the loss function. In general, the loss function (also known as cost function or error function) is a function that maps an event or values of one or more variables, such as the output 511 and the data 503, onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize the loss function. In the example of Fig. 5, the "cost" is the discrepancy between the orientation of the object's surface output by the machine-learning model 510 and a predetermined orientation of the object's surface given by the data 503. In other words, the method 400 tries to match the orientation of the object's surface output by the machine-learning model 510 to the predetermined orientation of the object's surface given by the data 503.
The updated weights 521 of the machine-learning model 510 are fed back to the machine-learning model 510 in order to update the machine-learning model 510. In other words, the weights of the machine-learning model 510 are adjusted by the loss function and optimization network 520.
As indicated in Fig. 4, the method steps 404 to 408 may be repeated one or more times in order to iteratively train the machine-learning model 510 and minimize the loss function.
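For illustration only, one iteration of steps 404 to 408 could be sketched in PyTorch as follows; the network architecture, the tensor shapes and the cosine-based loss are assumptions and not part of the disclosed method.

```python
# Hypothetical PyTorch sketch of one iteration of steps 404-408: the model maps per-pixel
# event counts plus the known reflectance to per-pixel surface normals, and the loss
# compares them with normals derived from the digital (e.g. CAD) model of the object.
import torch
import torch.nn as nn

H, W = 32, 32
model = nn.Sequential(nn.Linear(H * W + 1, 256), nn.ReLU(), nn.Linear(256, H * W * 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

event_counts = torch.rand(1, H * W)          # data 502 (per-pixel changes in brightness)
reflectance = torch.tensor([[0.35]])         # data 501 (known reflectance of the surface)
cad_normals = torch.randn(1, H * W, 3)       # data 503 (normals from the digital model)
cad_normals = nn.functional.normalize(cad_normals, dim=-1)

pred = model(torch.cat([event_counts, reflectance], dim=1)).view(1, H * W, 3)
pred = nn.functional.normalize(pred, dim=-1)
loss = 1.0 - nn.functional.cosine_similarity(pred, cad_normals, dim=-1).mean()

optimizer.zero_grad()
loss.backward()        # 406: gradient of the loss w.r.t. the model weights
optimizer.step()       # 408: weight update based on the gradient
```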
The method 400 may allow to obtain a trained machine-learning model for determining an orientation of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement. For example, the above described processing circuitry 140 may use a machine-learning model trained according to the method 400 for determining an orientation of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement.
Fig. 6 illustrates a flowchart of another method 600 for training a machine-learning model. The machine-learning model is for determining a reflectance of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement. The method 600 will be described in the following further with reference to Fig. 7 which illustrates the data flow 700 in the method 600.
The method 600 comprises inputting 602 data indicating a digital model of the object to the machine-learning model. As illustrated in Fig. 7, the machine-learning model 710 (e.g. a neural network) receives data 701 indicating a known or predetermined digital model of the object (e.g. a CAD surface model) as input. The data indicating the digital model of the object allow to infer an orientation of the object for the training of the machine-learning model - similar to what is described above.
Further, the method 600 comprises inputting 604 training data to the machine-learning model. The training data indicate a predetermined number of changes in brightness for a predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis. The light source is used for illuminating the surface during the measurement. The training data may be obtained in a reference measurement (e.g. using the system 100) or be obtained by simulation. For example, the number of changes in brightness indicated by the training data may be the number of events in the event stream output by the event-based vision sensor for the reference measurement. In other examples, the number of changes in brightness may be the number of events in the event stream output by the event-based vision sensor for the reference measurement which indicate that the brightness changed by more than a threshold value. In still other examples, a high-fidelity simulator may be used for obtaining the data by way of simulation. As illustrated in Fig. 7, the machine-learning model 710 receives data 702 indicating the predetermined number of changes in brightness for the predetermined relative rotation. The data 702 may, e.g., be given on a per-pixel basis, similar to the output of the event-based vision sensor during the measurement, which indicates the changes in brightness measured by its individual pixels.
In addition, the method 600 comprises determining 606 a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a target (desired, known) reflectance for the training data. Further, the method 600 comprises updating 608 the weights of the machine-learning model based on the gradient of the loss function.
As illustrated in Fig. 7, an output 711 of the machine-learning model 710, i.e., the reflectance value for the object's surface determined by the machine-learning model 710, is input to a loss function and optimization network 720. Also the output 711 may be given on a per-pixel basis. Further, data 703 indicating the target reflectance of the object's surface are received by the loss function and optimization network 720. The target reflectance is a predetermined (given) reflectance of the object for the training of the machine-learning model. The loss function and optimization network 720 determines the gradient of the loss function for the weights of the machine-learning model 710 and updates the weights of the machine-learning model 710 based on the gradient of the loss function. In the example of Fig. 7, the "cost" is the discrepancy between the reflectance of the object's surface output by the machine-learning model 710 and the target reflectance of the object's surface given by the data 703. In other words, the method 600 tries to match the reflectance of the object's surface output by the machine-learning model 710 to the target reflectance of the object's surface given by the data 703.
The updated weights 721 of the machine-learning model 710 are fed back to the machine-learning model 710 in order to update the machine-learning model 710. In other words, the weights of the machine-learning model 710 are adjusted by the loss function and optimization network 720.
As indicated in Fig. 6, the method steps 604 to 608 may be repeated one or more times in order to iteratively train the machine-learning model 710 and minimize the loss function. For example, the above described processing circuitry 140 may use a machine-learning model trained according to the method 600 for determining a reflectance of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement.
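A compact, equally hypothetical sketch of one iteration of steps 604 to 608 is given below; here the model output is a per-pixel reflectance and the loss is taken against the target reflectance 703 (architecture, tensor shapes and loss are again assumptions, not part of the disclosure).

```python
# Hypothetical PyTorch sketch of one iteration of steps 604-608 of method 600.
import torch
import torch.nn as nn

H, W = 32, 32
model = nn.Sequential(nn.Linear(H * W * 4, 256), nn.ReLU(), nn.Linear(256, H * W))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

event_counts = torch.rand(1, H * W)                 # data 702 (per-pixel changes in brightness)
cad_normals = torch.rand(1, H * W * 3)              # data 701 (digital model, flattened normals)
target_reflectance = torch.full((1, H * W), 0.35)   # data 703 (target reflectance per pixel)

pred = model(torch.cat([event_counts, cad_normals], dim=1))
loss = nn.functional.mse_loss(pred, target_reflectance)

optimizer.zero_grad()
loss.backward()    # 606: gradient of the loss w.r.t. the model weights
optimizer.step()   # 608: weight update based on the gradient
```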
Examples may further be or relate to a (computer) program including a program code to execute one or more of the above methods when the program is executed on a computer, processor or other programmable hardware component. Thus, steps, operations or processes of different ones of the methods described above may also be executed by programmed computers, processors or other programmable hardware components. Examples may also cover program storage devices, such as digital data storage media, which are machine-, processor- or computer-readable and encode and/or contain machine-executable, processor-executable or computer-executable programs and instructions. Program storage devices may include or be digital storage devices, magnetic storage media such as magnetic disks and magnetic tapes, hard disk drives, or optically readable digital data storage media, for example. Other examples may also include computers, processors, control units, (field) programmable logic arrays ((F)PLAs), FPGAs, graphics processor units (GPU), ASICs, integrated circuits (ICs) or system-on-a-chip (SoCs) systems programmed to execute the steps of the methods described above.
The following examples pertain to further embodiments:
(1) A system for determining a characteristic of a surface of an object, the system comprising: a light source configured to illuminate the surface during a measurement;
an event-based vision sensor configured to capture the surface during the measurement, wherein output data of the event-based vision sensor encode an event stream indicating one or more change in brightness measured by the event-based vision sensor during the measurement; a positioning system configured to relatively rotate at least two of the surface, the event-based vision sensor and the light source with respect to each other about a rotation axis during the measurement; and processing circuitry coupled to the event-based vision sensor and configured to determine the characteristic of the surface based on the event stream for the measurement and based on data indicating another characteristic of the surface.
(2) The system of (1), wherein the characteristic of the surface is a reflectance of the surface, and wherein the other characteristic of the surface is an orientation of the surface.
(3) The system of (2), wherein the processing circuitry is further configured to: receive data indicating a digital model of the object; and determine the orientation of the surface based on the digital model of the object.
(4) The system of (1), wherein the characteristic of the surface is an orientation of the surface, and wherein the other characteristic of the surface is a reflectance of the surface.
(5) The system of any one of (1) to (4), wherein the positioning system is configured to rotate the object about the rotation axis during the measurement.
(6) The system of any one of (1) to (5), wherein the positioning system is configured to rotate at least one of the event-based vision sensor and the light source about the rotation axis during the measurement.
(7) The system of any one of (1) to (6), wherein the processing circuitry is configured to determine the characteristic of the surface by: determining a number of changes in brightness that satisfy a quality criterion from the event stream for the measurement; selecting a set of possible values for the characteristic of the surface based on the other characteristic of the surface; and
selecting, based on the determined number of changes in brightness, one possible value from the selected set of possible values as the characteristic of the surface.
(8) The system of any one of (1) to (7), wherein the processing circuitry is configured to determine the characteristic of the surface using a static rule-based model.
(9) The system of any one of (1) to (7), wherein the processing circuitry is configured to determine the characteristic of the surface using a trained machine-learning model.
(10) The system of any one of (1) to (9), wherein: the event-based vision sensor is configured to capture the surface during another measurement, wherein the output data of the event-based vision sensor encode another event stream indicating one or more change in brightness measured by the event-based vision sensor during the other measurement; the light source is configured to illuminate the surface during the other measurement; the positioning system is configured to: change a tilt angle of one of the surface, the event-based vision sensor and the light source with respect to the rotation axis prior to the other measurement; and relatively rotate at least two of the surface, the event-based vision sensor and the light source with respect to each other about the rotation axis during the other measurement; and the processing circuitry is configured to determine the characteristic of the surface for the changed tilt angle based on the other event stream for the other measurement and based on the data indicating the other characteristic of the surface.
(11) The system of any one of (1) to (10), wherein the output data of the event-based vision sensor further encode a frame stream indicating frames captured by the event-based vision sensor during the measurement.
(12) The system of any one of (1) to (10), further comprising a frame-based vision sensor configured to capture the surface during the measurement, wherein output data of the frame - based vision sensor encode a frame stream indicating frames captured by the frame-based vision sensor during the measurement.
(13) The system of (11) or (12), wherein the characteristic of the surface is an orientation of the surface, wherein the other characteristic of the surface is a reflectance of the surface, and wherein the processing circuitry is configured to determine the orientation of the surface further based on the frame stream for the measurement.
(14) The system of any one of (1) to (13), wherein the light source is configured to illuminate the surface during the measurement with at least one of ultraviolet light, visible light and infrared light.
(15) The system of any one of (1) to (14), wherein the event-based vision sensor is sensitive for at least one of ultraviolet light, visible light and infrared light.
(16) A method for determining a characteristic of a surface of an object, the method comprising: illuminating the surface by a light source during a measurement; capturing the surface by an event-based vision sensor during the measurement, wherein output data of the event-based vision sensor encode an event stream indicating one or more change in brightness measured by the event-based vision sensor during the measurement; relatively rotating at least two of the surface, the event-based vision sensor and the light source with respect to each other about a rotation axis during the measurement; and determining the characteristic of the surface based on the event stream for the measurement and based on data indicating another characteristic of the surface.
(17) A method for training a machine-learning model, wherein the machine-learning model is for determining a reflectance of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement, the method comprising: inputting, to the machine-learning model, data indicating a digital model of the object; inputting, to the machine-learning model, training data indicating a predetermined number of changes in brightness for a predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis, the light source being used for illuminating the surface during the measurement;
determining a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a target reflectance for the training data; and updating the weights of the machine-learning model based on the gradient of the loss function.
(18) A method for training a machine-learning model, wherein the machine-learning model is for determining an orientation of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement, the method comprising: inputting, to the machine-learning model, data indicating a reflectance of the surface; inputting, to the machine-learning model, training data indicating a predetermined number of changes in brightness for a predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis, the light source being used for illuminating the surface during the measurement; determining a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a digital model of the object; and updating the weights of the machine-learning model based on the gradient of the loss function.
(19) A programmable hardware comprising circuitry configured to perform the method according to (17) or (18).
(20) A non-transitory machine-readable medium having stored thereon a program having a program code for performing the method according to (17) or (18), when the program is executed on a processor or a programmable hardware.
(21) A program having a program code for performing the method according to (17) or (18), when the program is executed on a processor or a programmable hardware.
The aspects and features described in relation to a particular one of the previous examples may also be combined with one or more of the further examples to replace an identical or similar feature of that further example or to additionally introduce the features into the further example.
It is further understood that the disclosure of several steps, processes, operations or functions disclosed in the description or claims shall not be construed to imply that these operations are necessarily dependent on the order described, unless explicitly stated in the individual case or necessary for technical reasons. Therefore, the previous description does not limit the execution of several steps or functions to a certain order. Furthermore, in further examples, a single step, function, process or operation may include and/or be broken up into several sub-steps, -functions, -processes or -operations.
If some aspects have been described in relation to a device or system, these aspects should also be understood as a description of the corresponding method. For example, a block, device or functional aspect of the device or system may correspond to a feature, such as a method step, of the corresponding method. Accordingly, aspects described in relation to a method shall also be understood as a description of a corresponding block, a corresponding element, a property or a functional feature of a corresponding device or a corresponding system.
The following claims are hereby incorporated in the detailed description, wherein each claim may stand on its own as a separate example. It should also be noted that although in the claims a dependent claim refers to a particular combination with one or more other claims, other examples may also include a combination of the dependent claim with the subject matter of any other dependent or independent claim. Such combinations are hereby explicitly proposed, unless it is stated in the individual case that a particular combination is not intended. Furthermore, features of a claim should also be included for any other independent claim, even if that claim is not directly defined as dependent on that other independent claim.
Claims
1. A system for determining a characteristic of a surface of an object, the system comprising: a light source configured to illuminate the surface during a measurement; an event-based vision sensor configured to capture the surface during the measurement, wherein output data of the event-based vision sensor encode an event stream indicating one or more change in brightness measured by the event-based vision sensor during the measurement; a positioning system configured to relatively rotate at least two of the surface, the event-based vision sensor and the light source with respect to each other about a rotation axis during the measurement; and processing circuitry coupled to the event-based vision sensor and configured to determine the characteristic of the surface based on the event stream for the measurement and based on data indicating another characteristic of the surface.
2. The system of claim 1, wherein the characteristic of the surface is a reflectance of the surface, and wherein the other characteristic of the surface is an orientation of the surface.
3. The system of claim 2, wherein the processing circuitry is further configured to: receive data indicating a digital model of the object; and determine the orientation of the surface based on the digital model of the object.
4. The system of claim 1, wherein the characteristic of the surface is an orientation of the surface, and wherein the other characteristic of the surface is a reflectance of the surface.
5. The system of claim 1, wherein the positioning system is configured to rotate the object about the rotation axis during the measurement.
6. The system of claim 1, wherein the positioning system is configured to rotate at least one of the event-based vision sensor and the light source about the rotation axis during the measurement.
7. The system of claim 1, wherein the processing circuitry is configured to determine the characteristic of the surface by: determining a number of changes in brightness that satisfy a quality criterion from the event stream for the measurement; selecting a set of possible values for the characteristic of the surface based on the other characteristic of the surface; and selecting, based on the determined number of changes in brightness, one possible value from the selected set of possible values as the characteristic of the surface.
8. The system of claim 1, wherein the processing circuitry is configured to determine the characteristic of the surface using a static rule-based model.
9. The system of claim 1, wherein the processing circuitry is configured to determine the characteristic of the surface using a trained machine-learning model.
10. The system of claim 1, wherein: the event-based vision sensor is configured to capture the surface during another measurement, wherein the output data of the event-based vision sensor encode another event stream indicating one or more change in brightness measured by the event-based vision sensor during the other measurement; the light source is configured to illuminate the surface during the other measurement; the positioning system is configured to: change a tilt angle of one of the surface, the event-based vision sensor and the light source with respect to the rotation axis prior to the other measurement; and relatively rotate at least two of the surface, the event-based vision sensor and the light source with respect to each other about the rotation axis during the other measurement; and
the processing circuitry is configured to determine the characteristic of the surface for the changed tilt angle based on the other event stream for the other measurement and based on the data indicating the other characteristic of the surface.
11. The system of claim 1, wherein the output data of the event-based vision sensor further encode a frame stream indicating frames captured by the event-based vision sensor during the measurement.
12. The system of claim 1, further comprising a frame-based vision sensor configured to capture the surface during the measurement, wherein output data of the frame-based vision sensor encode a frame stream indicating frames captured by the frame-based vision sensor during the measurement.
13. The system of claim 11, wherein the characteristic of the surface is an orientation of the surface, wherein the other characteristic of the surface is a reflectance of the surface, and wherein the processing circuitry is configured to determine the orientation of the surface further based on the frame stream for the measurement.
14. The system of claim 1, wherein the light source is configured to illuminate the surface during the measurement with at least one of ultraviolet light, visible light and infrared light.
15. The system of claim 1, wherein the event-based vision sensor is sensitive for at least one of ultraviolet light, visible light and infrared light.
16. A method for determining a characteristic of a surface of an object, the method comprising: illuminating the surface by a light source during a measurement; capturing the surface by an event-based vision sensor during the measurement, wherein output data of the event-based vision sensor encode an event stream indicating one or more change in brightness measured by the event-based vision sensor during the measurement; relatively rotating at least two of the surface, the event-based vision sensor and the light source with respect to each other about a rotation axis during the measurement; and determining the characteristic of the surface based on the event stream for the measurement and based on data indicating another characteristic of the surface.
17. A method for training a machine-learning model, wherein the machine-learning model is for determining a reflectance of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement, the method comprising: inputting, to the machine-learning model, data indicating a digital model of the object; inputting, to the machine-learning model, training data indicating a predetermined number of changes in brightness for a predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis, the light source being used for illuminating the surface during the measurement; determining a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a target reflectance for the training data; and updating the weights of the machine-learning model based on the gradient of the loss function.
18. A method for training a machine-learning model, wherein the machine-learning model is for determining an orientation of a surface of an object based on an event stream indicating one or more change in brightness measured by an event-based vision sensor during a measurement, the method comprising: inputting, to the machine-learning model, data indicating a reflectance of the surface; inputting, to the machine-learning model, training data indicating a predetermined number of changes in brightness for a predetermined relative rotation of at least two of the surface, the event-based vision sensor and a light source with respect to each other about a rotation axis, the light source being used for illuminating the surface during the measurement; determining a gradient of a loss function for weights of the machine-learning model based on an output of the machine-learning model for the training data and based on data indicating a digital model of the object; and updating the weights of the machine-learning model based on the gradient of the loss function.
19. A programmable hardware comprising circuitry configured to perform the method according to claim 17.
20. A non-transitory machine-readable medium having stored thereon a program having a program code for performing the method according to claim 17, when the program is executed on a processor or a programmable hardware.