US20210056426A1 - Generation of kernels based on physical states - Google Patents
- Publication number: US20210056426A1
- Authority: United States (US)
- Prior art keywords: array, temperature values, kernel, kernels, engine
- Legal status: Pending (assumed; not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/545—Interprogram communication where tasks reside in different layers, e.g. user- and kernel-space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
Definitions
- a computer may store data representative of a physical phenomenon.
- a complete dataset may be copied to the computer, or the computer may receive the data from a sensor, for example, over time.
- the computer may perform various operations on the data to further represent additional aspects of the physical phenomenon. Accordingly, the computer may provide a deeper understanding of the physical phenomenon being modeled.
- FIG. 1 is a block diagram of an example system to generate a kernel based on a physical state.
- FIG. 2 is a block diagram of another example system to generate a kernel based on a physical state.
- FIG. 3 is a flow diagram of an example method to update models to generate a kernel.
- FIG. 4 is a flow diagram of an example method to compute an array of estimated temperature values.
- FIG. 5 is a flow diagram of another example method to update models to generate a kernel.
- FIG. 6 is a block diagram of an example computer-readable medium including instructions that cause a processor to generate a kernel based on a physical state.
- FIG. 7 is a block diagram of another example computer-readable medium including instructions that cause a processor to generate a kernel based on a physical state.
- physical phenomena may be complicated to model by a computer.
- some physical phenomena may be anisotropic, such as thermal diffusivity in heterogeneous or anisotropic materials, conductivity in particular materials (e.g., electrical conductivity in certain crystals), stiffness of particular materials, or the like.
- the physical phenomena may depend on several physical attributes.
- thermal diffusivity may depend on the temperature field and phase properties at a location and at neighboring locations.
- Aspects of the physical phenomena may be modeled at numerous locations, and numerous calculations may be performed for each location to account for the various interactions occurring at each location and among neighboring locations.
- such modeling may quickly lead to performing extraordinarily large numbers of calculations. Such large numbers of calculations may be time consuming to perform and may take too long to be usable in many situations.
- a printer such as a three-dimensional printer, may deliver thermal energy to a print target, such as a print bed where loose powder is fused or sintered layer-by-layer to build a printed part.
- the printer may also deliver chemical agents (e.g., property changing agents, such as fusing agents, detailing agents, etc.) selectively on the print target to eventually trigger phase changes and drive the powder at selected locations to be fused together.
- the volume elements (“voxels”) of the print target may include different materials relative to one another.
- the thermal diffusivity of each voxel may depend on the properties of that voxel as well as the properties of neighboring voxels.
- thermal diffusion in the printer may be anisotropic and may be complicated to model. Significant numbers of calculations may be involved in modeling thermal diffusion over even short periods of time. Modeling of complicated physical phenomena, such as anisotropic phenomena (e.g., thermal diffusion in three-dimensional printers), may be improved by providing models that permit reasonable numbers of calculations to be performed while modeling the physical phenomena.
- FIG. 1 is a block diagram of an example system 100 to generate a kernel based on a physical state.
- the system 100 may include a kernel generation engine 110 .
- the term “engine” refers to hardware (e.g., a processor, such as an integrated circuit or other circuitry) or a combination of software (e.g., programming such as machine- or processor-executable instructions, commands, or code such as firmware, a device driver, programming, object code, etc.) and hardware.
- Hardware includes a hardware element with no software elements such as an application specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA), etc.
- a combination of hardware and software includes software hosted at hardware (e.g., a software module that is stored at a processor-readable memory such as random access memory (RAM), a hard-disk or solid-state drive, resistive memory, or optical media such as a digital versatile disc (DVD), and/or executed or interpreted by a processor), or hardware and software hosted at hardware.
- the kernel generation engine 110 may generate a plurality of kernels based on a description of a physical state.
- the term “description of a physical state” refers to data indicative of characteristics of a physical system.
- the data may include raw or processed measurements of the physical system.
- the term “kernel” refers to an indication of a relationship between inputs to the kernel and outputs from the kernel.
- the kernel may indicate the relationship between a description of a physical state of a physical system at a first time and a description of a physical state of the physical system at a second time. Such relationship may reflect a physical law.
- the plurality of kernels are generated based on applying a neural network (e.g., a deep neural network) to the description of the physical state.
- the description of the physical state may be used as an input to the neural network, and the neural network may produce the plurality of kernels based on the input.
- the system 100 may include a calculation engine 120 to apply the plurality of kernels to the description of the physical state to produce a plurality of intermediate descriptions.
- the description of the physical state may be used as an input to each kernel.
- Each kernel may produce an intermediate description as an output. Accordingly, there may be a corresponding number of kernels and intermediate descriptions.
- the system 100 may include a weighting engine 130 to determine a plurality of weight maps based on the plurality of kernels. For example, the weighting engine 130 may determine how much weight should be accorded to various portions of each intermediate description based on the kernel that produced the intermediate description. Each weight map may indicate the weight to be applied to each portion of the corresponding intermediate description.
- the system 100 may include a compositing engine 140 to apply the plurality of weight maps to the plurality of intermediate descriptions to produce weighted intermediate descriptions. For example, each portion of each intermediate description may be weighted as indicated by the corresponding portion of the corresponding weight map.
- the compositing engine 140 may combine the weighted intermediate descriptions to produce an updated description of the physical state. In some examples, the compositing engine 140 may combine the weighted intermediate descriptions by computing a sum, an arithmetic or geometric mean, a median, a mode, a minimum, a maximum, or the like.
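The kernel-application, weighting, and compositing steps described above can be sketched in Python. This is a minimal NumPy illustration, not the claimed implementation: the zero-padded "same" convolution and summation as the combining operation are assumptions (the source also permits means, medians, and other combinations), and the function names are hypothetical.

```python
import numpy as np

def convolve_same(state, kernel):
    """Zero-padded 'same' 2-D convolution of a state array with a small kernel."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(state, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.empty_like(state, dtype=float)
    for i in range(state.shape[0]):
        for j in range(state.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

def updated_description(state, kernels, weight_maps):
    """Apply each kernel, weight each result element-wise, and sum the results."""
    intermediates = [convolve_same(state, k) for k in kernels]      # calculation engine
    weighted = [w * x for w, x in zip(weight_maps, intermediates)]  # weighting applied
    return np.sum(weighted, axis=0)                                 # compositing engine
```

With identity kernels (a single 1 at the center) and weight maps that sum to 1 at every location, the updated description reproduces the input state, which is a convenient sanity check.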
- FIG. 2 is a block diagram of another example system 200 to generate a kernel based on a physical state.
- the system 200 may include a thermal imaging device 202 .
- the thermal imaging device 202 may generate a thermal image by sensing infrared radiation at a plurality of picture elements (pixels) at a point in time.
- the term “point in time” refers to a time period that is short relative to the time spans over which measurable changes occur to a physical state being measured.
- the length of the point in time may correspond to the shutter speed of the thermal imaging device.
- Each pixel of the thermal image may correspond to the intensity of the infrared radiation at that pixel.
- the thermal imaging device 202 may capture a thermal image of a print target.
- the thermal imaging device 202 may capture an overhead view of the print target and thus depict an x-y plane.
- the thermal imaging device 202 may capture a first thermal image of a first layer of powder on the print target and a second thermal image of a second layer of powder on the print target.
- the thermal imaging device 202 may capture the first thermal image immediately prior to the second layer of powder being added on top of the first layer of powder.
- the term “immediately prior” refers to capturing the thermal image when little change will occur to the physical state being measured before the next layer is added.
- the thermal imaging device 202 may be associated with a frame rate, and the last frame captured before adding the second layer of powder may be captured immediately prior to the addition of the second layer of powder.
- the terms “first” and “second” are used to differentiate between different elements and may not indicate position.
- the first layer of powder may not be a bottom layer of powder.
- the system 200 may include a preprocessing engine 204 .
- the preprocessing engine 204 may correct the thermal image to compensate for distortion caused by the thermal imaging device 202 (e.g., distortion caused by a lens or camera angle of the thermal imaging device 202 ). For example, straight lines on the print target may appear as curved lines in the thermal image.
- the preprocessing engine 204 may apply an inversion of the distortion to the thermal image, for example, so that straight lines on the print target appear as straight lines in the corrected thermal image.
- the preprocessing engine 204 may convert the thermal image into an array of temperature values at the point in time.
- the term “array” refers to a group of data elements (e.g., data elements indicative of temperature values).
- the array may be a multidimensional array, such as a two-dimensional array with the dimensions corresponding to the dimensions of the corrected thermal image.
- the preprocessing engine 204 may compute the temperature values from the intensity values based on a predetermined or measured relationship between intensity and temperature.
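A minimal sketch of such a conversion, assuming a linear intensity-to-temperature calibration; the gain and offset values are illustrative assumptions, not from the source, and a real calibration could equally be a lookup table or a nonlinear fit:

```python
import numpy as np

def intensities_to_temperatures(intensities, gain=0.05, offset=20.0):
    """Map raw infrared intensity values to temperature values.

    Assumes a previously measured linear relationship T = gain * I + offset
    (hypothetical calibration constants for illustration only).
    """
    return gain * np.asarray(intensities, dtype=float) + offset
```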
- the system 200 may include a kernel generation engine 210 .
- the kernel generation engine 210 may generate a plurality of kernels based on the array of temperature values.
- the kernel generation engine 210 may generate the plurality of kernels based on a description of a physical state other than the array of temperature values (e.g., a thermal image, an array of electric potentials or currents, an array of stresses or strains, etc.).
- the kernel generation engine 210 may use a model to compute the plurality of kernels based on the array of temperature values.
- the model may include a neural network, such as a convolutional neural network.
- the array of temperature values may be input to the convolutional neural network.
- the array of temperature values may be a difference between two arrays of temperature values (e.g., the difference between a first array corresponding to a first thermal image of a first layer of powder and a second array corresponding to a second thermal image of a second layer of powder).
- the input to the convolutional neural network may include a two-dimensional array of temperature values.
- the layers in the convolutional neural network and the output of the convolutional neural network may include three-dimensional arrays of values.
- the size of the third dimension of the output may be equal to the number of kernels to be generated.
- the hidden layers of the convolutional neural network may include third dimensions with sizes equal to the number of kernels to be generated, integer multiples of that number, or the like. For example, there may be three kernels and a 100 × 100 array of temperature values.
- the input may be the 100 × 100 array of temperature values, a second layer may produce a 33 × 33 × 6 array of values, a third layer may produce an 11 × 11 × 6 array of values, and the output of the convolutional neural network may be a 5 × 5 × 3 array of values.
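The spatial trace in this example (100 → 33 → 11 → 5) is consistent with ordinary strided convolutions. The kernel sizes and strides below are assumptions chosen only to reproduce those shapes, since the source states the array sizes but not the layer geometry:

```python
def conv_out_size(n, kernel, stride, pad=0):
    """Spatial output size of a convolution layer with the given geometry."""
    return (n + 2 * pad - kernel) // stride + 1

# One plausible (assumed) layer schedule reproducing the example's sizes:
assert conv_out_size(100, kernel=4, stride=3) == 33  # input -> second layer
assert conv_out_size(33, kernel=3, stride=3) == 11   # second -> third layer
assert conv_out_size(11, kernel=3, stride=2) == 5    # third -> output kernels
```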
- an alternate description of the physical state may also be included in the input.
- the input may include an array of temperature values and an array of alternate values associated with the print target (e.g., the input may include a three dimensional array constructed from a plurality of two dimensional arrays variously describing a physical state of a physical system).
- the array of alternate values may include fusing or detailing agent distribution maps (e.g., maps that indicate the phase or material status of each voxel), distribution maps of other chemical agents, intensities from a visible light image of the print target, or the like.
- each kernel may include an array of values.
- each kernel may include a two dimensional array of values, and the output of the convolutional neural network may include a three dimensional array that includes the plurality of kernels.
- the kernel may not explicitly indicate the calculations relating inputs to the kernel to outputs from the kernel, but rather the kernel may include an array of values used in predetermined calculations.
- the system 200 may include a calculation engine 220 .
- the calculation engine 220 may apply the plurality of kernels to the description of the physical state to produce a plurality of intermediate descriptions.
- the calculation engine 220 may receive the array of temperature values as an input and calculate each of a plurality of intermediate arrays of values based on the array of temperature values and one of the plurality of kernels.
- the calculation engine 220 may compute each intermediate array of values by convolving each kernel with the array of temperature values. Accordingly, each intermediate array may correspond to one of the plurality of kernels.
- another function such as a cross-correlation, may be used rather than a convolution.
- the calculation engine 220 may apply the plurality of kernels to the difference between two arrays of temperature values.
- the kernel generation engine 210 may generate the plurality of kernels using one of the two arrays of temperature values as an input to the neural network, and the calculation engine 220 may apply the plurality of kernels to the difference.
- the result of the convolution operations may be a 100 × 100 × 3 array of intermediate values (e.g., three 100 × 100 arrays of values).
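The cross-correlation alternative mentioned above differs from convolution only by a flip of the kernel, which a small one-dimensional NumPy check makes concrete (the arrays here are toy values for illustration):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])   # toy "temperature" sequence
k = np.array([0.5, 1.0, -0.5])       # toy kernel

# Cross-correlating with k is equivalent to convolving with the flipped kernel,
# so either operation could be used as the predetermined calculation.
corr = np.correlate(a, k, mode="full")
conv = np.convolve(a, k[::-1], mode="full")
assert np.allclose(corr, conv)
```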
- the plurality of kernels may model thermal diffusion between an element of the array of temperature values and nearby elements of the array of temperature values. Such nearby elements may represent areas neighboring a particular location on the print target and that may act as heat sources or heat sinks affecting the thermal diffusion to or from that particular location.
- Various physical attributes may affect the thermal diffusion, so each kernel may model a different physical attribute related to the thermal diffusion.
- Each kernel may model the pattern of thermal diffusion that will occur from nearby elements when accounting for a particular physical attribute.
- the convolutional neural network may be trained to generate kernels that model the physical attributes rather than the convolutional neural network itself modeling the physical attributes.
- each kernel may be a 5 × 5 array of values so that the kernels account for thermal diffusion from two neighboring elements in each direction.
- the system 200 may include a weighting engine 230 .
- the weighting engine 230 may determine a plurality of weight maps based on the plurality of kernels. Different physical attributes may be dominant for different temperature values or under different conditions. Accordingly, the plurality of weight maps may indicate how applicable each kernel is for the various locations in the array of temperature values. There may be a corresponding number of kernels and weight maps.
- the weighting engine 230 may use a model to compute the plurality of weight maps based on the plurality of kernels.
- the model may include a neural network, such as a de-convolutional neural network, a super-resolution convolutional neural network, or the like.
- the weighting engine 230 may apply the neural network to the plurality of kernels to determine the plurality of weight maps. For example, there may be a 100 × 100 array of temperature values and three kernels that each include a 5 × 5 array of values.
- the input to the neural network may be the plurality of kernels represented by a 5 × 5 × 3 array of values.
- a second layer may produce an 11 × 11 × 3 array of values, a third layer may produce a 33 × 33 × 3 array of values, and the output of the neural network may be a 100 × 100 × 3 array of weight values.
- the plurality of weight maps may include an array of weight values that is the same size as the plurality of intermediate arrays of values.
- the system 200 may include a compositing engine 240 .
- the compositing engine 240 may apply the plurality of weight maps to the plurality of intermediate descriptions to produce weighted intermediate descriptions. For example, the compositing engine 240 may multiply each value in the array of weight values by a corresponding value in the plurality of intermediate arrays of values (e.g., an element-by-element multiplication) to produce a plurality of weighted intermediate arrays of values.
- the compositing engine 240 may combine the weighted intermediate descriptions to produce an updated description of the physical state.
- the compositing engine 240 may sum the weighted intermediate arrays of values to produce an array of updated temperature values (e.g., an element-by-element summation of the plurality of weighted intermediate arrays of values with each weighted intermediate array of values as a summand).
- the updated description of the physical state may include an estimate of the physical state at an earlier or later point in time (e.g., an array of estimated temperature values for an earlier or later point in time).
- summing the weighted intermediate arrays may produce an updated difference array reflecting the differences between the temperature values of the second layer of powder at the earlier or later point in time and the unmodified first array.
- the compositing engine 240 may add the updated difference array to the unmodified first array to generate the array of estimated temperature values for the earlier or later point in time.
- the two 100 × 100 × 3 arrays may be multiplied element-by-element, and the three 100 × 100 arrays of the resulting 100 × 100 × 3 array may be summed element-by-element to produce a single 100 × 100 array of updated values.
- the compositing engine 240 may use the weight maps to determine how much influence each kernel should have on a particular element of the final output.
- the convolution results from a kernel that produces a larger value in the weight map for a particular location than the other kernels will have more impact on the final output at that location, and vice versa. Accordingly, the weighting engine 230 and the compositing engine 240 may ensure that the kernels model the effects of the physical attributes where those attributes are relevant and not where they are irrelevant.
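One way the weight maps could express this per-location influence is a softmax across the kernel axis, so that the K weights at each location are non-negative and sum to 1. This normalization is an assumption for illustration; the source only requires that larger weights confer more influence:

```python
import numpy as np

def normalize_weight_maps(weight_maps):
    """Softmax over the kernel axis of a (K, H, W) stack of weight maps."""
    w = np.asarray(weight_maps, dtype=float)
    e = np.exp(w - w.max(axis=0, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=0, keepdims=True)
```

Equal raw weights then yield equal influence (1/K per kernel at every location), and the per-location weights always sum to 1.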
- the system 200 may include a training engine 250 .
- the training engine 250 may compare the updated description of the physical state to a true description of the physical state.
- the thermal imaging device 202 may capture a thermal image at the earlier or later point in time corresponding to the updated description of the physical state.
- the thermal image at the earlier or later point in time may be converted to the true description of the physical state.
- the preprocessing engine 204 may correct the thermal image and convert the thermal image into an array of true temperature values.
- the training engine 250 may compare the array of estimated temperature values to the array of true temperature values.
- the training engine 250 may compute a loss function, which may include computing the difference, the ratio, the mean squared error, the absolute error, or the like between the array of estimated temperature values and the array of true temperature values.
- the array of estimated temperature values and the array of true temperature values may correspond to the difference between temperature values of a second layer of powder at the earlier or later point in time and the temperature values of the first layer of powder immediately prior to addition of the second layer of powder.
- the training engine 250 may update the model used by the kernel generation engine 210 and the model used by the weighting engine 230 based on the comparison. For example, when the models are neural networks, the training engine 250 may update the neural networks by backpropagating an error through the neural networks (e.g., the error for each value determined by the training engine 250 when comparing the array of estimated temperature values to the array of true temperature values). The training engine 250 may update weights of the neurons in the neural networks by performing a gradient descent on the loss function. The training engine 250 may use a loss function based on comparing the array of estimated temperature values to the array of true temperature values to update weights for both the neural network used by the kernel generation engine 210 and the neural network used by the weighting engine 230 .
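The loss computation described above can be sketched as a mean squared error together with its gradient with respect to the estimate, which is the error a framework would backpropagate into both networks. This is a minimal illustration; the source lists several candidate loss functions and does not fix an optimizer:

```python
import numpy as np

def mse_loss_and_grad(estimated, true):
    """Mean squared error between estimated and true temperature arrays,
    plus its gradient with respect to the estimate (the backpropagated error)."""
    est = np.asarray(estimated, dtype=float)
    tru = np.asarray(true, dtype=float)
    diff = est - tru
    loss = float(np.mean(diff * diff))
    grad = 2.0 * diff / diff.size  # d(loss)/d(est), element-wise
    return loss, grad
```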
- the training data used by the training engine 250 may include input and output arrays of temperature values separated by the same predetermined time period so that the system 200 is trained to estimate thermal diffusion over that time period.
- the predetermined time period may be determined based on physical science insights, experimental experiences, or the like.
- the training engine 250 may determine how many kernels should be included in the plurality of kernels or may determine the size of the kernels.
- the system 200 may be trained using a predetermined number or size of kernels.
- the training engine 250 may analyze the kernels produced during or after training (e.g., the kernels produced after much of the training has occurred).
- the kernel is an array of values
- the training engine 250 may determine whether a kernel contains substantially all zero values or entirely all zero values.
- the kernel may contain substantially all zero values if all but a small percentage or number of the values are zero, if all the values are near zero relative to the other kernels, or both.
- the training engine 250 may reduce the number of kernels included in the plurality of kernels based on how many kernels contain substantially all zero values.
- the training engine 250 may increase the number of kernels. Similarly, the training engine 250 may analyze the kernels to determine whether values near the edges of the array are substantially all zero. The training engine 250 may decrease or increase the size of the kernels based on whether the values near the edges of the array are substantially all zero. The training engine 250 may restart training using the new number of kernels or the new kernel size.
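The zero-kernel check described above might look like the following sketch; the fraction and tolerance thresholds are illustrative assumptions, since the source leaves "substantially all zero" open-ended:

```python
import numpy as np

def is_substantially_zero(kernel, frac=0.95, tol=1e-6):
    """True if at least `frac` of the kernel's values are within `tol` of zero."""
    k = np.asarray(kernel, dtype=float)
    return bool(np.mean(np.abs(k) <= tol) >= frac)

def prune_kernels(kernels, frac=0.95, tol=1e-6):
    """Drop kernels that are substantially all zero after training."""
    return [k for k in kernels if not is_substantially_zero(k, frac, tol)]
```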
- FIG. 3 is a flow diagram of an example method 300 to update models to generate a kernel.
- a processor may perform the method 300 .
- the method 300 may include generating first and second arrays of temperature values.
- generating the first and second arrays of temperature values may include receiving temperature values from a sensor or remote device, deriving the temperature values from stored data, or the like.
- Block 304 may include computing an array of estimated temperature values based on the first array of temperature values.
- the first array of temperature values may be associated with a first time
- the second array of temperature values may be associated with a second time.
- Computing the array of estimated temperature values may include computing estimated temperature values associated with the second time using the first array of temperature values associated with the first time.
- computing the array of estimated temperature values may include using first and second models to compute the array of estimated temperature values.
- the method 300 may include comparing the array of estimated temperature values to the second array of temperature values.
- the array of estimated temperature values may be an estimate of the second array of temperature values. Accordingly, comparing the array of estimated temperature values to the second array of temperature values may determine the accuracy of the estimates.
- Block 308 may include updating the first and second models based on the comparing. For example, the first and second models may be updated to improve the accuracy of the array of estimated temperature values generated at block 304.
- referring to FIG. 2, the thermal imaging device 202 or the preprocessing engine 204 may perform block 302; the kernel generation engine 210, the calculation engine 220, the weighting engine 230, or the compositing engine 240 may perform block 304; and the training engine 250 may perform blocks 306 or 308.
- FIG. 4 is a flow diagram of an example method 400 to compute an array of estimated temperature values.
- a processor may perform the method 400 .
- the method 400 may include computing a plurality of kernels based on a first array of temperature values and a first model. Each kernel may relate to a physical attribute of a physical system.
- the first model may relate arrays of temperature values to kernels representative of the effects associated with that physical attribute.
- Block 404 may include applying the plurality of kernels to the first array of temperature values to produce intermediate arrays.
- the plurality of kernels may indicate how various physical attributes will affect the first array of temperature values. Applying the kernels may include computing the effects of those physical attributes on the first array of temperature values. In an example, applying the plurality of kernels may include convolving each kernel with the first array of temperature values to produce a corresponding intermediate array.
- the method 400 may include computing a plurality of weight maps based on the plurality of kernels and a second model. The second model may relate the kernels to their relevance at particular locations. For example, some physical attributes may have dominant effects in a first area while other physical attributes may have dominant effects in a second area. Accordingly, the plurality of weight maps may indicate how much the physical attributes associated with each kernel affects the temperature value at each location.
- Block 408 may include computing the array of estimated temperature values based on the plurality of intermediate arrays and the plurality of weight maps.
- each intermediate array may have a corresponding weight map (e.g., both may be generated from one of the plurality of kernels).
- the weight map may be applied to its corresponding intermediate array, and the intermediate arrays may be combined to produce the array of estimated temperature values. Applying the weight map may weight the value at each location of the intermediate array to reflect the relevance of the kernel producing that value to that location. Combining the intermediate arrays may combine the effects of the various physical attributes to produce an estimate of the temperature value that results from those effects.
- the kernel generation engine 110 may perform block 402; the calculation engine 120 may perform block 404; the weighting engine 130 may perform block 406; and the compositing engine 140 may perform block 408.
- FIG. 5 is a flow diagram of another example method 500 to update models to generate a kernel.
- a processor may perform the method 500 .
- the method 500 may include capturing a first thermal image at a first time and a second thermal image at a second time. Capturing the first and second thermal images may include sensing infrared radiation, such as with a thermal imaging device.
- the thermal images may be of a top layer of a print target, and the thermal images may be captured during printing.
- Block 504 may include correcting the first and second thermal images for distortion from an imaging device to create first and second corrected thermal images.
- a lens of the thermal imaging device may cause curvature such that straight lines being imaged appear as curved lines in the image.
- correcting the first and second thermal images may include undoing the distortion caused by the imaging device so that straight lines being imaged appear as straight lines in the corrected thermal images.
- the method 500 may include converting the first and second corrected thermal images to create the first and second arrays of temperature values.
- the corrected thermal images may include an array of measured intensities of infrared radiation.
- the measured intensities of infrared radiation may correspond to temperatures or emissivities of the image target. Accordingly, each intensity in the array of measured intensities may be used to calculate a corresponding temperature value.
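- As an illustration, a linear intensity-to-temperature conversion might look as follows; the gain and offset values are hypothetical placeholders, since the actual predetermined or measured relationship depends on the device and material.

```python
def intensities_to_temperatures(intensities, gain=0.05, offset=290.0):
    # Hypothetical linear calibration: temperature = offset + gain * intensity.
    # A real device would use its own measured intensity-temperature relationship.
    return [[offset + gain * value for value in row] for row in intensities]
```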
- Block 508 may include computing an array of estimated temperature values based on the first array of temperature values.
- the array of estimated temperature values may be computed using first and second models.
- the array of estimated temperature values may be computed in the manner discussed above with reference to FIG. 4 .
- the first and second models may include neural networks.
- each model may include a neural network, such as a convolutional neural network, a de-convolutional neural network, a super-resolution convolutional neural network, or the like.
- the method 500 may include comparing the array of estimated temperature values to the second array of temperature values.
- the array of estimated temperature values may be an estimate of the temperature values at the second time while the second array of temperature values is the true temperature values at that second time. Accordingly, comparing the array of estimated temperature values to the second array of temperature values may include computing an error for each element of the arrays using a loss function.
- Block 512 may include backpropagating an error determined based on the comparing.
- the error may be determined based on a loss function.
- the weights used by the neural networks of the first and second models may be updated based on a gradient of the loss function.
- Backpropagating the error may include finding weights that minimize the error, for example, by performing a gradient descent of the loss function.
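- The comparison and update steps can be sketched as follows. The mean-squared-error loss and the finite-difference gradient step are stand-ins for whatever loss function and backpropagation procedure an implementation actually uses:

```python
def mse(estimated, true):
    # Per-element squared error, averaged over the array (one possible loss).
    total, n = 0.0, 0
    for est_row, true_row in zip(estimated, true):
        for e, t in zip(est_row, true_row):
            total += (e - t) ** 2
            n += 1
    return total / n

def gradient_step(weight, loss_fn, lr=0.1, eps=1e-6):
    # One gradient-descent update on a scalar weight using a finite-difference
    # gradient, standing in for backpropagation through the networks.
    grad = (loss_fn(weight + eps) - loss_fn(weight - eps)) / (2 * eps)
    return weight - lr * grad
```

Repeating `gradient_step` drives the weight toward a minimizer of the loss, mirroring how backpropagating the error updates the network weights.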
- a training data set may include a set of first and second arrays of temperature values or a set of first and second thermal images that may be used to update the weights of the neural networks.
- the method 500 may include determining that a kernel contains substantially all zero values (e.g., a kernel used to compute the array of estimated temperature values as discussed with reference to FIG. 4). After some amount of training, a kernel may converge toward containing substantially or entirely all zero values.
- the method 500 may include reducing the number of kernels included in the plurality of kernels. For example, a kernel converging towards all zero values may indicate that the plurality of kernels includes more kernels than are needed to model the physical attributes. Accordingly, the unnecessary kernels may be discarded. In some examples, training can be restarted with the updated number of kernels.
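- Pruning kernels that have converged toward all zero values might be sketched as follows (the tolerance is an assumed hyperparameter):

```python
def prune_kernels(kernels, tol=1e-3):
    # Discard kernels whose values are all substantially zero; the remaining
    # kernels are enough to model the physical attributes.
    def near_zero(kernel):
        return all(abs(v) <= tol for row in kernel for v in row)
    return [k for k in kernels if not near_zero(k)]
```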
- the preprocessing engine 204 may perform blocks 504 or 506; the kernel generation engine 210, the calculation engine 220, the weighting engine 230, or the compositing engine 240 may perform block 508; and the training engine 250 may perform blocks 510, 512, 514, or 516.
- FIG. 6 is a block diagram of an example computer-readable medium 600 including instructions that, when executed by a processor 602 , cause the processor 602 to generate a kernel based on a physical state.
- the physical state may be represented by an array of temperature values.
- the computer-readable medium 600 may be a non-transitory computer-readable medium, such as a volatile computer-readable medium (e.g., volatile RAM, a processor cache, a processor register, etc.), a non-volatile computer-readable medium (e.g., a magnetic storage device, an optical storage device, a paper storage device, flash memory, read-only memory, non-volatile RAM, etc.), and/or the like.
- the processor 602 may be a general purpose processor or special purpose logic, such as a microprocessor (e.g., a central processing unit, a graphics processing unit, etc.), a digital signal processor, a microcontroller, an ASIC, an FPGA, a programmable array logic (PAL), a programmable logic array (PLA), a programmable logic device (PLD), etc.
- the computer-readable medium 600 may include a kernel computation module 610 .
- a “module” (in some examples referred to as a “software module”) is a set of instructions that when executed or interpreted by a processor or stored at a processor-readable medium realizes a component or performs a method.
- the kernel computation module 610 may include instructions that, when executed, cause the processor 602 to compute a kernel based on an array of temperature values using a first neural network. For example, the kernel computation module 610 may cause the processor 602 to use the array of temperature values as an input to the first neural network.
- the kernel computation module 610 may cause the processor 602 to use the first neural network to compute the kernel as an output from the first neural network.
- the computer-readable medium 600 may include a kernel application module 620 .
- the kernel application module 620 may cause the processor 602 to apply the kernel to the array of temperature values to produce an intermediate array of values.
- the kernel application module 620 may cause the processor 602 to compute the intermediate array based on the kernel and the array of temperature values.
- the array of temperature values may be used by the kernel computation module 610 to compute the kernel that is applied to the same array of temperature values by the kernel application module 620 .
- the computer-readable medium 600 may include a weight computation module 630 .
- the weight computation module 630 may cause the processor 602 to compute a weight map based on the kernel using a second neural network. For example, the weight computation module 630 may cause the processor 602 to use the kernel as an input to the second neural network.
- the weight computation module 630 may cause the processor 602 to use the second neural network to compute the weight map as an output from the second neural network.
- the weight map may be the same size as the intermediate array and include a weight value corresponding to each value in the intermediate array.
- the computer-readable medium 600 may include a weight application module 640 .
- the weight application module 640 may cause the processor 602 to apply the weight map to the intermediate array to produce an updated array of temperature values.
- the weight application module 640 may cause the processor 602 to adjust the values of the intermediate array based on the weight map.
- the kernel computation module 610 may realize the kernel generation engine 110 of FIG. 1 ; the kernel application module 620 may realize the calculation engine 120 ; the weight computation module 630 may realize the weighting engine 130 ; and the weight application module 640 may realize the compositing engine 140 .
- FIG. 7 is a block diagram of another example computer-readable medium 700 including instructions that, when executed by a processor 702 , cause the processor 702 to generate a kernel based on a physical state.
- the physical state again may be represented by an array of temperature values.
- the computer-readable medium 700 may include a kernel computation module 710 .
- the kernel computation module 710 may include instructions that, when executed, cause the processor 702 to compute a kernel based on the array of temperature values using a first neural network.
- the kernel computation module 710 may cause the processor 702 to simulate the first neural network and use the array of temperature values as an input to the first neural network.
- the kernel computation module 710 may cause the processor 702 to produce a plurality of kernels based on the array of temperature values when simulating the first neural network.
- the first neural network may include a convolutional neural network.
- the kernel computation module 710 may include an x-y kernel module 712 and a z kernel module 714 .
- the x-y kernel module 712 may cause the processor 702 to receive an array of temperature values corresponding to a single layer of powder in a three-dimensional printer and to compute a kernel based on that array of temperature values.
- the array of temperature values may include an array of differences between temperature values of a first layer of powder in a three-dimensional printer and temperature values of a second layer of powder in the three-dimensional printer.
- the thermal diffusion in the three-dimensional printer in the x or y directions may be different than the thermal diffusion in the three-dimensional printer in the z direction.
- the z kernel module 714 may cause the processor 702 to receive an array of temperature values corresponding to differences between temperature values of the first layer and temperature values of the second layer.
- the array of temperature values may be a difference array computed by an element-by-element subtraction between an array associated with temperature values of the first layer and an array associated with temperature values of the second layer.
- the z kernel module 714 may cause the processor 702 to compute a kernel based on the array of temperature values corresponding to differences.
- the x-y kernel module 712 and the z kernel module 714 may each be associated with a different neural network than the other, and each may cause the processor 702 to compute kernels using its associated neural network.
- the z kernel module 714 may cause the processor to compute a plurality of kernels based on an array of temperature values for a single layer of powder, but the plurality of kernels may be applicable to an array of temperature values corresponding to differences in temperature values.
- the kernels may be applied (as discussed below) to the difference array, and the updated array of temperature values (discussed below) may be an updated array of differences (e.g., regardless of the input to the z kernel module 714 ).
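- The difference array described above amounts to an element-by-element subtraction; a sketch follows. The sign convention (first layer minus second layer) is an assumption, since the description only specifies a subtraction between the two arrays:

```python
def difference_array(first, second):
    # Element-by-element subtraction between the first and second layers'
    # temperature arrays (sign convention assumed).
    return [[f - s for f, s in zip(first_row, second_row)]
            for first_row, second_row in zip(first, second)]
```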
- the computer-readable medium 700 may include a kernel application module 720 .
- the kernel application module 720 may cause the processor 702 to apply the kernel to the array of temperature values to produce an intermediate array.
- the kernel application module 720 may include a convolution module 722 .
- the convolution module 722 may cause the processor 702 to convolve the kernel with the array of temperature values to produce the intermediate array.
- the kernel may model thermal diffusion between an element of the array of temperature values and nearby elements of the array of temperature values.
- the kernel may be represented as an array of values that when convolved with the array of temperature values models the thermal energy transferred among the elements of the array.
- the convolution module 722 may cause the processor 702 to convolve each kernel with a copy of the array of temperature values.
- the computer-readable medium 700 may include a weight computation module 730 .
- the weight computation module 730 may cause the processor 702 to compute a weight map based on the kernel using a second neural network.
- the weight computation module 730 may cause the processor 702 to simulate the second neural network and use the kernel as an input to the second neural network.
- the second neural network may include a de-convolutional neural network, a super-resolution convolutional neural network, or the like.
- the weight computation module 730 may cause the processor 702 to compute a plurality of weight maps based on the plurality of kernels when simulating the second neural network.
- the computer-readable medium 700 may include a weight application module 740 .
- the weight application module 740 may cause the processor 702 to apply the weight map to the intermediate array to produce an updated array of temperature values.
- the weight application module 740 may cause the processor 702 to multiply the weight map by the intermediate array element-by-element.
- the kernel may model particular physical attributes of the thermal diffusion among elements of the array of temperature values, and the weight map may reflect the relevance of those particular physical attributes to different areas of the array of temperature values.
- the weight map may include a smaller value that reduces the value of an element of the intermediate array where the particular physical attributes are less relevant, and the weight map may include a larger value that increases or does not reduce as much the value of an element of the intermediate array where the particular physical attributes are more relevant.
- the weight application module 740 may cause the processor 702 to multiply each weight map by a corresponding intermediate array element-by-element.
- the weight application module 740 may cause the processor 702 to sum together the results from the multiplications element-by-element to compute the updated array of temperature values.
- the updated array of temperature values may be computed based on an array of temperature values and an array of non-temperature values.
- the kernel computation module 710 may cause the processor 702 to compute an additional kernel based on the array of non-temperature values.
- the kernel application module 720 may cause the processor 702 to apply the additional kernel to the array of temperature values or the array of non-temperature values to produce an additional intermediate array.
- the weight computation module 730 may cause the processor 702 to compute an additional weight map based on the additional kernel.
- the weight application module 740 may cause the processor 702 to composite the intermediate array with the additional intermediate array based on the weight map and the additional weight map.
- the weight application module 740 may cause the processor 702 to multiply the additional intermediate array by the additional weight map element-by-element.
- the weight application module 740 may cause the processor 702 to sum together the results from the multiplications element-by-element to compute the updated array of temperature values.
- the weight application module 740 may cause the processor 702 to sum together results flowing from the plurality of kernels with results flowing from the additional kernel.
- the computer-readable medium 700 may include a training module 750 .
- the training module 750 may cause the processor 702 to calculate an error between the updated array of temperature values and an array of true temperature values.
- the training module 750 may cause the processor 702 to adjust the first and second neural networks based on the error.
- the training module 750 may cause the processor 702 to compute the error between the updated array of temperature values and the array of true temperature values using a loss function, and the training module 750 may cause the processor 702 to use a gradient of the loss function to minimize the error.
- the training module 750 may cause the processor 702 to update weights in the first and second neural networks based on the loss function applied to the updated array of temperature values and the array of true temperature values.
- Referring to FIG. 2, when executed by the processor 702, the kernel computation module 710 may realize the kernel generation engine 210; the kernel application module 720 may realize the calculation engine 220; the weight computation module 730 may realize the weighting engine 230; the weight application module 740 may realize the compositing engine 240; and the training module 750 may realize the training engine 250.
Description
- Various physical phenomena can be modeled using computers. For example, a computer may store data representative of a physical phenomenon. A complete dataset may be copied to the computer, or the computer may receive the data from a sensor, for example, over time. The computer may perform various operations on the data to further represent additional aspects of the physical phenomenon. Accordingly, the computer may provide a deeper understanding of the physical phenomena being modeled.
- FIG. 1 is a block diagram of an example system to generate a kernel based on a physical state.
- FIG. 2 is a block diagram of another example system to generate a kernel based on a physical state.
- FIG. 3 is a flow diagram of an example method to update models to generate a kernel.
- FIG. 4 is a flow diagram of an example method to compute an array of estimated temperature values.
- FIG. 5 is a flow diagram of another example method to update models to generate a kernel.
- FIG. 6 is a block diagram of an example computer-readable medium including instructions that cause a processor to generate a kernel based on a physical state.
- FIG. 7 is a block diagram of another example computer-readable medium including instructions that cause a processor to generate a kernel based on a physical state.
- In some examples, physical phenomena may be complicated to model by a computer. For example, some physical phenomena may be anisotropic, such as thermal diffusivity in heterogeneous or anisotropic materials, conductivity in particular materials (e.g., electrical conductivity in certain crystals), stiffness of particular materials, or the like. The physical phenomena may depend on several physical attributes. For example, thermal diffusivity may depend on the temperature field and phase properties at a location and at neighboring locations. Aspects of the physical phenomena may be modeled at numerous locations, and numerous calculations may be performed for each location to account for the various interactions occurring at each location and among neighboring locations. However, such modeling may quickly lead to performing extraordinarily large numbers of calculations. Such large numbers of calculations may be time consuming to perform and may take too long to be usable in many situations.
- In an example, a printer, such as a three-dimensional printer, may deliver thermal energy to a print target, such as a print bed where loose powder is fused or sintered layer-by-layer to build a printed part. The printer may also deliver chemical agents (e.g., property changing agents, such as fusing agents, detailing agents, etc.) selectively on the print target to eventually trigger phase changes and drive the powder at selected locations to be fused together. Accordingly, the volume elements (“voxels”) of the print target may include different materials relative to one another. The thermal diffusivity of each voxel may depend on the properties of that voxel as well as the properties of neighboring voxels. These properties may depend on the materials contained in each voxel or the current state of those materials, which materials and states may vary due to the heterogeneity of the voxels. Accordingly, thermal diffusion in the printer may be anisotropic and may be complicated to model. Significant numbers of calculations may be involved in modeling thermal diffusion over even short periods of time. Modeling of complicated physical phenomena, such as anisotropic phenomena (e.g., thermal diffusion in three-dimensional printers), may be improved by providing models that permit reasonable numbers of calculations to be performed while modeling the physical phenomena.
- FIG. 1 is a block diagram of an example system 100 to generate a kernel based on a physical state. The system 100 may include a kernel generation engine 110. As used herein, the term “engine” refers to hardware (e.g., a processor, such as an integrated circuit or other circuitry) or a combination of software (e.g., programming such as machine- or processor-executable instructions, commands, or code such as firmware, a device driver, programming, object code, etc.) and hardware. Hardware includes a hardware element with no software elements such as an application specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA), etc. A combination of hardware and software includes software hosted at hardware (e.g., a software module that is stored at a processor-readable memory such as random access memory (RAM), a hard-disk or solid-state drive, resistive memory, or optical media such as a digital versatile disc (DVD), and/or executed or interpreted by a processor), or hardware and software hosted at hardware.
- The kernel generation engine 110 may generate a plurality of kernels based on a description of a physical state. As used herein, the term “description of a physical state” refers to data indicative of characteristics of a physical system. For example, the data may include raw or processed measurements of the physical system. As used herein, the term “kernel” refers to an indication of a relationship between inputs to the kernel and outputs from the kernel. For example, the kernel may indicate the relationship between a description of a physical state of a physical system at a first time and a description of a physical state of the physical system at a second time. Such a relationship may reflect a physical law. In an example, the plurality of kernels are generated based on applying a neural network (e.g., a deep neural network) to the description of the physical state. For example, the description of the physical state may be used as an input to the neural network, and the neural network may produce the plurality of kernels based on the input.
- The system 100 may include a calculation engine 120 to apply the plurality of kernels to the description of the physical state to produce a plurality of intermediate descriptions. For example, the description of the physical state may be used as an input to each kernel. Each kernel may produce an intermediate description as an output. Accordingly, there may be a corresponding number of kernels and intermediate descriptions.
- The system 100 may include a weighting engine 130 to determine a plurality of weight maps based on the plurality of kernels. For example, the weighting engine 130 may determine how much weight should be accorded to various portions of each intermediate description based on the kernel that produced the intermediate description. Each weight map may indicate the weight to be applied to each portion of the corresponding intermediate description.
- The system 100 may include a compositing engine 140 to apply the plurality of weight maps to the plurality of intermediate descriptions to produce weighted intermediate descriptions. For example, each portion of each intermediate description may be weighted as indicated by the corresponding portion of the corresponding weight map. The compositing engine 140 may combine the weighted intermediate descriptions to produce an updated description of the physical state. In some examples, the compositing engine 140 may combine the weighted intermediate descriptions by computing a sum, an arithmetic or geometric mean, a median, a mode, a minimum, a maximum, or the like.
- FIG. 2 is a block diagram of another example system 200 to generate a kernel based on a physical state. The system 200 may include a thermal imaging device 202. The thermal imaging device 202 may generate a thermal image by sensing infrared radiation at a plurality of picture elements (pixels) at a point in time. As used herein, the term “point in time” refers to a time period that is short relative to the time spans over which measurable changes occur to a physical state being measured. For example, the length of the point in time may correspond to the shutter speed of the thermal imaging device. Each pixel of the thermal image may correspond to the intensity of the infrared radiation at that pixel. In an example, the thermal imaging device 202 may capture a thermal image of a print target. The thermal imaging device 202 may capture an overhead view of the print target and thus depict an x-y plane.
- In an example, the thermal imaging device 202 may capture a first thermal image of a first layer of powder on the print target and a second thermal image of a second layer of powder on the print target. For example, the thermal imaging device 202 may capture the first thermal image immediately prior to the second layer of powder being added on top of the first layer of powder. As used herein, the term “immediately prior” refers to capturing the thermal image when little change will occur to the physical state being measured before the next layer is added. In an example, the thermal imaging device 202 may be associated with a frame rate, and the last frame captured before adding the second layer of powder may be captured immediately prior to the addition of the second layer of powder. As used herein, the terms “first” and “second” are used to differentiate between different elements and may not indicate position. For example, the first layer of powder may not be a bottom layer of powder.
- The system 200 may include a preprocessing engine 204. The preprocessing engine 204 may correct the thermal image to compensate for distortion caused by the thermal imaging device 202 (e.g., distortion caused by a lens or camera angle of the thermal imaging device 202). For example, straight lines on the print target may appear as curved lines in the thermal image. The preprocessing engine 204 may apply an inversion of the distortion to the thermal image, for example, so that straight lines on the print target appear as straight lines in the corrected thermal image. The preprocessing engine 204 may convert the thermal image into an array of temperature values at the point in time. As used herein, the term “array” refers to a group of data elements (e.g., data elements indicative of temperature values). The array may be a multidimensional array, such as a two-dimensional array with the dimensions corresponding to the dimensions of the corrected thermal image. In an example, the preprocessing engine 204 may compute the temperature values from the intensity values based on a predetermined or measured relationship between intensity and temperature.
- The system 200 may include a kernel generation engine 210. In an example, the kernel generation engine 210 may generate a plurality of kernels based on the array of temperature values. In other examples, the kernel generation engine 210 may generate the plurality of kernels based on a description of a physical state other than the array of temperature values (e.g., a thermal image, an array of electric potentials or currents, an array of stresses or strains, etc.). The kernel generation engine 210 may use a model to compute the plurality of kernels based on the array of temperature values. In some examples, the model may include a neural network, such as a convolutional neural network. The array of temperature values may be input to the convolutional neural network. In some examples, the array of temperature values may be a difference between two arrays of temperature values (e.g., the difference between a first array corresponding to a first thermal image of a first layer of powder and a second array corresponding to a second thermal image of a second layer of powder).
- In an example, the input to the convolutional neural network may include a two-dimensional array of temperature values. The layers in the convolutional neural network and the output of the convolutional neural network may include three-dimensional arrays of values. For example, the size of the third dimension of the output may be equal to the number of kernels to be generated. The hidden layers of the convolutional neural network may include third dimensions with sizes equal to the number of kernels to be generated, that are integer multiples of the number of kernels to be generated, or the like. In an example, there may be three kernels and a 100×100 array of temperature values. The input may be the 100×100 array of temperature values, a second layer may produce a 33×33×6 array of values, a third layer may produce an 11×11×6 array of values, and an output of the convolutional neural network may be a 5×5×3 array of values. In some examples, an alternate description of the physical state may also be included in the input. For example, the input may include an array of temperature values and an array of alternate values associated with the print target (e.g., the input may include a three-dimensional array constructed from a plurality of two-dimensional arrays variously describing a physical state of a physical system). The array of alternate values may include fusing or detailing agent distribution maps (e.g., maps that indicate the phase or material status of each voxel), distribution maps of other chemical agents, intensities from a visible light image of the print target, or the like. In an example, each kernel may include an array of values. For example, each kernel may include a two-dimensional array of values, and the output of the convolutional neural network may include a three-dimensional array that includes the plurality of kernels. In such an example, the kernel may not explicitly indicate the calculations relating inputs to the kernel to outputs from the kernel; rather, the kernel may include an array of values used in predetermined calculations.
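- Since the network output is a three-dimensional array that stacks the plurality of kernels along its third dimension, a 5×5×3 output can be split into three 5×5 kernels. A sketch of that bookkeeping (the helper name is hypothetical):

```python
def split_kernels(output, num_kernels):
    # Split a kh x kw x n network output into n separate kh x kw kernels,
    # one per slice of the third dimension.
    kh, kw = len(output), len(output[0])
    return [[[output[i][j][k] for j in range(kw)] for i in range(kh)]
            for k in range(num_kernels)]
```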
- The
system 200 may include acalculation engine 220. Thecalculation engine 220 may apply the plurality of kernels to the description of the physical state to produce a plurality of intermediate descriptions. For example, thecalculation engine 220 may receive the array of temperature values as an input and calculate each of a plurality of intermediate arrays of values based on the array of temperature values and one of the plurality of kernels. In an example, thecalculation engine 220 may compute each intermediate array of values by convolving each kernel with the array of temperature values. Accordingly, each intermediate array may correspond to one of the plurality of kernels. In some examples, another function, such as a cross-correlation, may be used rather than a convolution. In some examples, thecalculation engine 220 may apply the plurality of kernels to the difference between two arrays of temperature values. In an example, thekernel generation engine 210 may generate the plurality of kernels using one of the two arrays of temperature values as an input to the neural network, and thecalculation engine 220 may apply the plurality of kernels to the difference. There may be three kernels and a 100×100 array of temperature values in an example. The result of the convolution operations may be a 100×100×3 array of intermediate values (e.g., three 100×100 arrays of values). - The plurality of kernels may model thermal diffusion between an element of the array of temperature values and nearby elements of the array of temperature values. Such nearby elements may represent areas neighboring a particular location on the print target and that may act as heat sources or heat sinks affecting the thermal diffusion to or from that particular location. Various physical attributes may affect the thermal diffusion, so each kernel may model a different physical attribute related to the thermal diffusion. 
Each kernel may model the pattern of thermal diffusion that will occur from nearby elements when accounting for a particular physical attribute. The convolutional neural network may be trained to generate kernels that model the physical attributes rather than the convolutional neural network itself modeling the physical attributes. In an example, each kernel may be a 5×5 array of values so that the kernels account for thermal diffusion from two neighboring elements in each direction.
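A minimal NumPy sketch of this step, assuming zero-padded "same" convolution so each intermediate array matches the 100×100 input; the Gaussian form of the first kernel is only one physically plausible shape for a diffusion kernel, not the form a trained network would necessarily produce:

```python
import numpy as np

def convolve_same(image, kernel):
    """2-D convolution with zero padding so the output matches the
    input size (the 'same' behavior assumed by the 100x100 example)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]  # convolution flips the kernel
    h, w = image.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

# A diffusion-like 5x5 kernel: a normalized Gaussian, so heat spreads
# to the two neighboring elements in each direction while the total is
# conserved. The width is an illustrative assumption.
ax = np.arange(5) - 2
xx, yy = np.meshgrid(ax, ax)
gaussian = np.exp(-(xx ** 2 + yy ** 2) / 2.0)
gaussian /= gaussian.sum()

rng = np.random.default_rng(0)
temps = rng.uniform(150.0, 200.0, size=(100, 100))   # temperature map
kernels = [gaussian] + [rng.normal(size=(5, 5)) for _ in range(2)]

intermediate = np.stack([convolve_same(temps, k) for k in kernels], axis=-1)
print(intermediate.shape)  # (100, 100, 3)
```

Stacking the three convolution results along a last axis yields the 100×100×3 array of intermediate values described above.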
- The
system 200 may include a weighting engine 230. The weighting engine 230 may determine a plurality of weight maps based on the plurality of kernels. Different physical attributes may be dominant for different temperature values or under different conditions. Accordingly, the plurality of weight maps may indicate how applicable each kernel is for the various locations in the array of temperature values. There may be a corresponding number of kernels and weight maps. The weighting engine 230 may use a model to compute the plurality of weight maps based on the plurality of kernels. In some examples, the model may include a neural network, such as a de-convolutional neural network, a super-resolution convolutional neural network, or the like. The weighting engine 230 may apply the neural network to the plurality of kernels to determine the plurality of weight maps. For example, there may be a 100×100 array of temperature values and three kernels that each include a 5×5 array of values. The input to the neural network may be the plurality of kernels represented by a 5×5×3 array of values. A second layer may produce an 11×11×3 array of values, a third layer may produce a 33×33×3 array of values, and the output of the neural network may be a 100×100×3 array of weight values. In some examples, the plurality of weight maps may include an array of weight values that is the same size as the plurality of intermediate arrays of values. - The
system 200 may include a compositing engine 240. The compositing engine 240 may apply the plurality of weight maps to the plurality of intermediate descriptions to produce weighted intermediate descriptions. For example, the compositing engine 240 may multiply each value in the array of weight values by a corresponding value in the plurality of intermediate arrays of values (e.g., an element-by-element multiplication) to produce a plurality of weighted intermediate arrays of values. The compositing engine 240 may combine the weighted intermediate descriptions to produce an updated description of the physical state. For example, the compositing engine 240 may sum the weighted intermediate arrays of values to produce an array of updated temperature values (e.g., an element-by-element summation of the plurality of weighted intermediate arrays of values with each weighted intermediate array of values as a summand). The updated description of the physical state may include an estimate of the physical state at an earlier or later point in time (e.g., an array of estimated temperature values for an earlier or later point in time). In an example where the calculation engine 220 applies the plurality of kernels to a difference between the first and second arrays of temperature values, summing the intermediate arrays may produce an updated difference array reflecting the differences between temperature values of the second layer of powder at the earlier or later point in time and the unmodified first array. The compositing engine 240 may add the updated difference array to the unmodified first array to generate the array of estimated temperature values for the earlier or later point in time. 
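The weighting and compositing steps can be sketched together as follows; the softmax normalization that makes the three weights at each location sum to one is our assumption (the disclosure only requires a 100×100×3 stack of weight values), and both input stacks here are random stand-ins for the engines' real outputs:

```python
import numpy as np

rng = np.random.default_rng(1)
intermediate = rng.normal(size=(100, 100, 3))  # stand-in kernel outputs
raw = rng.normal(size=(100, 100, 3))           # stand-in network output

# Weight maps: softmax across the kernel axis (assumed normalization),
# so the three weights at each (i, j) location sum to 1
e = np.exp(raw - raw.max(axis=-1, keepdims=True))
weight_maps = e / e.sum(axis=-1, keepdims=True)

# Compositing: element-by-element multiply, then sum over the kernel
# axis, collapsing three weighted 100x100 planes into one update
updated = (intermediate * weight_maps).sum(axis=-1)
print(updated.shape)  # (100, 100)
```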
In an example with a 100×100×3 array of intermediate values and a 100×100×3 array of weight values, the two 100×100×3 arrays may be multiplied element-by-element, and the three 100×100 arrays of the resulting 100×100×3 array may be summed element-by-element to produce a single 100×100 array of updated values. - The
compositing engine 240 may use the weight maps to determine how much influence each kernel should have on a particular element of the final output. The convolution results from a kernel whose weight map has a larger value at a particular location than the other kernels' weight maps will have more impact on the final output at that location than the convolution results from the other kernels, and vice versa. Accordingly, the weighting engine 230 and the compositing engine 240 may ensure that the kernels model the effects of the physical attributes where those physical attributes are relevant and not where they are irrelevant. - The
system 200 may include a training engine 250. The training engine 250 may compare the updated description of the physical state to a true description of the physical state. For example, the thermal imaging device 202 may capture a thermal image at the earlier or later point in time corresponding to the updated description of the physical state. The thermal image at the earlier or later point in time may be converted to the true description of the physical state. For example, the preprocessing engine 204 may correct the thermal image and convert the thermal image into an array of true temperature values. The training engine 250 may compare the array of estimated temperature values to the array of true temperature values. For example, the training engine 250 may compute a loss function, which may include computing the difference, the ratio, the mean squared error, the absolute error, or the like between the array of estimated temperature values and the array of true temperature values. In an example, the array of estimated temperature values and the array of true temperature values may correspond to the difference between temperature values of a second layer of powder at the earlier or later point in time and the temperature values of the first layer of powder immediately prior to addition of the second layer of powder. - The
training engine 250 may update the model used by the kernel generation engine 210 and the model used by the weighting engine 230 based on the comparison. For example, when the models are neural networks, the training engine 250 may update the neural networks by backpropagating an error through the neural networks (e.g., the error for each value determined by the training engine 250 when comparing the array of estimated temperature values to the array of true temperature values). The training engine 250 may update weights of the neurons in the neural networks by performing a gradient descent on the loss function. The training engine 250 may use a loss function based on comparing the array of estimated temperature values to the array of true temperature values to update weights for both the neural network used by the kernel generation engine 210 and the neural network used by the weighting engine 230. In some examples, the training data used by the training engine 250 may include input and output arrays of temperature values separated by the same predetermined time period so that the system 200 is trained to estimate thermal diffusion over that time period. The predetermined time period may be determined based on physical science insights, experimental experiences, or the like. - In some examples, the
training engine 250 may determine how many kernels should be included in the plurality of kernels or may determine the size of the kernels. The system 200 may be trained using a predetermined number or size of kernels. The training engine 250 may analyze the kernels produced during or after training (e.g., the kernels produced after much of the training has occurred). In examples where the kernel is an array of values, the training engine 250 may determine whether a kernel contains substantially all zero values or entirely all zero values. The kernel may contain substantially all zero values if all but a small percentage or number of values are zero, if all the values are near zero relative to other kernels, both, or the like. The training engine 250 may reduce the number of kernels included in the plurality of kernels based on how many kernels contain substantially all zero values. If none of the plurality of kernels includes substantially all zero values and the number of kernels has not been decreased previously, the training engine 250 may increase the number of kernels. Similarly, the training engine 250 may analyze the kernels to determine whether values near the edges of the array are substantially all zero. The training engine 250 may decrease or increase the size of the kernels based on whether the values near the edges of the array are substantially all zero. The training engine 250 may restart training using the new number of kernels or the new kernel size. -
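The pruning heuristic might look like the following sketch; the 5% tolerance and the relative near-zero threshold are assumptions, since the disclosure leaves "a small percentage" unspecified:

```python
import numpy as np

def substantially_zero(kernel, all_kernels, frac=0.05):
    """A kernel is 'substantially all zero' if all but a small fraction
    of its entries are near zero relative to the other kernels."""
    scale = max(float(np.abs(all_kernels).max()), 1e-12)
    near_zero = np.abs(kernel) < 1e-3 * scale  # near zero vs. peers
    return near_zero.mean() >= 1.0 - frac

def prune(kernels):
    """Drop kernels that have converged to substantially all zeros."""
    kept = [k for k in kernels if not substantially_zero(k, kernels)]
    return np.array(kept)

ks = np.stack([np.ones((5, 5)), np.zeros((5, 5)), np.full((5, 5), 2.0)])
print(len(prune(ks)))  # 2
```

An analogous check restricted to the border entries of each kernel would support the kernel-size decision described above.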
FIG. 3 is a flow diagram of an example method 300 to update models to generate a kernel. A processor may perform the method 300. At block 302, the method 300 may include generating first and second arrays of temperature values. For example, generating the first and second arrays of temperature values may include receiving temperature values from a sensor or remote device, deriving the temperature values from stored data, or the like. -
Block 304 may include computing an array of estimated temperature values based on the first array of temperature values. The first array of temperature values may be associated with a first time, and the second array of temperature values may be associated with a second time. Computing the array of estimated temperature values may include computing estimated temperature values associated with the second time using the first array of temperature values associated with the first time. As discussed below with reference to FIG. 4, computing the array of estimated temperature values may include using first and second models to compute the array of estimated temperature values. - At
block 306, the method 300 may include comparing the array of estimated temperature values to the second array of temperature values. For example, the array of estimated temperature values may be an estimate of the second array of temperature values. Accordingly, comparing the array of estimated temperature values to the second array of temperature values may determine the accuracy of the estimates. Block 308 may include updating the first and second models based on the comparing. For example, the first and second models may be updated to improve the accuracy of the array of estimated temperature values generated at block 304. Referring to FIG. 2, in an example, the thermal imaging device 202 or the preprocessing engine 204 may perform block 302; the kernel generation engine 210, the calculation engine 220, the weighting engine 230, or the compositing engine 240 may perform block 304; and the training engine 250 may perform blocks 306 and 308. -
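Two of the listed comparison measures, as commonly defined (mean squared error and mean absolute error); this is a sketch of the comparison at block 306, not the disclosure's required loss:

```python
import numpy as np

def mse(estimated, true):
    """Mean squared error between estimated and true temperatures."""
    return float(np.mean((estimated - true) ** 2))

def mae(estimated, true):
    """Mean absolute error between estimated and true temperatures."""
    return float(np.mean(np.abs(estimated - true)))

est = np.array([[1.0, 2.0], [3.0, 4.0]])
tru = np.array([[1.5, 2.0], [2.0, 4.0]])
print(mse(est, tru), mae(est, tru))  # 0.3125 0.375
```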
FIG. 4 is a flow diagram of an example method 400 to compute an array of estimated temperature values. A processor may perform the method 400. At block 402, the method 400 may include computing a plurality of kernels based on a first array of temperature values and a first model. Each kernel may relate to a physical attribute of a physical system. The first model may relate arrays of temperature values to kernels representative of the effects associated with that physical attribute. -
Block 404 may include applying the plurality of kernels to the first array of temperature values to produce intermediate arrays. For example, the plurality of kernels may indicate how various physical attributes will affect the first array of temperature values. Applying the kernels may include computing the effects of those physical attributes on the first array of temperature values. In an example, applying the plurality of kernels may include convolving each kernel with the first array of temperature values to produce a corresponding intermediate array. At block 406, the method 400 may include computing a plurality of weight maps based on the plurality of kernels and a second model. The second model may relate the kernels to their relevance at particular locations. For example, some physical attributes may have dominant effects in a first area while other physical attributes may have dominant effects in a second area. Accordingly, the plurality of weight maps may indicate how much the physical attributes associated with each kernel affect the temperature value at each location. -
Block 408 may include computing the array of estimated temperature values based on the plurality of intermediate arrays and the plurality of weight maps. For example, each intermediate array may have a corresponding weight map (e.g., both may be generated from one of the plurality of kernels). The weight map may be applied to its corresponding intermediate array, and the intermediate arrays may be combined to produce the array of estimated temperature values. Applying the weight map may weight the value at each location of the intermediate array to reflect the relevance of the kernel producing that value to that location. Combining the intermediate arrays may combine the effects of the various physical attributes to produce an estimate of the temperature value that results from those effects. Referring to FIG. 1, in an example, the kernel generation engine 110 may perform block 402; the calculation engine 120 may perform block 404; the weighting engine 130 may perform block 406; and the compositing engine 140 may perform block 408. -
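The data flow of method 400 can be sketched end-to-end on a small 10×10 array; the two fixed kernels and the uniform weight maps below stand in for the outputs of the first and second models (which are neural networks in the disclosure), so only the plumbing is representative:

```python
import numpy as np

rng = np.random.default_rng(3)
temps = rng.uniform(150.0, 200.0, size=(10, 10))

# Block 402 stand-in: two fixed 3x3 kernels instead of model outputs
kernels = np.stack([np.full((3, 3), 1 / 9.0),  # local averaging
                    np.eye(3) / 3.0])          # diagonal averaging

# Block 404: convolve each kernel with the temperatures ('same' output)
def conv_same(img, k):
    p = k.shape[0] // 2
    pad = np.pad(img, p)
    return np.array([[np.sum(pad[i:i + 3, j:j + 3] * k[::-1, ::-1])
                      for j in range(img.shape[1])]
                     for i in range(img.shape[0])])

inter = np.stack([conv_same(temps, k) for k in kernels], axis=-1)

# Block 406 stand-in: equal weights everywhere instead of model outputs
w = np.full(inter.shape, 0.5)

# Block 408: weight and combine into the estimated temperatures
estimate = (inter * w).sum(axis=-1)
print(estimate.shape)  # (10, 10)
```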
FIG. 5 is a flow diagram of another example method 500 to update models to generate a kernel. A processor may perform the method 500. At block 502, the method 500 may include capturing a first thermal image at a first time and a second thermal image at a second time. Capturing the first and second thermal images may include sensing infrared radiation, such as with a thermal imaging device. In an example, the thermal images may be of a top layer of a print target, and the thermal images may be captured during printing. -
Block 504 may include correcting the first and second thermal images for distortion from an imaging device to create first and second corrected thermal images. For example, a lens of the thermal imaging device may cause curvature such that straight lines being imaged appear as curved lines in the image. Accordingly, correcting the first and second thermal images may include undoing the distortion caused by the imaging device so that straight lines being imaged appear as straight lines in the corrected thermal images. - At
block 506, the method 500 may include converting the first and second corrected thermal images to create the first and second arrays of temperature values. The corrected thermal images may include an array of measured intensities of infrared radiation. The measured intensities of infrared radiation may correspond to temperatures or emissivities of the image target. Accordingly, each intensity in the array of measured intensities may be used to calculate a corresponding temperature value. -
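One simple physical model for the intensity-to-temperature conversion is to invert the Stefan-Boltzmann law, I = ε·σ·T⁴; this is an assumption for illustration (real thermal cameras use per-pixel calibration curves), and the emissivity value is likewise assumed:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def intensity_to_temperature(intensity, emissivity=0.9):
    """Invert I = emissivity * sigma * T**4 for T (in kelvin),
    treating the measured intensity as radiated power per unit area."""
    return (intensity / (emissivity * SIGMA)) ** 0.25

t = intensity_to_temperature(459.3, emissivity=1.0)
print(round(t))  # 300
```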
Block 508 may include computing an array of estimated temperature values based on the first array of temperature values. The array of estimated temperature values may be computed using first and second models. For example, the array of estimated temperature values may be computed in the manner discussed above with reference to FIG. 4. The first and second models may include neural networks. For example, each model may include a neural network, such as a convolutional neural network, a de-convolutional neural network, a super-resolution convolutional neural network, or the like. - At
block 510, the method 500 may include comparing the array of estimated temperature values to the second array of temperature values. For example, the array of estimated temperature values may be an estimate of the temperature values at the second time while the second array of temperature values holds the true temperature values at that second time. Accordingly, comparing the array of estimated temperature values to the second array of temperature values may include computing an error for each element of the arrays using a loss function. -
Block 512 may include backpropagating an error determined based on the comparing. For example, the error may be determined based on a loss function. The weights used by the neural networks of the first and second models may be updated based on a gradient of the loss function. Backpropagating the error may include finding weights that minimize the error, for example, by performing a gradient descent of the loss function. In some examples, a training data set may include a set of first and second arrays of temperature values or a set of first and second thermal images that may be used to update the weights of the neural networks. - At
block 514, the method 500 may include determining that a kernel contains substantially all zero values (e.g., a kernel used to compute the array of estimated temperature values as discussed with reference to FIG. 4). After some amount of training, a kernel may begin to approach having substantially or entirely all zero values. At block 516, the method 500 may include reducing the number of kernels included in the plurality of kernels. For example, a kernel converging towards all zero values may indicate that the plurality of kernels includes more kernels than are needed to model the physical attributes. Accordingly, the unnecessary kernels may be discarded. In some examples, training can be restarted with the updated number of kernels. In an example, the thermal imaging device 202 of FIG. 2 may perform block 502; the preprocessing engine 204 may perform blocks 504 and 506; the kernel generation engine 210, the calculation engine 220, the weighting engine 230, or the compositing engine 240 may perform block 508; and the training engine 250 may perform blocks 510, 512, 514, and 516. -
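The weight update that block 512's backpropagation performs can be illustrated with a one-weight model fit by gradient descent on a mean-squared-error loss; the learning rate and toy data are arbitrary choices for the sketch:

```python
# Toy "network": a single weight w fitting y = 2x, so the loss and its
# gradient are available in closed form.
def loss(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w, xs, ys):
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
w, lr = 0.0, 0.05
history = [loss(w, xs, ys)]
for _ in range(100):
    w -= lr * grad(w, xs, ys)          # step against the gradient
    history.append(loss(w, xs, ys))

print(history[0] > history[-1], round(w, 3))  # True 2.0
```

Repeatedly stepping the weight against the gradient shrinks the error, which is the same principle applied across all the weights of the two neural networks.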
FIG. 6 is a block diagram of an example computer-readable medium 600 including instructions that, when executed by a processor 602, cause the processor 602 to generate a kernel based on a physical state. In an example, the physical state may be represented by an array of temperature values. The computer-readable medium 600 may be a non-transitory computer-readable medium, such as a volatile computer-readable medium (e.g., volatile RAM, a processor cache, a processor register, etc.), a non-volatile computer-readable medium (e.g., a magnetic storage device, an optical storage device, a paper storage device, flash memory, read-only memory, non-volatile RAM, etc.), and/or the like. The processor 602 may be a general purpose processor or special purpose logic, such as a microprocessor (e.g., a central processing unit, a graphics processing unit, etc.), a digital signal processor, a microcontroller, an ASIC, an FPGA, a programmable array logic (PAL), a programmable logic array (PLA), a programmable logic device (PLD), etc. The computer-readable medium 600 or the processor 602 may be distributed among a plurality of computer-readable media or a plurality of processors. - The computer-
readable medium 600 may include a kernel computation module 610. As used herein, a "module" (in some examples referred to as a "software module") is a set of instructions that when executed or interpreted by a processor or stored at a processor-readable medium realizes a component or performs a method. The kernel computation module 610 may include instructions that, when executed, cause the processor 602 to compute a kernel based on an array of temperature values using a first neural network. For example, the kernel computation module 610 may cause the processor 602 to use the array of temperature values as an input to the first neural network. The kernel computation module 610 may cause the processor 602 to use the first neural network to compute the kernel as an output from the first neural network. - The computer-
readable medium 600 may include a kernel application module 620. The kernel application module 620 may cause the processor 602 to apply the kernel to the array of temperature values to produce an intermediate array of values. For example, the kernel application module 620 may cause the processor 602 to compute the intermediate array based on the kernel and the array of temperature values. Thus, the array of temperature values may be used by the kernel computation module 610 to compute the kernel that is applied to the same array of temperature values by the kernel application module 620. - The computer-
readable medium 600 may include a weight computation module 630. The weight computation module 630 may cause the processor 602 to compute a weight map based on the kernel using a second neural network. For example, the weight computation module 630 may cause the processor 602 to use the kernel as an input to the second neural network. The weight computation module 630 may cause the processor 602 to use the second neural network to compute the weight map as an output from the second neural network. The weight map may be the same size as the intermediate array and include a weight value corresponding to each value in the intermediate array. - The computer-
readable medium 600 may include a weight application module 640. The weight application module 640 may cause the processor 602 to apply the weight map to the intermediate array to produce an updated array of temperature values. For example, the weight application module 640 may cause the processor 602 to adjust the values of the intermediate array based on the weight map. In an example, when executed by the processor 602, the kernel computation module 610 may realize the kernel generation engine 110 of FIG. 1; the kernel application module 620 may realize the calculation engine 120; the weight computation module 630 may realize the weighting engine 130; and the weight application module 640 may realize the compositing engine 140. -
FIG. 7 is a block diagram of another example computer-readable medium 700 including instructions that, when executed by a processor 702, cause the processor 702 to generate a kernel based on a physical state. In an example, the physical state again may be represented by an array of temperature values. The computer-readable medium 700 may include a kernel computation module 710. The kernel computation module 710 may include instructions that, when executed, cause the processor 702 to compute a kernel based on the array of temperature values using a first neural network. For example, the kernel computation module 710 may cause the processor 702 to simulate the first neural network and use the array of temperature values as an input to the first neural network. The kernel computation module 710 may cause the processor 702 to produce a plurality of kernels based on the array of temperature values when simulating the first neural network. In some examples, the first neural network may include a convolutional neural network. - The
kernel computation module 710 may include an x-y kernel module 712 and a z kernel module 714. The x-y kernel module 712 may cause the processor 702 to receive an array of temperature values corresponding to a single layer of powder in a three-dimensional printer and to compute a kernel based on that array of temperature values. In some examples, the array of temperature values may include an array of differences between temperature values of a first layer of powder in a three-dimensional printer and temperature values of a second layer of powder in the three-dimensional printer. For example, the thermal diffusion in the three-dimensional printer in the x or y directions may be different than the thermal diffusion in the three-dimensional printer in the z direction. The z kernel module 714 may cause the processor 702 to receive an array of temperature values corresponding to differences between temperature values of the first layer and temperature values of the second layer. For example, the array of temperature values may be a difference array computed by an element-by-element subtraction between an array associated with temperature values of the first layer and an array associated with temperature values of the second layer. The z kernel module 714 may cause the processor 702 to compute a kernel based on the array of temperature values corresponding to differences. In some examples, the x-y kernel module 712 and the z kernel module 714 may each be associated with a different neural network than the other, and each may cause the processor 702 to compute kernels using its associated neural network. In an example, the z kernel module 714 may cause the processor to compute a plurality of kernels based on an array of temperature values for a single layer of powder, but the plurality of kernels may be applicable to an array of temperature values corresponding to differences in temperature values. 
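The element-by-element subtraction that forms the z-direction input can be sketched as follows; the temperature numbers are illustrative only (a fresh layer of powder spread over a warm layer is typically cooler, so the differences are negative):

```python
import numpy as np

layer1 = np.array([[180.0, 182.0], [181.0, 179.0]])  # before new powder
layer2 = np.array([[160.0, 165.0], [163.0, 162.0]])  # fresh powder on top

# Difference array: new layer minus the layer beneath it
difference = layer2 - layer1
print(difference)  # [[-20. -17.] [-18. -17.]]
```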
In examples where the z kernel module 714 causes the processor 702 to compute the plurality of kernels, the kernels may be applied (as discussed below) to the difference array, and the updated array of temperature values (discussed below) may be an updated array of differences (e.g., regardless of the input to the z kernel module 714). - The computer-
readable medium 700 may include a kernel application module 720. The kernel application module 720 may cause the processor 702 to apply the kernel to the array of temperature values to produce an intermediate array. In an example, the kernel application module 720 may include a convolution module 722. The convolution module 722 may cause the processor 702 to convolve the kernel with the array of temperature values to produce the intermediate array. In an example, the kernel may model thermal diffusion between an element of the array of temperature values and nearby elements of the array of temperature values. The kernel may be represented as an array of values that when convolved with the array of temperature values models the thermal energy transferred among the elements of the array. In examples that include a plurality of kernels, the convolution module 722 may cause the processor 702 to convolve each kernel with a copy of the array of temperature values. - The computer-
readable medium 700 may include a weight computation module 730. The weight computation module 730 may cause the processor 702 to compute a weight map based on the kernel using a second neural network. For example, the weight computation module 730 may cause the processor 702 to simulate the second neural network and use the kernel as an input to the second neural network. In some examples, the second neural network may include a de-convolutional neural network, a super-resolution convolutional neural network, or the like. In examples that include a plurality of kernels, the weight computation module 730 may cause the processor 702 to compute a plurality of weight maps based on the plurality of kernels when simulating the second neural network. - The computer-
readable medium 700 may include a weight application module 740. The weight application module 740 may cause the processor 702 to apply the weight map to the intermediate array to produce an updated array of temperature values. For example, the weight application module 740 may cause the processor 702 to multiply the weight map by the intermediate array element-by-element. The kernel may model particular physical attributes of the thermal diffusion among elements of the array of temperature values, and the weight map may reflect the relevance of those particular physical attributes to different areas of the array of temperature values. For example, the weight map may include a smaller value that reduces the value of an element of the intermediate array where the particular physical attributes are less relevant, and the weight map may include a larger value that increases or does not reduce as much the value of an element of the intermediate array where the particular physical attributes are more relevant. In examples that include a plurality of kernels, the weight application module 740 may cause the processor 702 to multiply each weight map by a corresponding intermediate array element-by-element. The weight application module 740 may cause the processor 702 to sum together the results from the multiplications element-by-element to compute the updated array of temperature values. - In some examples, the updated array of temperature values may be computed based on an array of temperature values and an array of non-temperature values. For example, the
kernel computation module 710 may cause the processor 702 to compute an additional kernel based on the array of non-temperature values. The kernel application module 720 may cause the processor 702 to apply the additional kernel to the array of temperature values or the array of non-temperature values to produce an additional intermediate array. The weight computation module 730 may cause the processor 702 to compute an additional weight map based on the additional kernel. The weight application module 740 may cause the processor 702 to composite the intermediate array with the additional intermediate array based on the weight map and the additional weight map. For example, the weight application module 740 may cause the processor 702 to multiply the additional intermediate array by the additional weight map element-by-element. The weight application module 740 may cause the processor 702 to sum together the results from the multiplications element-by-element to compute the updated array of temperature values. In examples including a plurality of kernels based on the array of temperature values, the weight application module 740 may cause the processor 702 to sum together results flowing from the plurality of kernels with results flowing from the additional kernel. - The computer-
readable medium 700 may include a training module 750. The training module 750 may cause the processor 702 to calculate an error between the updated array of temperature values and an array of true temperature values. The training module 750 may cause the processor 702 to adjust the first and second neural networks based on the error. For example, the training module 750 may cause the processor 702 to compute the error between the updated array of temperature values and the array of true temperature values using a loss function, and the training module 750 may cause the processor 702 to use a gradient of the loss function to minimize the error. The training module 750 may cause the processor 702 to update weights in the first and second neural networks based on the loss function applied to the updated array of temperature values and the array of true temperature values. Referring to FIG. 2, in an example, when executed by the processor 702, the kernel computation module 710 may realize the kernel generation engine 210; the kernel application module 720 may realize the calculation engine 220; the weight computation module 730 may realize the weighting engine 230; the weight application module 740 may realize the compositing engine 240; and the training module 750 may realize the training engine 250. - The above description is illustrative of various principles and implementations of the present disclosure. Numerous variations and modifications to the examples described herein are envisioned. Accordingly, the scope of the present application should be determined only by the following claims.
Claims (15)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2018/024269 WO2019190449A1 (en) | 2018-03-26 | 2018-03-26 | Generation of kernels based on physical states |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210056426A1 true US20210056426A1 (en) | 2021-02-25 |
Family
ID=68058509
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/966,529 Pending US20210056426A1 (en) | 2018-03-26 | 2018-03-26 | Generation of kernels based on physical states |
Country Status (2)
Country | Link |
---|---|
US (1) | US20210056426A1 (en) |
WO (1) | WO2019190449A1 (en) |
Citations (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050093907A1 (en) * | 2003-10-31 | 2005-05-05 | Carl Staelin | Ink thickness consistency in digital printing presses |
US20050215897A1 (en) * | 2004-01-20 | 2005-09-29 | Kabushiki Kaisha Toshiba | Image data processing method and apparatus for ultrasonic diagnostic apparatus, and image processing apparatus |
US20080045847A1 (en) * | 2006-06-30 | 2008-02-21 | University Of Louisville Research Foundation, Inc. | Non-contact and passive measurement of arterial pulse through thermal IR imaging, and analysis of thermal IR imagery |
US20090046890A1 (en) * | 2007-08-13 | 2009-02-19 | Pioneer Hi-Bred International, Inc. | Method and system for digital image analysis of ear traits |
US20090169102A1 (en) * | 2007-11-29 | 2009-07-02 | Chao Zhang | Multi-scale multi-camera adaptive fusion with contrast normalization |
US20130024415A1 (en) * | 2011-07-19 | 2013-01-24 | Smartsignal Corporation | Monitoring Method Using Kernel Regression Modeling With Pattern Sequences |
US20150022521A1 (en) * | 2013-07-17 | 2015-01-22 | Microsoft Corporation | Sparse GPU Voxelization for 3D Surface Reconstruction |
US20150327835A1 (en) * | 2012-07-03 | 2015-11-19 | University Of Pittsburgh - Of The Commonwealth System Of Higher Education | Method and apparatus to detect lipid contents in tissues using ultrasound |
US20160156858A1 (en) * | 2014-12-02 | 2016-06-02 | Seek Thermal, Inc. | Image adjustment based on locally flat scenes |
US20160176114A1 (en) * | 2014-12-17 | 2016-06-23 | National Applied Research Laboratories | System for online monitoring powder-based 3d printing processes and method thereof |
US20160224017A1 (en) * | 2015-01-29 | 2016-08-04 | Alcoa Inc. | Systems and methods for modelling additively manufactured bodies |
US20160358070A1 (en) * | 2015-06-04 | 2016-12-08 | Samsung Electronics Co., Ltd. | Automatic tuning of artificial neural networks |
US20160380653A1 (en) * | 2015-06-25 | 2016-12-29 | Intel Corporation | Energy efficient polynomial kernel generation in full-duplex radio communication |
US20170072467A1 (en) * | 2015-09-16 | 2017-03-16 | Applied Materials, Inc. | Fabrication of base plate, fabrication of enclosure, and fabrication of support posts in additive manufacturing |
US20170239892A1 (en) * | 2016-02-18 | 2017-08-24 | Velo3D, Inc. | Accurate three-dimensional printing |
US20170243326A1 (en) * | 2016-02-19 | 2017-08-24 | Seek Thermal, Inc. | Pixel decimation for an imaging system |
US20170297095A1 (en) * | 2016-04-15 | 2017-10-19 | U.S.A. As Represented By The Administrator Of The National Aeronautics And Space Administration | System and Method for In-Situ Characterization and Inspection of Additive Manufacturing Deposits Using Transient Infrared Thermography |
US20170329979A1 (en) * | 2016-05-10 | 2017-11-16 | International Business Machines Corporation | Protecting enterprise data at each system layer |
US20180005417A1 (en) * | 2016-07-01 | 2018-01-04 | Cubisme, Inc. | System and method for forming a super-resolution biomarker map image |
US20180089534A1 (en) * | 2016-09-27 | 2018-03-29 | Canon Kabushiki Kaisha | Cross-modiality image matching method |
US20180096229A1 (en) * | 2016-01-26 | 2018-04-05 | Università della Svizzera italiana | System and a method for learning features on geometric domains |
US20180129893A1 (en) * | 2016-11-07 | 2018-05-10 | Samsung Electronics Co., Ltd. | Convolutional neural network processing method and apparatus |
US20180169948A1 (en) * | 2015-06-12 | 2018-06-21 | Materialise N.V. | System and method for ensuring consistency in additive manufacturing using thermal imaging |
US20180186083A1 (en) * | 2013-08-29 | 2018-07-05 | Hexcel Corporation | Method For Analytically Determining SLS Bed Temperatures |
US20180307310A1 (en) * | 2015-03-21 | 2018-10-25 | Mine One Gmbh | Virtual 3d methods, systems and software |
US20180343432A1 (en) * | 2017-05-23 | 2018-11-29 | Microsoft Technology Licensing, Llc | Reducing Blur in a Depth Camera System |
US20190065260A1 (en) * | 2017-08-30 | 2019-02-28 | Intel Corporation | Technologies for kernel scale-out |
US10291268B1 (en) * | 2017-07-25 | 2019-05-14 | United States Of America As Represented By Secretary Of The Navy | Methods and systems for performing radio-frequency signal noise reduction in the absence of noise models |
US20190192880A1 (en) * | 2016-09-07 | 2019-06-27 | Elekta, Inc. | System and method for learning models of radiotherapy treatment plans to predict radiotherapy dose distributions |
US20190206070A1 (en) * | 2016-05-18 | 2019-07-04 | Auckland Uniservices Limited | Image registration method |
US20190287268A1 (en) * | 2016-07-20 | 2019-09-19 | The State Of Israel, Ministry Of Agriculture & Rural Development, Agricultural Research Organizat | Radiometric imaging |
US10425603B2 (en) * | 2015-03-06 | 2019-09-24 | Flir Systems, Inc. | Anomalous pixel detection |
US20190385325A1 (en) * | 2017-02-22 | 2019-12-19 | Korea Advanced Institute Of Science And Technology | Apparatus and method for depth estimation based on thermal image, and neural network learning method thereof |
US10585801B2 (en) * | 2012-11-26 | 2020-03-10 | Advanced Micro Devices, Inc. | Prefetch kernels on a graphics processing unit |
US10688560B1 (en) * | 2017-03-21 | 2020-06-23 | United States Of America As Represented By The Administrator Of Nasa | Method of mapping melt pattern during directed energy fabrication |
US20200233400A1 (en) * | 2017-10-14 | 2020-07-23 | Hewlett-Packard Development Company, L.P. | Processing 3d object models |
US20200242734A1 (en) * | 2017-04-07 | 2020-07-30 | Intel Corporation | Methods and systems using improved convolutional neural networks for images processing |
US20200319937A1 (en) * | 2016-06-29 | 2020-10-08 | Intel Corporation | Distributed processing qos algorithm for system performance optimization under thermal constraints |
US20210373602A1 (en) * | 2016-09-22 | 2021-12-02 | Sang Kyu MIN | Foldable virtual reality device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050268189A1 (en) * | 2004-05-28 | 2005-12-01 | Hewlett-Packard Development Company, L.P. | Device testing using multiple test kernels |
JP5279745B2 (en) * | 2010-02-24 | 2013-09-04 | 株式会社東芝 | Mask layout creation method, mask layout creation device, lithography mask manufacturing method, semiconductor device manufacturing method, and computer-executable program |
2018
- 2018-03-26 US US16/966,529 patent/US20210056426A1/en active Pending
- 2018-03-26 WO PCT/US2018/024269 patent/WO2019190449A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
Liu et al. ("Deep convolutional neural networks for thermal infrared object tracking", 2017) (Year: 2017) * |
Also Published As
Publication number | Publication date |
---|---|
WO2019190449A1 (en) | 2019-10-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Long et al. | Sparseneus: Fast generalizable neural surface reconstruction from sparse views | |
Sayed et al. | Simplerecon: 3d reconstruction without 3d convolutions | |
Zeng et al. | Deep surface normal estimation with hierarchical rgb-d fusion | |
Wang et al. | Neuris: Neural reconstruction of indoor scenes using normal priors | |
US11232632B2 (en) | Learning-based 3D model creation apparatus and method | |
Li et al. | Fast guided global interpolation for depth and motion | |
US9330442B2 (en) | Method of reducing noise in image and image processing apparatus using the same | |
Delaunoy et al. | Photometric bundle adjustment for dense multi-view 3d modeling | |
Gal et al. | Seamless montage for texturing models | |
CN110223222B (en) | Image stitching method, image stitching device, and computer-readable storage medium | |
Maiti et al. | Performance analysis of different surface reconstruction algorithms for 3D reconstruction of outdoor objects from their digital images | |
Voesenek et al. | Automated reconstruction of three-dimensional fish motion, forces, and torques | |
CN114494589A (en) | Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer-readable storage medium | |
Semerjian | A new variational framework for multiview surface reconstruction | |
Paramanand et al. | Shape from sharp and motion-blurred image pair | |
Singaraju et al. | Estimation of alpha mattes for multiple image layers | |
Hossain et al. | High dynamic range imaging of non-static scenes | |
Concha et al. | An evaluation of robust cost functions for RGB direct mapping | |
US20210056426A1 (en) | Generation of kernels based on physical states | |
Lin et al. | A-SATMVSNet: An attention-aware multi-view stereo matching network based on satellite imagery | |
CN115564639A (en) | Background blurring method and device, computer equipment and storage medium | |
Kanaeva et al. | Camera pose and focal length estimation using regularized distance constraints | |
Ito et al. | PM-MVS: PatchMatch multi-view stereo | |
US20220215528A1 (en) | Enhancing interpolated thermal images | |
Liao et al. | Dense multiview stereo based on image texture enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LAUN, HE;ZENG, JUN;REEL/FRAME:053365/0798 Effective date: 20180323 |
|
AS | Assignment |
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE ASSIGNOR NAME PREVIOUSLY RECORDED ON REEL 053365 FRAME 0798. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:LUAN, HE;ZENG, JUN;REEL/FRAME:054459/0137 Effective date: 20180323 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |