WO2021234782A1 - Image processing device, method, and program - Google Patents

Image processing device, method, and program Download PDF

Info

Publication number
WO2021234782A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
temperature
data
information
output
Prior art date
Application number
PCT/JP2020/019658
Other languages
French (fr)
Japanese (ja)
Inventor
Akihiro Chiba
Original Assignee
Nippon Telegraph and Telephone Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corporation
Priority to PCT/JP2020/019658
Publication of WO2021234782A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • Embodiments of the present invention relate to image processing devices, methods and programs.
  • The body temperature of a human is important biological information that represents the person's physical condition and activity. Not only the core body temperature, measured for example under the arm or in the ear, but also the temperature of the body surface is biological information that reflects the person's autonomic nervous activity and changes according to the person's state.
  • The temperature of the human body surface can be measured without contact, as a thermal image, by using thermography. If a person's state can be estimated from the body-surface temperature measured by thermography, a data processing system could, for example, easily grasp the state of a user and, depending on that state, prompt the user to interrupt the work or notify an administrator of the user's state.
  • It is sometimes desirable to extract a specific part of a person from a thermal image taken with thermography. For example, a scene is assumed in which the nose region, which reflects human autonomic nervous activity, is extracted from a thermal image of a human face. In that case, the position of the nose in the thermal image containing the nose must be determined.
  • Non-Patent Document 1 discloses a machine learning model that extracts the position of a target in an image.
  • Non-Patent Document 2 discloses a technique in which thermography and an RGB camera are used in combination: the position of the thermal image and the position of the RGB image containing color information are aligned in advance, and the position of the target recognized by the RGB camera is used to calculate the position of the target in the thermal image.
  • As described above, Non-Patent Document 1 discloses a machine learning model that extracts the position of a target in an image.
  • In general, a machine learning model that takes an image as input and outputs some value expects input images of the same type as the images used for training. Therefore, when an attempt is made to apply a technique such as that disclosed in Non-Patent Document 1, thermal images for training must be newly collected.
  • However, unlike RGB images, thermal images can be acquired only with a special measuring instrument, namely thermography. It is therefore difficult to collect thermal images in large quantities, and the re-training requires financial and time costs.
  • Further, as described above, in Non-Patent Document 2, thermography and an RGB camera are used together to calculate the position of the target in the thermal image.
  • However, the technique disclosed in Non-Patent Document 2 has the problems that the system is complicated and that it is difficult to eliminate the error caused by the parallax between the RGB camera and the thermography.
  • The present invention has been made in view of the above circumstances, and its object is to provide an image processing apparatus, method, and program capable of appropriately extracting a desired region in a thermal image of a living body.
  • An image processing apparatus according to one aspect of the present invention comprises: a storage unit that stores a model trained, using sets each consisting of an image containing color information, information indicating a region in the image, and output information corresponding to the region, to output, for an input image containing color information, information indicating a region in the input image and output information corresponding to the region; and an output unit that outputs, by inputting into the model an image whose luminance is higher where the temperature is lower, information indicating a region in the input image and output information corresponding to the region.
  • An image processing method according to one aspect of the present invention is a method used in an image processing apparatus, and comprises inputting an image whose luminance is higher where the temperature is lower into a model trained, using sets each consisting of an image containing color information, information indicating a region in the image, and output information corresponding to the region, to output, for an input image containing color information, information indicating a region in the input image and output information corresponding to the region, thereby outputting information indicating a region in the input image and output information corresponding to the region.
  • According to the present invention, a desired region in a thermal image of a living body can be appropriately extracted.
  • FIG. 1 is a diagram showing an application example of a data processing system according to an embodiment of the present invention.
  • FIG. 2 is a block diagram showing an example of the functional configuration of each part of the information processing apparatus.
  • FIG. 3 is a diagram showing an example of a thermal image measured by a thermal image measuring device.
  • FIG. 4 is a flow chart showing an example of a processing procedure related to a data processing system.
  • FIG. 5 is a diagram showing an example of a display screen in the information output device.
  • FIG. 6 is a diagram showing an example of a thermal image.
  • FIG. 7 is a flowchart showing an example of image processing.
  • FIG. 8 is a diagram showing an example of a nasal region in a thermal image.
  • FIG. 9 is a diagram showing an example of a template in a thermal image.
  • FIG. 10 is a diagram showing an example of facial components in a thermal image.
  • FIG. 11 is a diagram showing an example of a nasal region designated in a thermal image.
  • FIG. 12 is a flowchart showing an example of the learning process.
  • FIG. 13 is a diagram showing an example of quantized temperature data.
  • FIG. 14 is a diagram showing an example of the absolute value of the temperature difference.
  • FIG. 15 is a diagram showing an example of the processing result of the temperature data in a table format.
  • FIG. 16 is a diagram showing an example of the cumulative value of the distribution at each time in a graph format.
  • FIG. 17 is a diagram showing an example of temperature data on the body surface of a living body.
  • FIG. 18 is a diagram showing an example of the cumulative value of the temperature of the body surface of a living body.
  • FIG. 19 is a diagram showing an example of the quantized value of the temperature data in a table format.
  • FIG. 20 is a flowchart showing an example of prediction processing.
  • FIG. 21 is a block diagram showing an example of a hardware configuration of an information processing apparatus according to an embodiment of the present invention.
  • Hereinafter, an embodiment of the present invention will be described with reference to the drawings. FIG. 1 is a diagram showing an application example of a data processing system according to an embodiment of the present invention.
  • As shown in FIG. 1, the data processing system (data analysis system) according to the embodiment includes a thermal image measuring device 1, an information input device 2, an information output device 3, and an information processing device 4.
  • FIG. 2 is a block diagram showing an example of the functional configuration of each part of the information processing apparatus.
  • As shown in FIG. 2, the information processing apparatus 4 includes a storage unit 41, an image processing unit 42, a learning processing unit 43, and a prediction processing unit 44. These units may instead be implemented as separate devices: a storage device, an image processing device, a learning processing device, and a prediction processing device.
  • The storage unit 41 has a work data storage unit 41a, a temperature data storage unit 41b, and a prediction model storage unit 41c.
  • The image processing unit 42 includes a data reading unit 42a, a detection unit 42b, and a calculation unit 42c.
  • The learning processing unit 43 includes a data reading unit 43a, a data quantization unit 43b, and a model learning unit 43c.
  • The prediction processing unit 44 includes a data reading unit 44a, a prediction unit 44b, and a prediction result output unit 44c. The processing by each unit of the information processing apparatus 4 will be described later.
  • FIG. 3 is a diagram showing an example of a thermal image measured by a thermal image measuring device.
  • The thermal image measuring device 1 measures a thermal image of the user's face as shown in FIG. 3.
  • In the example shown in FIG. 3, the thermal image shows the region a of the user's face, the region b of the eyes, the region c of the nose, a rectangular region d, and the region e around the nose, each indicated by color information corresponding to luminance.
  • The rectangular region d consists of the entire nose region c together with the surrounding region e; that is, the region e around the user's nose is the difference between the rectangular region d and the whole of the nose region c.
  • The information input device 2 is, for example, a keyboard or a mouse, and receives input operations from the user.
  • The information output device 3 is, for example, a liquid crystal display, and outputs information to be presented to the user.
  • The information processing device 4 receives information from the thermal image measuring device 1 and the information input device 2, and outputs processing results to the information output device 3.
  • FIG. 4 is a flowchart showing an example of a processing procedure related to a data processing system.
  • FIG. 5 is a diagram showing an example of a display screen on the information output device. (1) As shown in FIG. 5, the information output device 3 displays, for example with spreadsheet software, a screen G1 containing a numerical calculation task for the user, who is the subject; in each column of a task row, single-digit numbers are arranged at random.
  • (2) The user enters, as work data, the sum of the adjacent digits in each column into the answer field using the information input device 2 (S1).
  • (3) The thermal image measuring device 1 measures a thermal image of the user's face while the input work of (2) above is being performed (S2).
  • FIG. 6 is a diagram showing an example of a thermal image.
  • The thermal image is captured as a sequence of frames at regular time intervals.
  • At this time, it is assumed that the correspondence between the luminance range of the measured thermal image and the temperature range on the user's body surface can be configured.
  • In this case, the temperature range represented in the thermal image can be set to span from the lowest temperature expected at body-surface locations whose temperature is predicted in advance to fall with the user's work, up to the highest temperature expected at locations whose temperature is predicted to rise or stay constant.
  • The range from the lowest to the highest body temperature that humans normally take is about 35 to 40 degrees Celsius, according to "Morimoto Taketoshi, Human Thermoregulation, Textile Product Consumption Science, 2003, Volume 44, Issue 5, p. 256-262, Online ISSN 1884-6599, Print ISSN 0037-2072, https://doi.org/10.11419/senshoshi1960.44.256, https://www.jstage.jst.go.jp/article/senshoshi1960/44/5/44_5_256/_article/-char/ja", so it is desirable to set this range as the above temperature range.
  • The temperature range may also be changed according to the measurement target.
  • Here, the current temperature difference is defined as the difference between the temperature, in the current thermal image, at a body-surface location where the temperature is expected in advance to fall and the temperature at a body-surface location where the temperature is expected to rise or stay constant.
  • The immediately preceding temperature difference is the same difference computed on a thermal image measured before the time at which the current thermal image was measured. By setting the temperature range represented by the thermal image according to the measurement target in this way, temperatures suitable for the measurement target can be represented in the thermal image.
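  • As a concrete illustration of the luminance-to-temperature correspondence described above, the following is a minimal Python sketch that maps 8-bit pixel luminance linearly onto a configured temperature range. The linear mapping and the function name are assumptions for illustration; the patent only requires that the correspondence be settable.

```python
import numpy as np

def luminance_to_temperature(frame: np.ndarray,
                             t_min: float = 35.0,
                             t_max: float = 40.0) -> np.ndarray:
    """Map 8-bit luminance values (0..255) linearly onto the configured
    body-surface temperature range [t_min, t_max]. The 35-40 degrees
    Celsius default follows the normal human range cited above."""
    luminance = frame.astype(np.float64) / 255.0
    return t_min + luminance * (t_max - t_min)

# Example with a synthetic 8-bit thermal frame.
frame = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8)
temperature = luminance_to_temperature(frame)
print(temperature.min(), temperature.max())  # stays within 35.0..40.0
```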
  • (4) The measured thermal image data, which is a set of a plurality of thermal image frames, is recorded, and the work data is stored in the work data storage unit 41a of the storage unit 41.
  • (5) The above task is one example of work that stimulates human autonomic nervous activity; it may be replaced with other work that stimulates human autonomic nervous activity, in which case the work data for the above input task is replaced with the work data for that other work.
  • For example, the human task may be replaced with viewing a video, and the work data with questionnaire results indicating the comfort or discomfort the person felt while viewing the video.
  • Alternatively, the task may be replaced with an interview, such as a consultation at a medical institution, and the work data with the physical condition or diagnosis results of the user (patient).
  • In this case, the thermal image data is not one continuous video; the intermittent thermal image data recorded at each consultation are joined in chronological order and treated as one long-term sequence of thermal image data.
  • Likewise, the work data is created as an array in which the diagnosis results from each visit are joined in chronological order.
  • The measurement target of the thermal images is not limited to humans; the approach is also effective for animals with a developed autonomic nervous system whose activity changes in response to work or external stimuli.
  • In this case, an observer watching the animal may perform the input operation for the animal's work, or the work data may be recorded automatically in the work data storage unit 41a using a sensor or the like.
  • FIG. 7 is a flowchart showing an example of the image processing. The details of S3 will be described with reference to FIG. 7.
  • (6) The data reading unit 42a of the image processing unit 42 reads the thermal image data recorded in (4) above.
  • (7) The detection unit 42b detects the user's face in the read thermal image data. The detection method to be used is specified in advance by the analyst through a setting operation on the information processing apparatus 4.
  • One such method, OpenFace, is available at https://github.com/TadasBaltrusaitis/OpenFace
  • a "nose area" representing a portion corresponding to the user's nose is designated for the image of the first frame of the detected read thermal image data. .. This designation may be realized, for example, by enclosing the nose portion in a free region from the thermal image by an input operation by an analyst, or even if the nose portion is automatically detected from the thermal image by the detection unit 42b. good.
  • FIG. 8 is a diagram showing an example of a nasal region in a thermal image. In FIG. 8, the triangular region a is the nose region.
  • FIG. 9 is a diagram showing an example of a template in a thermal image. In FIG. 9, the quadrangular region a is the template.
  • (9) The rectangular region is defined with the upper-left corner of the entire thermal image as the origin; if the upper-left coordinates of the template region are (x1, y1) and the lower-right coordinates are (x2, y2), the template is the region enclosed by the four points (x1, y1), (x1, y2), (x2, y1), and (x2, y2).
  • Here, x1 and x2 correspond to the minimum and maximum x-coordinates of the nose region designated in (8), and y1 and y2 correspond to its minimum and maximum y-coordinates.
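  • The template rectangle of (9) and the relative coordinates of (10) below follow directly from the designated nose region. The following is a minimal sketch assuming the nose region is given as a list of (x, y) pixel coordinates with the image origin at the upper-left corner; the function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def template_from_nose_region(first_frame: np.ndarray,
                              nose_points: list[tuple[int, int]]):
    """Return the template crop, its bounding box (x1, y1, x2, y2), and
    the nose-region coordinates relative to the template origin."""
    xs = [p[0] for p in nose_points]
    ys = [p[1] for p in nose_points]
    x1, x2 = min(xs), max(xs)   # min/max x of the nose region, step (9)
    y1, y2 = min(ys), max(ys)   # min/max y of the nose region, step (9)
    template = first_frame[y1:y2 + 1, x1:x2 + 1]
    # Relative coordinates (s, t) within the template, step (10).
    relative = [(x - x1, y - y1) for x, y in nose_points]
    return template, (x1, y1, x2, y2), relative
```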
  • (10) The position information of the nose region within this template rectangle is recorded in the storage unit 41.
  • The position information of the nose region is expressed as relative coordinates within the template. For example, with the upper-left corner of the template as the origin, the x- and y-coordinates of the boundary of the nose region designated in (8) are recorded in the storage unit 41 as a list.
  • In the example of FIG. 8, the three points with relative coordinates (s1, t1), (s2, t2), and (s3, t3) in the template are the position information, where s1 to s3 are the relative coordinates along the x-axis of the template and t1 to t3 are the relative coordinates along the y-axis.
  • (11) The detection unit 42b searches for the nose region in the thermal image based on the template (S3-4-1).
  • (12) The calculation unit 42c calculates the average luminance of the pixels in the nose region (the region enclosed by the coordinates in the nose-region position information) and records the result in the storage unit 41 together with the frame number of the thermal image. The region over which the average pixel luminance is calculated is limited to the nose region.
  • (13) The calculation unit 42c sets the image of the nose region obtained in S3-4-1 as the new template. (14) The calculation unit 42c performs the processing of S3-4-1 to S3-6 for all frames of the thermal image (a in FIG. 7), calculating the average luminance of the nose pixels while tracking the nose region.
  • (15) The calculation unit 42c stores "temperature data" consisting of the frame number of the thermal image and the average pixel luminance of the nose region in that frame. If the correspondence between luminance and temperature in each frame is available from the thermography settings or the like, the luminance may be converted into temperature.
  • (16) After S3-1, in S3-3, the detection unit 42b applies a negative/positive inversion to the thermal images of all frames. Generally, a thermal image is rendered so that high-temperature areas have high luminance, but the detection unit 42b converts the image so that high-temperature areas have low luminance; in other words, so that low-temperature areas have high luminance. This makes it possible to apply a model trained on RGB images to the thermal image.
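  • The negative/positive inversion of S3-3 is a single per-pixel operation. A minimal sketch, assuming 8-bit grayscale frames:

```python
import numpy as np

def invert_thermal(frame: np.ndarray) -> np.ndarray:
    """Negative/positive inversion: hot (bright) areas become dark and
    cold (dark) areas become bright, so that a face detector trained on
    ordinary photographs sees more familiar luminance statistics."""
    return 255 - frame  # assumes dtype uint8 with values in 0..255
```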
  • FIG. 10 is a diagram showing an example of facial components in a thermal image.
  • (17) As shown in FIG. 10, the detection unit 42b inputs the thermal image whose negative/positive was inverted in S3-3 into a face detector, which detects (outputs) the components of the face, including the nose, in the thermal image.
  • Here, this face detector is a model trained, using sets each consisting of an RGB image, information indicating the range of the nose in that image, and a label naming that range as teacher data, to output, given an input RGB image, the range of the nose in the image together with the label naming that range.
  • By applying a model trained on RGB images to thermal images in this way, the financial and time costs of collecting thermal images and creating a face recognition model dedicated to thermal images can be reduced.
  • Moreover, compared with using an RGB camera together with thermography, the system is simplified, the trouble of calibration is eliminated, and errors due to calibration misalignment are avoided.
  • FIG. 11 is a diagram showing an example of a nasal region designated in a thermal image.
  • (18) As shown in FIG. 11, the detection unit 42b recognizes three points in the thermal image: the top of the nose, the left end of the bottom of the nose, and the right end of the bottom of the nose. The region a enclosed by these three recognized points is designated as the nose region.
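  • The patent does not name a specific face detector. As one concrete possibility, dlib's 68-point facial landmark predictor, trained on ordinary photographs, can supply the three nose points of (18): in that scheme the top of the nose bridge is landmark 27 and the left and right ends of the base of the nose are landmarks 31 and 35. A sketch under those assumptions:

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# The standard 68-landmark model file, distributed separately by dlib.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def nose_triangle(inverted_frame: np.ndarray):
    """Return the three nose points (top, bottom-left, bottom-right)
    found in a negative/positive-inverted thermal frame, or None if
    no face is detected."""
    faces = detector(inverted_frame)
    if not faces:
        return None
    shape = predictor(inverted_frame, faces[0])
    top = (shape.part(27).x, shape.part(27).y)     # top of the nose
    left = (shape.part(31).x, shape.part(31).y)    # left end of the base
    right = (shape.part(35).x, shape.part(35).y)   # right end of the base
    return [top, left, right]
```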
  • (19) The calculation unit 42c calculates the average luminance of the pixels in the designated nose region and records the result in the storage unit 41 together with the frame number of the thermal image. The region over which the average luminance is calculated is limited to the nose region.
  • (20) The calculation unit 42c stores in the storage unit 41 "temperature data" consisting of the frame number of the thermal image and the average pixel luminance of the nose region in that frame. If the correspondence between luminance and temperature in each frame is available from the thermography settings or the like, the calculation unit 42c may convert the luminance of the thermal image into temperature.
  • (21) When temperature data is to be created for a region other than the nose, the calculation unit 42c replaces the part designated as the nose region in the above description with the corresponding region and creates temperature data corresponding to that region.
  • Examples of regions other than the nose include the forehead, whose temperature is not changed by the work, and the eyeball area or the anal and genital areas, where the temperature is considered to rise with the user's work.
  • In the above description, the nose is given as an example of a place whose temperature falls as a result of the user's work; the ears, whose temperature is likewise considered to fall as a result of the work, may be used instead of the nose to create temperature data. There is a negative correlation between the temperature change at body-surface locations where the temperature rises and the temperature change at locations where it falls.
  • (22) The calculation unit 42c creates the temperature data of the nose region, the temperature data of the region other than the nose, and the work data for each of the M experiments on the task.
  • FIG. 12 is a flowchart showing an example of the learning process.
  • (23) The data reading unit 43a of the learning processing unit 43 reads the temperature data and the work data created by the image processing unit 42 from the storage unit 41. If multiple temperature data series exist, the data reading unit 43a computes a difference for each frame and uses the result as the temperature data in the subsequent processing.
  • For example, the data reading unit 43a uses, as the temperature data, the difference between the temperature data of the nose and the temperature data of a part other than the nose.
  • When, for example, the eyeball area, the forehead, and the nose are used, the data reading unit 43a computes combined temperature data as in the following Equation (1):
  • Temperature data = (temperature of the eyeball area − temperature of the forehead) − (temperature of the nose − temperature of the forehead) ... (1)
  • The combination of temperature data used in this calculation can be specified in advance by an input operation of the analyst.
  • Body-surface locations where the temperature rises are considered to be locations where autonomic nervous activity and temperature change are positively correlated, while locations where the temperature falls are considered to be locations where autonomic nervous activity and temperature change are negatively correlated.
  • (24) The data quantization unit 43b quantizes the temperature data. The quantization is divided into normal quantization and special quantization, and is performed by the method specified by the user.
  • (25) In normal quantization, the data quantization unit 43b quantizes the temperature data of each experiment (S4-2-1). Specifically, it divides the span between the first and last frame numbers of the temperature data by N (an integer) to form windows of equal length, calculates the average of the temperature data in each window, and saves the result together with the work result in the work data.
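  • Normal quantization thus reduces each experiment's temperature series to N window averages. A minimal sketch; np.array_split is used here as one way to form N nearly equal windows, which is an implementation assumption:

```python
import numpy as np

def normal_quantization(temperature: np.ndarray, n: int) -> np.ndarray:
    """Split the temperature series into n consecutive windows and
    return the average of each window (S4-2-1)."""
    windows = np.array_split(temperature, n)
    return np.array([w.mean() for w in windows])

# Example: 15 samples quantized into N = 5 window averages.
series = np.arange(15, dtype=float)
print(normal_quantization(series, 5))  # [ 1.  4.  7. 10. 13.]
```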
  • FIG. 13 is a diagram showing an example of quantized temperature data.
  • (26) In special quantization, the data quantization unit 43b acquires representative temperature data, as shown in FIG. 13, as time-series data, that is, a set of temperature values measured at each of a plurality of timings.
  • The representative temperature data may be selected by the analyst from the plurality of measured time series, or may be, for example, the average of the M measured temperature-data time series.
  • Here, an example is described in which the quantization is performed based on the average of the processing results over the M temperature-data time series.
  • The data quantization unit 43b computes, for the representative temperature data shown in FIG. 13, the differences between temporally adjacent values, that is, the difference between the temperature value at one timing and the value at the adjacent timing, and then takes the absolute value of each difference.
  • FIG. 14 is a diagram showing an example of the absolute values of the temperature differences. For example, since the difference between the temperature at time "5" and the temperature at time "6" in FIG. 13 is "1", the absolute value of the temperature difference between times "5" and "6" in FIG. 14 is "1".
  • FIG. 15 is a diagram showing an example of the processing results for the temperature data in table form: (1) the temperature data, (2) the temperature difference, (3) the absolute value of the difference, (4) the distribution, and (5) the cumulative value.
  • The data quantization unit 43b may compute a moving average of the time series in advance and use the result in the subsequent processing, or it may compute the mean μ and standard deviation σ from several neighboring points of the time series, delete the time-series values that do not fall within μ ± 3σ, and use the result in the subsequent processing.
  • (27) The data quantization unit 43b computes the "distribution" at each time by multiplying the absolute value of the temperature difference at that time by N and dividing by the sum of the absolute values of the temperature differences over all times.
  • In the example of FIG. 15, the sum of the absolute values of the temperature differences is "8": "1" at time "6", "1" at time "7", "2" at time "8", "2" at time "9", "1" at time "10", and "1" at time "11".
  • The absolute value "1" of the temperature difference at time "6" multiplied by N = 5 gives "5"; dividing this by the sum "8" of the absolute differences gives "0.625", which is the distribution at time "6".
  • (28) The data quantization unit 43b then calculates, for each time, the "cumulative value", that is, the running sum along the time series of the distributions calculated for each time.
  • For example, the cumulative value at time "7" is "1.250", the sum of the distributions from time "1" through time "7".
  • Similarly, the cumulative value at time "15" is "5.000", the sum of the distributions from time "1" through time "15".
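  • Steps (27) and (28) can be written compactly with numpy. The sketch below reproduces the worked example (absolute differences summing to "8", N = 5); the representative series used here is a synthetic one constructed to match FIG. 13 to FIG. 15.

```python
import numpy as np

def distribution_and_cumulative(representative: np.ndarray, n: int):
    """Per-time "distribution" (|adjacent difference| * n divided by the
    sum of |differences|) and its running "cumulative value"."""
    abs_diff = np.abs(np.diff(representative))
    distribution = abs_diff * n / abs_diff.sum()
    cumulative = np.cumsum(distribution)
    return distribution, cumulative

# Series whose absolute adjacent differences at times 6..11 are
# 1, 1, 2, 2, 1, 1 (sum "8"), as in the example above.
rep = np.array([0, 0, 0, 0, 0, 1, 2, 4, 6, 7, 8, 8, 8, 8, 8], float)
dist, cum = distribution_and_cumulative(rep, 5)
print(cum)  # reaches 1.250 at time "7" and 5.000 at time "15"
```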
  • FIG. 16 is a diagram showing an example of the cumulative value of the distribution at each time in graph form.
  • (29) The data quantization unit 43b detects, as windows, the sections into which the time axis is divided for the quantization, according to the cumulative value of the distribution at each time.
  • Specifically, the data quantization unit 43b divides the range "0.000" to "5.000" of the calculated cumulative values into equal intervals: (1) the range where the cumulative value is 1 or less (A in FIG. 16), (2) the range above 1 and at most 2 (B in FIG. 16), (3) the range above 2 and at most 3, (4) the range above 3 and at most 4, and (5) the range above 4 and at most 5 (E in FIG. 16).
  • The data quantization unit 43b then classifies the times whose cumulative value is 1 or less (A in FIG. 16), here times "1" through "6" (a in FIG. 16), as window a; the time whose cumulative value is above 1 and at most 2 (B in FIG. 16), here time "7", as window b; the time whose cumulative value is above 2 and at most 3, here time "8", as window c; the time whose cumulative value is above 3 and at most 4, here time "9", as window d; and the times whose cumulative value is above 4 and at most 5 (E in FIG. 16), here times "10" through "15", as window e.
  • Because the windows are narrower where the change in the temperature data is large, the data quantization unit 43b can perform the quantization effectively.
  • (30) The data quantization unit 43b obtains, for the M temperature-data series, an average value for each of the ranges corresponding to the windows classified as described above. That is, within each range delimited by the cumulative value, the data measured at the timings whose cumulative values fall in that range are quantized together. This completes the processing of S4-2-2.
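  • Step (29) classifies the times into windows by the unit interval of the cumulative value they fall in, and step (30) averages each window over the M measured series. A sketch continuing the example above; assigning windows with np.searchsorted is an implementation choice, not something prescribed by the patent.

```python
import numpy as np

def window_quantization(all_series, cumulative, n: int) -> np.ndarray:
    """Assign each time to a window by its cumulative value (window k
    covers cumulative values in (k, k+1]) and average each window per
    series. all_series has shape (M, T); cumulative has length T - 1
    and is aligned to times 2..T, so time 1 joins the first window."""
    edges = np.arange(1, n)                        # 1, 2, ..., n-1
    labels = np.searchsorted(edges, cumulative, side="left")
    labels = np.concatenate([[0], labels])         # time 1 -> window 0
    return np.array([[series[labels == k].mean() for k in range(n)]
                     for series in np.atleast_2d(all_series)])

# Continuing the example: M = 2 series of length 15, N = 5 windows
# (windows a..e cover times 1-6, 7, 8, 9, and 10-15).
m_series = np.vstack([rep, rep + 1.0])
print(window_quantization(m_series, cum, 5))       # shape (2, 5)
```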
  • FIG. 17 is a diagram showing an example of temperature data on the body surface of a living body.
  • In another example of special quantization, the data quantization unit 43b acquires representative temperature data as shown in FIG. 17.
  • As before, the representative temperature data may be selected by the analyst from the plurality of measured time series, or may be, for example, the average of the M measured temperature-data time series.
  • Here again, an example is described in which the quantization is performed based on the average of the processing results over the M temperature-data time series.
  • FIG. 18 is a diagram showing an example of the cumulative value of the temperature of the body surface of a living body.
  • In Equation (2), t represents the array of temperature data, max(t) is the maximum value in the array, and min(t) is the minimum value in the array.
  • The data quantization unit 43b sorts the cumulative temperatures at the respective times obtained by the above conversion in ascending order, divides them into N − 2 equal-sized groups, and thereby determines the quantization ranges. In this example, N is 5.
  • FIG. 19 is a diagram showing an example of the quantized value of the temperature data in a table format.
  • In the example shown in FIG. 19, the data quantization unit 43b quantizes the temperature-data values whose cumulative value is at most 1.069 to "1", the values whose cumulative value is above 1.069 and at most 2.494 to "2", the values whose cumulative value is above 2.494 and at most 2.69 to "3", the values whose cumulative value is above 2.69 and at most 2.716 to "4", and the values whose cumulative value is above 2.716 to "5".
  • In terms of the temperature values themselves, the data quantization unit 43b quantizes temperature-data values of 2.718 °C or higher to "1", values of at least 0.368 °C and below 2.718 °C to "2", values of at least 0.030 °C and below 0.368 °C to "3", values of at least 0.002 °C and below 0.030 °C to "4", and values below 0.002 °C to "5".
  • As a result, the temperature value measured at time "1" is quantized to "1", the values measured at times "2" through "5" to "2", the values measured at times "6" through "10" to "3", and the values measured at times "11" through "15" to "4".
  • In this way, the data quantization unit 43b sorts the calculated cumulative values at the respective times by magnitude, and quantizes the temperature values at one time, or at a series of consecutive times, corresponding to the sorted cumulative values to the same level according to the magnitude of the values.
  • Compared with simply quantizing the temperature values at regular intervals, the windows used for quantization can be narrowed where the change in the temperature values is large (from time "1" to "5" in the example shown in FIG. 19) relative to the windows after time "6". The data quantization unit 43b can therefore perform the quantization effectively.
  • The data quantization unit 43b then obtains, for each range corresponding to the windows divided as described above, the average of the temperature values at the corresponding times over the M temperature-data series. That is, within each range delimited by the cumulative value, the data measured at the timings whose cumulative values fall in that range are quantized together. This completes the quantization. (31) As a result of the quantization in S4-2-1 or S4-2-2, the data quantization unit 43b finally generates an array of temperature data of size M × N and an array of work data of size M × 1.
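  • The conversion of Equation (2) is not reproduced in the text above, so the sketch below simply applies the example thresholds read off FIG. 19 to map each time's cumulative value to a quantization level; the thresholds are taken from the description above, and np.searchsorted is an implementation choice.

```python
import numpy as np

def quantize_by_cumulative(cumulative: np.ndarray,
                           thresholds=(1.069, 2.494, 2.69, 2.716)):
    """Map each time's cumulative value to a level 1..5 using the
    example thresholds of FIG. 19: values of at most 1.069 become
    level 1, ..., values above 2.716 become level 5."""
    return np.searchsorted(thresholds, cumulative, side="left") + 1

print(quantize_by_cumulative(np.array([0.5, 1.069, 2.5, 2.7, 3.0])))
# -> [1 1 3 4 5]
```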
  • (32) The model learning unit 43c learns a prediction model based on the quantized data.
  • The prediction model takes the temperature data of a living body as input and estimates and outputs the state of the living body. For example, (a) a regression model that predicts information related to the subject's work performance, (b) a classification model that predicts the presence or absence of work, and (c) a classification model that predicts deterioration of the subject's work performance may be learned.
  • (33) The model learning unit 43c stores the prediction model obtained by the learning in the prediction model storage unit 41c of the storage unit 41.
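  • Since the patent leaves the model family open, the sketch below uses scikit-learn estimators as stand-ins for the regression model of (a) and the classification model of (b); the M x N quantized temperature array is the feature matrix and the work data the target, per (31). The data here is random placeholder data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
M, N = 20, 5
temperature = rng.random((M, N))          # quantized temperature data
performance = rng.random(M)               # (a) work-performance scores
worked = rng.integers(0, 2, size=M)       # (b) work present / absent

# (a) Regression model predicting information on work performance.
regressor = LinearRegression().fit(temperature, performance)

# (b) Classification model predicting the presence or absence of work.
classifier = LogisticRegression().fit(temperature, worked)

# Prediction step (35): apply the trained models to new temperature data.
new_temperature = rng.random((1, N))
print(regressor.predict(new_temperature), classifier.predict(new_temperature))
```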
  • FIG. 20 is a flowchart showing an example of the prediction processing. The details of S5 will be described with reference to FIG. 20.
  • (34) The data reading unit 44a of the prediction processing unit 44 reads newly measured temperature data and the work data relating to a new task performed by the subject.
  • (35) The prediction unit 44b predicts information related to the subject's work performance from the read temperature data and work data.
  • For example, the prediction unit 44b makes predictions as described in (a) to (c) below.
  • (a) The prediction unit 44b predicts information related to the subject's work performance by inputting the new temperature data into the trained regression model of (a) in (32) above.
  • (b) The prediction unit 44b predicts the presence or absence of work by the subject by inputting the new temperature data into the trained classification model of (b) in (32) above.
  • (c) The prediction unit 44b predicts deterioration of the subject's work performance by inputting the new temperature data into the trained classification model of (c) in (32) above.
  • (36) The prediction result output unit 44c outputs the result predicted by the prediction unit 44b to the information output device 3 or the like, and presents the result to the user.
  • (37) In the above description, the processing of images of human faces has been described, but this embodiment can also be applied to animals whose body-surface temperature changes, such as wild boars, pigs, and dogs.
  • As described above, in one embodiment of the present invention, a model is trained, using sets each consisting of an image containing color information, information indicating a region in that image, and output information corresponding to that region, to output, for an input image containing color information, information indicating a region in the input image and output information corresponding to that region.
  • By inputting into this model an image whose luminance is higher where the temperature is lower, information indicating a region in the input image and output information corresponding to that region are output. This makes it possible to appropriately extract a desired region from a thermal image of a living body while reducing financial and time costs.
  • FIG. 21 is a block diagram showing an example of the hardware configuration of the information processing apparatus according to the embodiment of the present invention.
  • The information processing apparatus 4 according to the above embodiment is configured by, for example, a server computer or a personal computer, and has a hardware processor 111A such as a CPU. A program memory 111B, a data memory 112, an input/output interface 113, and a communication interface 114 are connected to the hardware processor 111A via a bus 120. The same applies to the thermal image measuring device 1 shown in FIG. 1.
  • The communication interface 114 includes, for example, one or more wireless communication interface units, and enables information to be transmitted to and received from the communication network NW.
  • As the wireless interface, an interface conforming to a low-power wireless data communication standard such as a wireless LAN (Local Area Network) is used.
  • An input device 20 for an operator and an output device 30, both attached to the information processing apparatus 4, are connected to the input/output interface 113.
  • The input device 20 corresponds to the information input device 2 shown in FIG. 1, and the output device 30 corresponds to the information output device 3 shown in FIG. 1.
  • The input/output interface 113 performs processing to capture operation data input by an operator through the input device 20, such as a keyboard, touch panel, touchpad, or mouse, and to output display data to the output device 30, which includes a display device using liquid crystal, organic EL (Electro Luminescence), or the like.
  • Devices built into the information processing apparatus 4 may be used as the input device 20 and the output device 30, and the input and output devices of another information terminal capable of communicating with the information processing apparatus 4 via the network NW may also be used.
  • The program memory 111B is used, as a non-transitory tangible storage medium, in combination with a non-volatile memory that can be written to and read from at any time, such as an HDD (Hard Disk Drive) or SSD (Solid State Drive), and a non-volatile memory such as a ROM (Read Only Memory), and stores the programs necessary for executing various control processes according to the embodiment.
  • The data memory 112 is used, as a tangible storage medium, in combination with, for example, the above non-volatile memory and a volatile memory such as a RAM (Random Access Memory), and is used to store various data acquired and created in the course of processing.
  • The information processing apparatus 4 can be configured as a data processing device having, as processing function units implemented by software, the storage unit 41, the image processing unit 42, the learning processing unit 43, and the prediction processing unit 44 shown in FIG. 2.
  • The storage unit 41 may be configured using the data memory 112 shown in FIG. 21.
  • However, these storage areas are not essential components of the information processing apparatus 4; they may be, for example, areas provided in an external storage medium such as a USB (Universal Serial Bus) memory, or in the storage device of a database server located in the cloud.
  • The processing function units of the image processing unit 42, the learning processing unit 43, and the prediction processing unit 44 can all be realized by having the hardware processor 111A read and execute the programs stored in the program memory 111B. Part or all of these processing function units may instead be realized in various other forms, including integrated circuits such as ASICs (Application Specific Integrated Circuits) or FPGAs (Field-Programmable Gate Arrays).
  • The methods described in each embodiment can be stored, as programs (software means) executable by a computer, in recording media such as magnetic disks (floppy (registered trademark) disks, hard disks, etc.), optical discs (CD-ROM, DVD, MO, etc.), or semiconductor memory (ROM, RAM, flash memory, etc.), and can also be transmitted and distributed through communication media.
  • The programs stored on the medium side include a setting program for configuring, in the computer, the software means (including not only execution programs but also tables and data structures) to be executed by the computer.
  • A computer that realizes this apparatus reads a program recorded on a recording medium, constructs the software means using the setting program where applicable, and executes the above-described processing with its operation controlled by the software means.
  • The recording medium referred to in this specification is not limited to one for distribution, and includes storage media such as magnetic disks and semiconductor memories provided inside a computer or in devices connected via a network.
  • The present invention is not limited to the above embodiment and can be modified in various ways at the implementation stage without departing from its gist.
  • The embodiments may also be combined as appropriate, in which case the combined effects are obtained.
  • Furthermore, the above embodiment includes inventions at various stages, and various inventions can be extracted by appropriately combining the disclosed constituent elements. For example, even if some constituent elements are removed from all the constituent elements shown in the embodiment, as long as the problem can be solved and the effects obtained, the configuration with those constituent elements removed can be extracted as an invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

An image processing device according to one embodiment of the present invention comprises: a storage unit that stores a model trained, using sets each composed of an image containing color information, information indicating an area in said image, and output information corresponding to said area, to output, for an input image containing color information, information indicating an area in said input image and output information corresponding to said area; and an output unit that, by inputting into the model an image in which luminance increases as temperature decreases, outputs information indicating an area in the inputted image and output information corresponding to said area.

Description

Image processing device, method, and program
Embodiments of the present invention relate to image processing devices, methods, and programs.
The body temperature of a human is important biological information that represents the person's physical condition and activity.
Not only the core body temperature, measured for example under the arm or in the ear, but also the temperature of the body surface is biological information that reflects the person's autonomic nervous activity and changes according to the person's state.
The temperature of the human body surface can be measured without contact, as a thermal image, by using thermography.
If a person's state can be estimated from the body-surface temperature measured by thermography, a data processing system could, for example, easily grasp the state of a user and, depending on that state, prompt the user to interrupt the work or notify an administrator of the user's state.
It is sometimes desirable to extract a specific part of a person from a thermal image taken with thermography.
For example, a scene is assumed in which the nose region, which reflects human autonomic nervous activity, is extracted from a thermal image of a human face.
In that case, the position of the nose in the thermal image containing the nose must be determined.
For example, Non-Patent Document 1 discloses a machine learning model that extracts the position of a target in an image.
Further, Non-Patent Document 2 discloses a technique in which thermography and an RGB camera are used in combination: the position of the thermal image and the position of the RGB image containing color information are aligned in advance, and the position of the target recognized by the RGB camera is used to calculate the position of the target in the thermal image.
As described above, Non-Patent Document 1 discloses a machine learning model that extracts the position of a target in an image.
In general, a machine learning model that takes an image as input and outputs some value expects input images of the same type as the images used for training.
Therefore, when an attempt is made to apply a technique such as that disclosed in Non-Patent Document 1, thermal images for training must be newly collected.
However, unlike RGB images, thermal images can be acquired only with a special measuring instrument, namely thermography. It is therefore difficult to collect thermal images in large quantities, and the re-training requires financial and time costs.
Further, as described above, in Non-Patent Document 2, thermography and an RGB camera are used together to calculate the position of the target in the thermal image.
However, the technique disclosed in Non-Patent Document 2 has the problems that the system is complicated and that it is difficult to eliminate the error caused by the parallax between the RGB camera and the thermography.
The present invention has been made in view of the above circumstances, and its object is to provide an image processing apparatus, method, and program capable of appropriately extracting a desired region in a thermal image of a living body.
An image processing apparatus according to one aspect of the present invention comprises: a storage unit that stores a model trained, using sets each consisting of an image containing color information, information indicating a region in the image, and output information corresponding to the region, to output, for an input image containing color information, information indicating a region in the input image and output information corresponding to the region; and an output unit that outputs, by inputting into the model an image whose luminance is higher where the temperature is lower, information indicating a region in the input image and output information corresponding to the region.
An image processing method according to one aspect of the present invention is a method used in an image processing apparatus, and comprises inputting an image whose luminance is higher where the temperature is lower into a model trained, using sets each consisting of an image containing color information, information indicating a region in the image, and output information corresponding to the region, to output, for an input image containing color information, information indicating a region in the input image and output information corresponding to the region, thereby outputting information indicating a region in the input image and output information corresponding to the region.
According to the present invention, a desired region in a thermal image of a living body can be appropriately extracted.
FIG. 1 is a diagram showing an application example of a data processing system according to an embodiment of the present invention.
FIG. 2 is a block diagram showing an example of the functional configuration of each part of the information processing apparatus.
FIG. 3 is a diagram showing an example of a thermal image measured by a thermal image measuring device.
FIG. 4 is a flowchart showing an example of a processing procedure related to the data processing system.
FIG. 5 is a diagram showing an example of a display screen on the information output device.
FIG. 6 is a diagram showing an example of a thermal image.
FIG. 7 is a flowchart showing an example of the image processing.
FIG. 8 is a diagram showing an example of a nose region in a thermal image.
FIG. 9 is a diagram showing an example of a template in a thermal image.
FIG. 10 is a diagram showing an example of facial components in a thermal image.
FIG. 11 is a diagram showing an example of a nose region designated in a thermal image.
FIG. 12 is a flowchart showing an example of the learning process.
FIG. 13 is a diagram showing an example of quantized temperature data.
FIG. 14 is a diagram showing an example of the absolute value of the temperature difference.
FIG. 15 is a diagram showing an example of the processing results for the temperature data in table form.
FIG. 16 is a diagram showing an example of the cumulative value of the distribution at each time in graph form.
FIG. 17 is a diagram showing an example of temperature data on the body surface of a living body.
FIG. 18 is a diagram showing an example of the cumulative value of the temperature of the body surface of a living body.
FIG. 19 is a diagram showing an example of the quantized values of the temperature data in table form.
FIG. 20 is a flowchart showing an example of the prediction processing.
FIG. 21 is a block diagram showing an example of the hardware configuration of an information processing apparatus according to an embodiment of the present invention.
Hereinafter, an embodiment according to the present invention will be described with reference to the drawings.
FIG. 1 is a diagram showing an application example of a data processing system according to an embodiment of the present invention.
As shown in FIG. 1, the data processing system (data analysis system) according to the embodiment of the present invention includes a thermal image measuring device 1, an information input device 2, an information output device 3, and an information processing device 4.
FIG. 2 is a block diagram showing an example of the functional configuration of each part of the information processing apparatus.
As shown in FIG. 2, the information processing apparatus 4 includes a storage unit 41, an image processing unit 42, a learning processing unit 43, and a prediction processing unit 44. These units may instead be implemented as separate devices: a storage device, an image processing device, a learning processing device, and a prediction processing device.
The storage unit 41 has a work data storage unit 41a, a temperature data storage unit 41b, and a prediction model (model) storage unit 41c.
The image processing unit 42 includes a data reading unit 42a, a detection unit 42b, and a calculation unit 42c.
The learning processing unit 43 includes a data reading unit 43a, a data quantization unit 43b, and a model learning unit 43c.
The prediction processing unit 44 includes a data reading unit 44a, a prediction unit 44b, and a prediction result output unit 44c. The processing by each part of the information processing apparatus 4 will be described later.
FIG. 3 is a diagram showing an example of a thermal image measured by a thermal image measuring device.
The thermal image measuring device 1 measures a thermal image of the user's face as shown in FIG. 3. In the example shown in FIG. 3, the thermal image shows the region a of the user's face, the region b of the eyes, the region c of the nose, a rectangular region d, and the region e around the nose, each indicated by color information corresponding to luminance. The rectangular region d consists of the entire nose region c together with the surrounding region e; that is, the region e around the user's nose is the difference between the rectangular region d and the whole of the nose region c.
The information input device 2 is, for example, a keyboard or a mouse, and receives input operations from the user.
The information output device 3 is, for example, a liquid crystal display or the like, and outputs information presented to the user.
The information processing device 4 inputs information from the thermal image measuring device 1 and the information input device 2, and outputs the processing result to the information output device 3.
Next, the processing by each part will be described with the headings (1) to (37).
FIG. 4 is a flowchart showing an example of a processing procedure related to a data processing system. FIG. 5 is a diagram showing an example of a display screen in the information output device.
(1) As shown in FIG. 5, the information output device 3 displays a screen G1 of a numerical calculation problem for the user, who is a subject, generated by, for example, spreadsheet software; in each column of the problem row, one-digit numbers are randomly arranged and displayed.
(2) Using the information input device 2, the user enters into the answer column, as work data, the sum of the numbers adjacent to each other in each column of the problem row (S1).
For example, using the information input device 2, the user enters into the answer column of question No. "1" on the screen G1 shown in FIG. 5 the value "5", which is the sum of the "2" displayed in the question column of question No. "1" and the "3" displayed in the question column of question No. "2".
Similarly, the user enters into the answer column of question No. "2" shown in FIG. 5 the value "7", which is the sum of the "3" displayed in the question column of question No. "2" and the "4" displayed in the question column of question No. "3".
(3) The thermal image measuring device 1 measures a thermal image of the user's face during the input operation according to (2) above (S2). FIG. 6 is a diagram showing an example of a thermal image.
The thermal image is measured as a thermal image of a plurality of frames at regular time intervals.
At this time, it is assumed that the correspondence between the luminance range of the measured thermal image and the temperature range on the user's body surface can be set. In this case, the range from the lowest temperature that can be taken at a location on the user's body surface where the temperature is expected in advance to decrease due to the user's work, up to the highest temperature that can be taken at a location on the user's body surface where the temperature is expected in advance to rise (increase) or remain constant due to that work, can be set as the temperature range represented in the thermal image.
For example, when the object to be measured is a human, according to "Taketoshi Morimoto, Human Thermoregulation, Textile Product Consumption Science, 2003, Vol. 44, No. 5, p. 256-262, published 2010/09/30, Online ISSN 1884-6599, Print ISSN 0037-2072, https://doi.org/10.11419/senshoshi1960.44.256, https://www.jstage.jst.go.jp/article/senshoshi1960/44/5/44_5_256/_article/-char/ja", the range of temperatures a human body can take under normal conditions is approximately 35 to 40 degrees Celsius, so it is desirable to set this range as the above temperature range.
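As a minimal sketch of how such a luminance-to-temperature correspondence could be applied, assuming 8-bit luminance, a linear mapping, and the 35 to 40 degrees Celsius range above (the function name and the use of NumPy are illustrative, not part of the original disclosure):

    import numpy as np

    def luminance_to_temperature(frame, temp_min=35.0, temp_max=40.0, lum_max=255):
        # Map 8-bit luminance linearly onto the configured temperature range,
        # assuming luminance 0 corresponds to temp_min and lum_max to temp_max.
        frame = np.asarray(frame, dtype=np.float64)
        return temp_min + (frame / lum_max) * (temp_max - temp_min)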
If the correspondence between the luminance range of the thermal image and the temperature range on the user's body surface can be changed dynamically, the temperature range may be changed according to the magnitude of the difference between the current temperature difference and the immediately preceding temperature difference, both described below.
The current temperature difference is the difference, in the current thermal image, between the temperature at a location on the user's body surface where the temperature is expected in advance to decrease and the temperature at a location where the temperature is expected to rise or remain constant.
The immediately preceding temperature difference is the same difference computed on a thermal image measured before the time at which the current thermal image was measured: the difference between the temperature at a location on the user's body surface where the temperature is expected in advance to decrease and the temperature at a location where the temperature is expected to rise or remain constant.
In this way, by setting the temperature range represented by the thermal image according to the measurement target, a temperature range suited to the measurement target is represented in the thermal image.
(4) After the work of (2) above has been carried out for several minutes to several tens of minutes, the "thermal image data", which is a set of a plurality of frames of the measured thermal images, is stored in the storage unit 41 of the information processing apparatus 4, and the "work data", which is the result input by the user as described above, is stored in the work data storage unit 41a of the storage unit 41.
The above work is an example of a task given to a human to stimulate human autonomic nervous activity; by replacing it with another task that stimulates human autonomic nervous activity, the work data of the input task described above may be replaced with work data of that other task.
For example, the human task described above may be replaced with the task of watching a video, and the work data may be replaced with questionnaire responses expressing the comfort or discomfort the human felt when watching the video.
The task may also be replaced with an interview such as a consultation at a medical institution, and the work data replaced with the physical condition or diagnosis results of the user (patient). In this case, the thermal image data is not a continuous video; intermittent thermal image data recorded at each consultation is concatenated in chronological order and treated as one long thermal image data. Similarly, the work data is created as an array in which the results diagnosed at each consultation are concatenated in chronological order.
Furthermore, the measurement target of the thermal images is not limited to humans; any animal whose autonomic nervous activity is developed and changes in response to work or external stimuli is a valid measurement target.
If the animal itself cannot perform the input operation for the work data, an observer watching the animal may perform the input operation on its behalf, or a sensor or the like may be used so that the work data is automatically recorded in the work data storage unit 41a.
(Image processing)
(5) Subsequently, the image processing unit 42 performs image processing that extracts the temperature change of the portion corresponding to the nose in the thermal image of the user (S3). FIG. 7 is a flowchart showing an example of the image processing. The details of S3 will be described with reference to FIG. 7.
(6) In S3-1, the data reading unit 42a of the image processing unit 42 reads the thermal image data recorded in (4) above.
(7) The detection unit 42b detects the user's face in the read thermal image data. Two methods can be used to detect the face: a method using a template, and a method using a face detector trained on RGB images, such as OpenFace. The method to be used is designated in advance by the analyst's setting operation on the information processing apparatus 4. OpenFace is disclosed, for example, below.
(OpenFace) https://github.com/TadasBaltrusaitis/OpenFace
(Method using template images)
(8) After S3-1, in S3-2, a "nasal region" representing the portion corresponding to the user's nose is designated in the first-frame image of the read thermal image data.
This designation may be realized, for example, by an analyst's input operation enclosing the nose portion of the thermal image in a free-form region, or the nose portion may be detected automatically from the thermal image by the detection unit 42b. FIG. 8 is a diagram showing an example of the nasal region in a thermal image. In FIG. 8, the triangular region a is the nasal region.
(9) The rectangular area including the nose area is registered in the storage unit 41 as a “template”. FIG. 9 is a diagram showing an example of a template in a thermal image. In FIG. 9, the quadrangular area a is a template.
In the example shown in FIG. 9, with the upper left corner of the entire thermal image as the origin, the upper left coordinates of the template region as (x1, y1), and the lower right coordinates of that region as (x2, y2), the rectangular region is the region enclosed by the four points (x1, y1), (x1, y2), (x2, y1), and (x2, y2).
x1 and x2 correspond to the minimum and maximum x-coordinates of the nasal region designated in (8), respectively, and y1 and y2 correspond to the minimum and maximum y-coordinates of that region, respectively.
(10) The position information of the nose area in the rectangular area of this template is recorded in the storage unit 41.
The location information of the nasal area represents the relative coordinates in the template. For example, with the upper left corner of the template as the origin, the coordinates of the boundary of the nose region specified in (8) in the x-axis direction and the coordinates in the y-axis direction are recorded in the storage unit 41 as a list. In FIG. 8, three points consisting of relative coordinates (s1, t1), (s2, t2), and (s3, t3) in the template are position information. s1 to s3 correspond to the relative coordinates in the x-axis direction in the template, and t1 to t3 correspond to the relative coordinates in the y-axis direction in the template.
(11) In S3-4-1, the detection unit 42b searches for the nasal region in the thermal image based on the template.
(12) In S3-5-1, the calculation unit 42c calculates the average luminance of the pixels within the nasal region (the region enclosed by the coordinates of the nasal region position information), and records the calculation result in the storage unit 41 together with the frame number of the thermal image.
As described above, in the present embodiment, the target area for calculating the average value of the brightness of the pixels is limited to the nasal area.
(13) In S3-6, the calculation unit 42c sets the image of the nose region obtained in S3-4-1 as a new template.
(14) The calculation unit 42c performs the processing of S3-4-1 through S3-6 on every frame of the thermal image (a in FIG. 7), calculating the average luminance of the nasal-region pixels while tracking the nasal region.
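A minimal sketch of this tracking loop, assuming OpenCV-style template matching on 8-bit grayscale frames; the function, its arguments, and the mask-based averaging are illustrative assumptions, not taken from the original disclosure:

    import cv2
    import numpy as np

    def track_nose_region(frames, template, nose_coords):
        # frames:      iterable of 8-bit grayscale thermal frames
        # template:    initial rectangular template containing the nose region
        # nose_coords: (k, 2) array of nose-boundary coordinates relative to the
        #              template's upper-left corner, as registered in (10)
        results = []
        for frame_no, frame in enumerate(frames):
            # S3-4-1: search the frame for the region most similar to the template.
            scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, (x, y) = cv2.minMaxLoc(scores)
            h, w = template.shape
            # S3-5-1: average the luminance inside the nose polygon of the match.
            mask = np.zeros((h, w), dtype=np.uint8)
            cv2.fillPoly(mask, [nose_coords.astype(np.int32)], 255)
            patch = frame[y:y + h, x:x + w]
            results.append((frame_no, patch[mask > 0].mean()))
            # S3-6: the matched patch becomes the template for the next frame.
            template = patch.copy()
        return results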
(15) In S3-7, the calculation unit 42c stores "temperature data" including the frame numbers of the thermal image and the average luminance of the nasal-region pixels in each frame.
If the correspondence between the brightness and the temperature in each frame is obtained by the thermography setting or the like, the brightness may be converted into the temperature.
(Method using face detector)
(16) After S3-1, in S3-3, the detection unit 42b inverts the negative/positive of the thermal images of all frames.
Generally, a thermal image is set so that the luminance of high-temperature locations is high, but the detection unit 42b converts the thermal image so that the luminance of high-temperature locations becomes low; in other words, so that the luminance of low-temperature locations becomes high. This makes it possible to apply a model trained on RGB images to the thermal image.
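For 8-bit frames this inversion is a single operation; a sketch assuming NumPy arrays (illustrative only):

    import numpy as np

    def invert_negative_positive(frame: np.ndarray) -> np.ndarray:
        # Invert an 8-bit thermal frame so that colder areas become brighter.
        return 255 - frame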
(17) FIG. 10 is a diagram showing an example of facial components in a thermal image. In S3-4-2, the detection unit 42b inputs the thermal image whose negative/positive was inverted in S3-3 into the face detector, thereby detecting (outputting) the components of the face, including the nose, in the thermal image, as shown in FIG. 10.
This face detector is a model trained, with sets each consisting of an RGB image, information indicating the range of the nose within that RGB image, and a label naming that range as teacher data, to output the range of the nose in an image together with the label naming that range when an RGB image is input.
In the present embodiment, applying a model trained on RGB images to thermal images reduces the financial and time costs of collecting thermal images and creating a face recognition model dedicated to thermal images.
Such application also removes the need to use thermography and an RGB camera in combination, which simplifies the system, eliminates the labor of calibration, and eliminates errors caused by calibration misalignment.
(18) FIG. 11 is a diagram showing an example of the nasal region designated in a thermal image. In S3-5-2, the detection unit 42b designates as the nasal region the area a enclosed by the three points recognized in (17) as the top of the nose, the left end of the bottom of the nose, and the right end of the bottom of the nose in the thermal image, as shown in FIG. 11.
The calculation unit 42c calculates the average luminance of the pixels within the designated nasal region and records the calculation result in the storage unit 41 together with the frame number of the thermal image.
Here too, the region over which the average pixel luminance is calculated is limited to the nasal region.
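A minimal sketch of averaging the luminance inside the triangle spanned by the three detected points, assuming OpenCV and NumPy (the function and point representation are illustrative):

    import cv2
    import numpy as np

    def mean_luminance_in_triangle(frame, p_top, p_left, p_right):
        # Average the pixel luminance inside the triangle formed by the top of
        # the nose and the left and right ends of the bottom of the nose.
        mask = np.zeros(frame.shape[:2], dtype=np.uint8)
        pts = np.array([p_top, p_left, p_right], dtype=np.int32)
        cv2.fillPoly(mask, [pts], 255)
        return frame[mask > 0].mean()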
(19) By executing the processing of S3-4-2 through S3-5-2 on every frame of the thermal image, the average luminance of the nasal-region pixels is calculated while the nasal region in the thermal image is tracked in chronological order.
(20) In S3-7, the calculation unit 42c stores in the storage unit 41 "temperature data" consisting of the frame numbers of the thermal image and the average luminance of the nasal-region pixels in each frame.
If the correspondence between the brightness and the temperature in each frame of the thermal image is obtained by the thermography setting or the like, the calculation unit 42c may convert the brightness of the thermal image into the temperature.
(21) Similarly, for regions other than the nose in the thermal image, the calculation unit 42c creates temperature data corresponding to each such region by substituting that region for the nasal region in the description above.
(22) Candidates for regions other than the nose include the user's forehead, whose temperature is considered not to change with the work, and the eyeball region or the anal/genital region, whose temperature is considered to rise with the work. Although the nose was given above as an example of a location whose temperature falls as a result of the user's work, the ears, whose temperature is likewise considered to fall as a result of the work, may be used instead of the nose to create the temperature data. The temperature change at a location on the body surface where the temperature rises and the temperature change at a location where the temperature falls have a negative correlation.
(23) The calculation unit 42c creates temperature data in the nasal region, temperature data in the region other than the nasal region, and work data for each of the M experiments related to the work.
(Learning process)
(24) After the image processing by the image processing unit 42 is completed, the learning processing unit 43 performs the learning process. FIG. 12 is a flowchart showing an example of the learning process. The details of S4 will be described with reference to FIG. 12.
(25) In S4-1, the data reading unit 43a of the learning processing unit 43 reads the temperature data and the working data created by the image processing unit 42 from the storage unit 41. At this time, if a plurality of temperature data exist, the data reading unit 43a calculates the difference for each frame, and uses this calculation result as the temperature data for the subsequent processing.
That is, when temperature data of the nose region and temperature data of a non-nose region are each read, the data reading unit 43a uses the difference between the two as the temperature data.
Further, when three or more temperature data are input, for example for the nose, forehead, and eyeball regions, the data reading unit 43a calculates temperature data that combines them, as in equation (1) below.
Temperature data = ((temperature of the eyeball region - temperature of the forehead region) - (temperature of the nose region - temperature of the forehead region)) ... Equation (1)
The combination of temperature data in this calculation can be specified in advance by an input operation by the analyst.
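Equation (1) amounts to the following one-liner on per-frame region temperatures (a sketch; the function name is illustrative):

    def combined_temperature(eye, forehead, nose):
        # Equation (1): contrast the rising region (eye) against the falling
        # region (nose), each referenced to the stable forehead temperature.
        return (eye - forehead) - (nose - forehead)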
As a result, even when the temperature change at, for example, the nose portion of the user's face is small, the difference between the temperature data at a location where the temperature rises and the temperature data at a location where the temperature falls is obtained, so temperature changes are easy to detect even when the resolution of the image processing by the image processing unit 42 is low.
(26) The reason why the temperature of the body surface rises or falls due to human work is that the blood flow in humans increases or decreases as the autonomic nervous activity of humans increases.
That is, a location on the body surface where the temperature rises is considered to be a location where autonomic nervous activity and temperature change have a positive correlation, and a location where the temperature falls is considered to be a location where they have a negative correlation.
(27) After S4-1 is performed, the data quantization unit 43b quantizes the temperature data. The quantization is divided into normal quantization and special quantization, and whichever of the two methods is designated by the user is applied.
First, ordinary quantization will be described.
(28) After S4-1, in S4-2-1, the data quantization unit 43b quantizes the temperature data for each experiment. The data quantization unit 43b calculates the average value of the temperature data over each block of frames obtained by dividing the difference between the first and last frame numbers of the temperature data by N (an integer), and saves the calculation results together with the work result in the work data.
N is one of the parameters that can be specified by the analyst. For example, when a thermal image measured over 10 seconds is quantized and analyzed in 1-second steps, N = 10; the larger the value of N, the finer the granularity of the quantization.
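A minimal sketch of this normal quantization, assuming the temperature data is a one-dimensional NumPy array over the frames (illustrative only):

    import numpy as np

    def quantize_uniform(temperature, n):
        # S4-2-1: split the sequence into n equal-length windows and average each.
        windows = np.array_split(np.asarray(temperature, dtype=np.float64), n)
        return np.array([w.mean() for w in windows])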
Next, a special quantization will be described.
(29) In S4-2-2, the data quantization unit 43b quantizes the temperature data by the following method. FIG. 13 is a diagram showing an example of quantized temperature data.
(29-1) First, the data quantization unit 43b acquires representative temperature data as shown in FIG. 13 as time-series data which is a set of temperature data measured at each of a plurality of timings.
The representative temperature data may be selected by the analyst from among the measured time-series temperature data, or may be, for example, the average of the M measured time-series temperature data. The following describes an example in which quantization is performed based on the average of the processing results for each of the M time-series temperature data.
(29-2) Next, for the representative temperature data values shown in FIG. 13, the data quantization unit 43b calculates the difference between temperatures adjacent in the time series, that is, the difference between the temperature data value at one timing and the value at the adjacent timing, and then takes the absolute value of this difference. FIG. 14 is a diagram showing an example of the absolute values of the temperature differences. For example, the difference between the temperature at time "5" and the temperature at time "6" shown in FIG. 13 is "1", so, as shown in FIG. 14, the absolute value of the temperature difference between times "5" and "6" is "1".
FIG. 15 is a diagram showing an example of the processing result of the temperature data in a table format.
The example shown in FIG. 15 lists, at each time from "1" to "15" in steps of "1": (1) the temperature data value shown in FIG. 13, (2) the temperature difference shown in FIG. 14, (3) the absolute value of the difference, (4) the distribution, and (5) the cumulative value.
To cope with noise momentarily mixed into the temperature data, the data quantization unit 43b may first compute a moving average of the time-series data and use the result in the subsequent processing, or it may compute the mean μ and standard deviation σ from several neighboring points of the time-series data, delete time-series data falling outside the range μ ± 3σ, and use the result in the subsequent processing.
(29-3) Then, the data quantization unit 43b calculates, as the "distribution" at each time shown in FIG. 15, the value obtained by multiplying each element of the "absolute value of the temperature difference" at that time by N and dividing by the sum of the absolute values of the temperature differences over all times. In this example, the distribution is calculated with N = 5.
For example, in the example shown in FIG. 15, the sum of the absolute values of the temperature differences over all times is "8": the absolute value "1" at time "6", plus "1" at time "7", "2" at time "8", "2" at time "9", "1" at time "10", and "1" at time "11".
The absolute value "1" of the temperature difference at time "6" multiplied by N is "5".
Dividing this "5" by "8", the sum of the absolute values of the temperature differences over all times, gives "0.625", which is the distribution at time "6".
(29-4) Further, the data quantization unit 43b calculates the “cumulative value”, which is the sum of the “distributions” calculated for each time along the time series, for each time.
In the example shown in FIG. 15, the cumulative value at time "7" is "1.250", the sum of the distributions from time "1" to time "7". Likewise, the cumulative value at time "15" is "5.000", the sum of the distributions from time "1" to time "15".
(29-5) FIG. 16 is a graph showing an example of the cumulative value of the distribution at each time in a graph format.
The data quantization unit 43b detects a section formed by dividing the time related to the quantization as a window according to the cumulative value of the distribution at each time.
In the example shown in FIG. 16, the data quantization unit 43b divides the range "0.000" to "5.000" of the calculated cumulative value into equal intervals.
Here, the divided ranges are: (1) the range where the cumulative value is 1 or less (A in FIG. 16), (2) the range where it exceeds 1 and is 2 or less (B in FIG. 16), (3) the range where it exceeds 2 and is 3 or less, (4) the range where it exceeds 3 and is 4 or less, and (5) the range where it exceeds 4 and is 5 or less (E in FIG. 16).
Then, the data quantization unit 43b takes the time during which the cumulative value is 1 or less (A in FIG. 16), here times "1" through "6" (a in FIG. 16), as window a.
Similarly, the data quantization unit 43b assigns the time during which the cumulative value exceeds 1 and is 2 or less (B in FIG. 16), here the time at time "7", to window b; the time during which it exceeds 2 and is 3 or less, here the time at time "7", to window c; the time during which it exceeds 3 and is 4 or less, here the time at time "8", to window d; and the time during which it exceeds 4 and is 5 or less (E in FIG. 16), here times "9" through "15", to window e.
As a result of this partitioning, the window ranges corresponding to times of large temperature change are narrow, and the window ranges corresponding to times of small temperature change are wide. The data quantization unit 43b can therefore quantize effectively.
(29-6) Finally, for each of the M temperature data, the data quantization unit 43b obtains the average value over the range corresponding to each window partitioned as described above. That is, in each of the ranges into which the cumulative value is divided, the data measured at the timings corresponding to the cumulative values in that range are quantized. This completes the processing of S4-2-2.
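A minimal sketch of this change-adaptive windowing, assuming one-dimensional NumPy arrays, N equal cumulative ranges, and simple floor-based binning (the boundary handling is approximate and illustrative, not part of the original disclosure):

    import numpy as np

    def quantize_adaptive(representative, series_list, n):
        # S4-2-2: window boundaries follow the cumulative "distribution" of
        # absolute temperature changes, so windows are narrow where the
        # representative series changes quickly (assumes a non-constant series).
        rep = np.asarray(representative, dtype=np.float64)
        abs_diff = np.abs(np.diff(rep, prepend=rep[0]))   # |temperature difference|
        dist = abs_diff * n / abs_diff.sum()              # "distribution" per time
        cum = np.cumsum(dist)                             # cumulative value per time
        bins = np.minimum(np.floor(cum).astype(int), n - 1)
        out = []
        for series in series_list:
            s = np.asarray(series, dtype=np.float64)
            # Empty windows would need handling in a real implementation.
            out.append([s[bins == k].mean() for k in range(n)])
        return np.array(out)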
(30) Next, another quantization method will be described. FIG. 17 is a diagram showing an example of temperature data on the body surface of a living body.
(30-1) First, the data quantization unit 43b acquires typical temperature data as shown in FIG.
The representative temperature data may be selected by the analyst from among the measured time-series temperature data, or may be, for example, the average of the M measured time-series temperature data. The following describes an example in which quantization is performed based on the average of the processing results for each of the M time-series temperature data.
(30-2) Next, the data quantization unit 43b converts the temperature at each time in the representative temperature data into a cumulative temperature, the cumulative value along the time series, by equation (2) below. FIG. 18 is a diagram showing an example of the cumulative values of the temperature of the body surface of a living body.
[Equation (2): rendered as an image in the original publication; not reproduced here.]
Here, L in the equation (2) is the size of the temperature data. In the example of FIG. 15, L is 15.
In equation (2), t represents the array of temperature data, max(t) is the maximum value in the array of temperature data, and min(t) is the minimum value in the array of temperature data.
(30-3) The data quantization unit 43b sorts the cumulative temperatures at each time obtained by the above conversion in ascending order, divides them into N-2 equal parts, and determines the quantization ranges. Here, N is 5. FIG. 19 is a diagram showing an example of the quantized values of the temperature data in table format.
In the example shown in FIG. 19, the data quantization unit 43b quantizes temperature data values corresponding to cumulative values of 1.069 or less to "1", those corresponding to cumulative values greater than 1.069 and at most 2.494 to "2", those corresponding to cumulative values greater than 2.494 and at most 2.699 to "3", those corresponding to cumulative values greater than 2.699 and at most 2.716 to "4", and those corresponding to cumulative values exceeding 2.716 to "5".
That is, in the example shown in FIG. 19, the data quantization unit 43b quantizes temperature data values of 2.718°C or more to "1", values of 0.368°C or more and less than 2.718°C to "2", values of 0.030°C or more and less than 0.368°C to "3", values of 0.002°C or more and less than 0.030°C to "4", and values of less than 0.002°C to "5".
As a result, in the example shown in FIG. 19, the temperature data value measured at time "1" is quantized to "1", the values measured from times "2" to "5" are quantized to "2", the values measured from times "6" to "10" are quantized to "3", and the values measured from times "11" to "15" are quantized to "4".
That is, the data quantization unit 43b sorts the calculated cumulative values of the respective times by magnitude, and quantizes the temperature data values at one time or at several consecutive times in the time-series data corresponding to the sorted cumulative values to the same value according to their magnitude.
This makes the quantization window for the portion where the temperature data value changes greatly (times "1" through "5" in the example shown in FIG. 19) narrower than the window for times "6" onward, compared with simply quantizing the temperature data values at fixed time intervals. The data quantization unit 43b can therefore quantize effectively.
(30-4) Finally, for the temperature at each time in the M temperature data, the data quantization unit 43b obtains the average value over the range corresponding to each window partitioned as described above. That is, in each of the ranges into which the cumulative value is divided, the data measured at the timings corresponding to the cumulative values in that range are quantized. This completes the quantization.
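A minimal sketch of the level assignment in this method, using the cumulative-temperature thresholds from the FIG. 19 example; equation (2) itself, which produces the cumulative values, appears only as an image in the original and is not reproduced here (illustrative only):

    import numpy as np

    # Cumulative-temperature thresholds taken from the FIG. 19 example.
    CUM_THRESHOLDS = [1.069, 2.494, 2.699, 2.716]

    def quantization_level(cumulative):
        # Map each time's cumulative temperature to a level 1..5:
        # <= 1.069 -> 1, (1.069, 2.494] -> 2, ..., > 2.716 -> 5.
        cum = np.asarray(cumulative, dtype=np.float64)
        return np.digitize(cum, CUM_THRESHOLDS, right=True) + 1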
(31) As a result of the quantization in S4-2-1 or S4-2-2, the data quantization unit 43b finally generates an array of temperature data of size M × N and an array of work data of size M × 1.
(32) In S4-3, the model learning unit 43c trains the prediction model based on the quantized data. The prediction model is a model that takes the work data of a living body as input and estimates and outputs the state of that living body.
For the learning, for example, the following (a) to (c) can be considered.
(a) Learning of a multiple regression analysis, a support vector regression problem, or a regression problem using a neural network, with the performance value in the work data as the response variable and the temperature data as the explanatory variables.
(b) Learning of a classification problem using logistic regression, a support vector machine, or a neural network, with work data describing the presence or absence of work by the living body as the response variable and the temperature data as the explanatory variables.
(c) Learning of a classification problem using logistic regression, a support vector machine, or a neural network, with a threshold set on the living body's work performance, a value indicating whether the work performance fell below the threshold as the response variable, and the temperature data as the explanatory variables.
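A minimal sketch of variant (b), assuming scikit-learn and the M × N temperature array and M × 1 work array from (31); the file names and library choice are illustrative assumptions, not part of the original disclosure:

    import joblib
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.load("temperature_quantized.npy")  # hypothetical file, shape (M, N)
    y = np.load("work_labels.npy")            # hypothetical file, shape (M,)

    model = LogisticRegression()
    model.fit(X, y)                                # S4-3: classification learning
    joblib.dump(model, "prediction_model.joblib")  # S4-4: save the trained model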
(33) In S4-4, the model learning unit 43c saves the prediction model obtained by the learning in the prediction model storage unit 41c of the storage unit 41.
(Prediction processing)
(34) FIG. 20 is a flowchart showing an example of the prediction processing. The details of S5 will be described with reference to FIG. 20.
In S5-1, the data reading unit 44a of the prediction processing unit 44 inputs the newly measured temperature data and the work data related to the new work by the subject.
(35) In S5-2, the prediction unit 44b predicts information related to the work performance of the subject from the read temperature data and work data.
Specifically, the prediction unit 44b makes predictions as described in (a) to (c) below.
(a) The prediction unit 44b predicts information related to the work performance of the subject by inputting the new temperature data into the trained regression model described in (a) of (32) above.
(b) The prediction unit 44b predicts the presence or absence of work by the subject by inputting the new temperature data into the trained classification model described in (b) of (32) above.
(c) The prediction unit 44b predicts deterioration of the subject's work performance by inputting the new temperature data into the trained classification model described in (c) of (32) above.
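Continuing the sketch above, the prediction step (S5-2) would then look like this (the joblib persistence and file names are illustrative assumptions):

    import joblib
    import numpy as np

    model = joblib.load("prediction_model.joblib")  # saved in S4-4 (hypothetical path)
    x_new = np.load("temperature_new.npy")          # newly measured, shape (1, N)
    print(model.predict(x_new))                     # e.g. presence/absence of work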
(36) In S5-3, the prediction result output unit 44c outputs the result predicted by the prediction unit 44b to the information output device 3 or the like, presenting it to the user.
(37) Although the above describes processing of human face images, the present embodiment can also be applied to animals whose body surface temperature changes, such as wild boars, pigs, and dogs.
As described above, in one embodiment of the present invention, a model is trained, using sets each consisting of an image including color information, information indicating a region in the image, and output information corresponding to the region, to output, for an input image including color information, information indicating a region in the input image and output information corresponding to that region; by inputting into this model an image whose luminance is higher the lower the temperature, information indicating a region in the input image and output information corresponding to that region are output.
This makes it possible to appropriately extract a desired region in a thermal image of a living body while reducing financial and time costs.
FIG. 21 is a block diagram showing an example of the hardware configuration of the information processing apparatus according to the embodiment of the present invention.
In the example shown in FIG. 21, the information processing apparatus 4 according to the above embodiment is configured by, for example, a server computer or a personal computer, and has a hardware processor 111A such as a CPU. A program memory 111B, a data memory 112, an input/output interface 113, and a communication interface 114 are connected to the hardware processor 111A via a bus 120. The same applies to the thermal image measuring device 1 shown in FIG. 1.
The communication interface 114 includes, for example, one or more wireless communication interface units, and enables information to be transmitted to and received from a communication network NW. As the wireless interface, an interface conforming to a low-power wireless data communication standard such as a wireless LAN (Local Area Network) is used.
An input device 20 for the operator and an output device 30, both attached to the information processing apparatus 4, are connected to the input/output interface 113. The input device 20 corresponds to the information input device 2 shown in FIG. 1, and the output device 30 corresponds to the information output device 3 shown in FIG. 1.
The input/output interface 113 captures operation data input by the operator through the input device 20, such as a keyboard, touch panel, touchpad, or mouse, and outputs and displays output data on the output device 30, which includes a display device using liquid crystal, organic EL (Electro Luminescence), or the like. Devices built into the information processing apparatus 4 may be used as the input device 20 and the output device 30, and the input and output devices of another information terminal capable of communicating with the information processing apparatus 4 via a network NW may also be used.
The program memory 111B is a non-transitory tangible storage medium combining, for example, a non-volatile memory that can be written to and read from at any time, such as an HDD (Hard Disk Drive) or SSD (Solid State Drive), with a non-volatile memory such as a ROM (Read Only Memory), and stores the programs necessary for executing the various control processes according to the embodiment.
The data memory 112 is a tangible storage medium combining, for example, the above non-volatile memory with a volatile memory such as a RAM (Random Access Memory), and is used to store various data acquired and created in the course of the various processes.
The information processing apparatus 4 according to one embodiment of the present invention can be configured as a data processing apparatus having, as software-based processing function units, the storage unit 41, the image processing unit 42, the learning processing unit 43, and the prediction processing unit 44 shown in FIG. 2.
The storage unit 41 can be configured using the data memory 112 shown in FIG. 21. However, these areas are not essential components within the information processing apparatus 4; for example, they may be areas provided in a storage device such as an external storage medium, e.g. a USB (Universal Serial Bus) memory, or a database server located in the cloud.
The processing function units of the image processing unit 42, the learning processing unit 43, and the prediction processing unit 44 can each be realized by having the hardware processor 111A read and execute a program stored in the program memory 111B. Some or all of these processing function units may instead be realized in various other forms, including integrated circuits such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field-Programmable Gate Array).
The methods described in each embodiment can be stored, as programs (software means) executable by a computer, in recording media such as magnetic disks (floppy (registered trademark) disks, hard disks, etc.), optical discs (CD-ROM, DVD, MO, etc.), and semiconductor memories (ROM, RAM, flash memory, etc.), and can also be transmitted and distributed via communication media. The programs stored on the medium side include a setting program that configures, within the computer, the software means (including not only execution programs but also tables and data structures) to be executed by the computer. A computer realizing the present apparatus reads a program recorded on a recording medium, constructs the software means with the setting program where applicable, and executes the processing described above with its operation controlled by the software means. The recording media referred to in this specification are not limited to those for distribution, and include storage media such as magnetic disks and semiconductor memories provided inside the computer or in devices connected via a network.
The present invention is not limited to the above embodiment and can be modified in various ways at the implementation stage without departing from its gist. The embodiments may also be combined as appropriate, in which case combined effects are obtained. Furthermore, the above embodiment includes various inventions, and various inventions can be extracted by combinations selected from the plurality of disclosed constituent elements. For example, even if some constituent elements are removed from all the constituent elements shown in the embodiment, a configuration with those constituent elements removed can be extracted as an invention as long as the problem can be solved and the effects can be obtained.
1 ... Thermal image measuring device
2 ... Information input device
3 ... Information output device
4 ... Information processing apparatus
41 ... Storage unit
41a ... Work data storage unit
41b ... Temperature data storage unit
41c ... Prediction model storage unit
42 ... Image processing unit
42a, 43a, 44a ... Data reading unit
42b ... Detection unit
42c ... Calculation unit
43 ... Learning processing unit
43b ... Data quantization unit
43c ... Model learning unit
44 ... Prediction processing unit
44b ... Prediction unit
44c ... Prediction result output unit

Claims (5)

1. An image processing device comprising:
a storage unit that stores a model trained, using sets each consisting of an image including color information, information indicating a region in the image, and output information corresponding to the region, to output, for an input image including color information, information indicating a region in the input image and output information corresponding to the region; and
an output unit that outputs information indicating a region in the input image and output information corresponding to the region by inputting into the model an image whose luminance is higher the lower the temperature.
  2.  The image processing device according to claim 1, further comprising:
     a conversion unit that converts an image in which brightness is higher the higher the temperature into an image in which brightness is higher the lower the temperature,
     wherein the output unit outputs the output information by inputting the image converted by the conversion unit to the model.
  3.  An image processing method for use in an image processing device, the method comprising:
     inputting an image in which brightness is higher the lower the temperature to a model trained, using sets each consisting of an image including color information, information indicating a region in that image, and output information corresponding to that region, to output, for an input image including color information, information indicating a region in the input image and output information corresponding to that region, thereby outputting information indicating a region in the input image and output information corresponding to that region.
  4.  The image processing method according to claim 3, further comprising:
     converting an image in which brightness is higher the higher the temperature into an image in which brightness is higher the lower the temperature,
     wherein the outputting includes outputting the output information by inputting the converted image to the model.
  5.  An image processing program causing a processor to function as each of the units of the image processing device according to claim 1 or 2.
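For illustration only, and not as part of the claims, the following Python sketch shows one way the conversion of claim 2 and the region output of claim 1 might look in practice. The use of NumPy, and of an OpenCV Haar-cascade face detector as a stand-in for the trained model, are assumptions of this sketch; the claims specify no particular library or model.

```python
import numpy as np
import cv2  # assumption: OpenCV is available as the illustration vehicle

def invert_thermal(image: np.ndarray) -> np.ndarray:
    """Claim 2's conversion unit: turn a "hotter = brighter" 8-bit
    thermal image into a "colder = brighter" image."""
    return 255 - image

def detect_regions(thermal: np.ndarray) -> np.ndarray:
    """Claim 1's output unit: feed the inverted thermal image to a
    detector trained on ordinary visible-light images and return the
    regions it finds.

    The bundled Haar-cascade face detector stands in here for the
    trained model of claim 1; it is an illustrative, hypothetical
    choice, not something the claims name.
    """
    inverted = invert_thermal(thermal)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Each detected region is returned as (x, y, width, height).
    return detector.detectMultiScale(inverted, scaleFactor=1.1,
                                     minNeighbors=5)
```

The inversion appears intended to bring the contrast polarity of the thermal image closer to that of an ordinary grayscale photograph, so that a model trained only on images including color information can locate regions in it without collecting thermal images for retraining.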
PCT/JP2020/019658 2020-05-18 2020-05-18 Image processing device, method, and program WO2021234782A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/019658 WO2021234782A1 (en) 2020-05-18 2020-05-18 Image processing device, method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/019658 WO2021234782A1 (en) 2020-05-18 2020-05-18 Image processing device, method, and program

Publications (1)

Publication Number Publication Date
WO2021234782A1 true WO2021234782A1 (en) 2021-11-25

Family

ID=78708220

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/019658 WO2021234782A1 (en) 2020-05-18 2020-05-18 Image processing device, method, and program

Country Status (1)

Country Link
WO (1) WO2021234782A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014002534A1 * 2012-06-26 2014-01-03 Honda Motor Co., Ltd. Object recognition device
JP2019028591A * 2017-07-27 2019-02-21 Mitsubishi Hitachi Power Systems, Ltd. Model learning device, generation method of learnt model, program, learnt model, monitoring device, and monitoring method
JP2019204147A * 2018-05-21 2019-11-28 Denso IT Laboratory, Inc. Learning apparatus, learning method, program, learnt model and lip reading apparatus

Similar Documents

Publication Publication Date Title
JP6837597B2 (en) Active learning systems and methods
JP7341310B2 (en) Image processing for skin analysis, system and method for visualizing skin analysis
Jerritta et al. Emotion recognition from facial EMG signals using higher order statistics and principal component analysis
Stahl et al. Novel machine learning methods for ERP analysis: a validation from research on infants at risk for autism
CA2986204A1 (en) Image classification by brain computer interface
EP3364868B1 (en) Generating natural language representations of mental content from functional brain images
Haase et al. Automated and objective action coding of facial expressions in patients with acute facial palsy
Mousavi et al. Spatio-temporal analysis of error-related brain activity in active and passive brain–computer interfaces
Pakzad et al. CIRCLe: Color invariant representation learning for unbiased classification of skin lesions
Sarapata et al. Video-based activity recognition for automated motor assessment of Parkinson's disease
Subudhi et al. Automated delimitation and classification of autistic disorder using EEG signal
Lu et al. Predicting progressions of cognitive outcomes via high-order multi-modal multi-task feature learning
WO2021234782A1 (en) Image processing device, method, and program
Machado et al. Penalised maximum likelihood estimation in multi-state models for interval-censored data
WO2021234781A1 (en) Information processing device, method, and program
WO2021234784A1 (en) Information processing device, method, and program
WO2021234783A1 (en) Information-processing device, method, and program
JP2018082766A (en) Diagnostic system, diagnostic method and program
Boochoon et al. Deep learning for the assessment of facial nerve palsy: opportunities and challenges
Ma et al. Work engagement recognition in smart office
Xie et al. A vision-based hand hygiene monitoring approach using self-attention convolutional neural network
Morresi et al. Measuring thermal comfort using wearable technology in transient conditions during office activities
Deenadayalan et al. EEG based learner’s learning style and preference prediction for E-learning
Li et al. Calibration error prediction: ensuring high-quality mobile eye-tracking
Papanikolaou et al. Lévy Flight Model of Gaze Trajectories to Assist in ADHD Diagnoses

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20936102

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20936102

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP