WO2020012906A1 - Display device, image processing device, and control method

Display device, image processing device, and control method

Info

Publication number
WO2020012906A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
aerosol
space
type
sensor
Application number
PCT/JP2019/024410
Other languages
French (fr)
Japanese (ja)
Inventor
大山 達史
宮下 万里子
Original Assignee
Panasonic Intellectual Property Management Co., Ltd.
Priority claimed from JP2019108042A (JP7113375B2)
Application filed by Panasonic Intellectual Property Management Co., Ltd.
Priority to CN201980029258.5A (CN112041665A)
Publication of WO2020012906A1
Priority to US17/120,085 (US11694659B2)

Classifications

    • G01N 15/00 - Investigating characteristics of particles; investigating permeability, pore-volume, or surface-area of porous materials
    • G01N 21/49 - Scattering, i.e. diffuse reflection, within a body or fluid
    • G01N 21/64 - Fluorescence; phosphorescence (optically excited)
    • G01S 17/89 - Lidar systems specially adapted for mapping or imaging
    • G06T 15/00 - 3D [three-dimensional] image rendering
    • Y02A 50/20 - Air quality improvement or preservation, e.g. vehicle emission control or emission reduction by using catalytic converters

Definitions

  • the present disclosure relates to a display device, an image processing device, and a control method.
  • Patent Documents 1 and 2 disclose such a terminal device.
  • the present disclosure provides a display device, an image processing device, and a control method capable of accurately presenting the position of an aerosol.
  • In a display device according to one aspect of the present disclosure, a composite image in which a first image obtained by imaging a space with a camera and a second image representing at least one type of aerosol existing in the space are synthesized is displayed on a display screen.
  • the image processing device includes an acquisition circuit that acquires three-dimensional coordinate data representing a position in the space of at least one type of aerosol existing in the space, and a processor.
  • the processor is configured to generate, based on the three-dimensional coordinate data, a composite image in which a first image obtained by imaging the space with a camera and a second image representing the at least one type of aerosol existing in the space are synthesized. The position of the at least one type of aerosol in the depth direction in the first image is reflected in the second image.
  • a control method according to one aspect of the present disclosure is a control method for a system including a sensor and a display device, the sensor including a light source that emits irradiation light toward at least one type of object in a space and a photodetector that detects return light from the at least one type of object, the sensor outputting data representing the result of detection of the return light by the photodetector. The control method includes: acquiring the data from the sensor; generating, based on the data, three-dimensional coordinate data representing the position of the at least one type of object in the space; generating, based on the three-dimensional coordinate data, a composite image in which a first image obtained by imaging the space with a camera and a second image representing the at least one type of object are synthesized, the second image reflecting the position of the at least one type of object in the depth direction in the first image; and causing the display device to display the composite image.
  • one embodiment of the present disclosure can be realized as a program that causes a computer to execute the control method.
  • it can be realized as a non-transitory computer-readable recording medium storing the program.
  • the position of the aerosol can be presented with high accuracy.
  • FIG. 1 is a top view illustrating a space to which the non-contact sensing system according to the embodiment is applied.
  • FIG. 2 is a block diagram illustrating a configuration of the non-contact sensing system according to the embodiment.
  • FIG. 3 is a diagram illustrating an example of the sensor device according to the embodiment.
  • FIG. 4 is a diagram illustrating an example of sensor data output from the sensor device according to the embodiment.
  • FIG. 5 is a diagram showing a conditional expression for determining a management level in the non-contact sensing system according to the embodiment.
  • FIG. 6 is a diagram illustrating an example of a reference value database indicating a reference value for each substance.
  • FIG. 7 is a diagram for explaining a method of determining an aerosol contour by the non-contact sensing system according to the embodiment.
  • FIG. 8 is a diagram illustrating a representative value of the management level for each object in the space obtained by the non-contact sensing system according to the embodiment.
  • FIG. 9 is a diagram illustrating a display example on the display screen of the display device according to the embodiment.
  • FIG. 10 is a sequence diagram showing an operation of the non-contact sensing system according to the embodiment.
  • FIG. 11 is a flowchart illustrating a process of converting captured image data into a 3D database, among operations of the non-contact sensing system according to the embodiment.
  • FIG. 12 is a flowchart illustrating a process of converting the sensor data into a 3D database in the operation of the non-contact sensing system according to the embodiment.
  • FIG. 13 is a diagram illustrating an example of a 3D database generated by the non-contact sensing system according to the embodiment.
  • FIG. 14 is a flowchart illustrating a level distribution generation process in the operation of the non-contact sensing system according to the embodiment.
  • FIG. 15 is a flowchart illustrating a process of generating auxiliary information in the operation of the non-contact sensing system according to the embodiment.
  • FIG. 16 is a diagram showing another example of the display on the display screen of the display device according to the embodiment.
  • FIG. 17 is a diagram illustrating another example of the display on the display screen of the display device according to the embodiment.
  • FIG. 18 is a diagram illustrating another example of the display on the display screen of the display device according to the embodiment.
  • FIG. 19 is a diagram illustrating another example of the display on the display screen of the display device according to the embodiment.
  • FIG. 20 is a diagram showing another example of the display on the display screen of the display device according to the embodiment.
  • FIG. 21 is a diagram illustrating another example of the display on the display screen of the display device according to the embodiment.
  • FIG. 22 is a diagram illustrating a display device integrally including the non-contact sensing system according to the embodiment.
  • In the display device according to an embodiment of the present disclosure, a composite image in which a first image obtained by imaging a space with a camera and a second image representing at least one type of aerosol existing in the space are synthesized is displayed on the display screen.
  • the position of the aerosol in the depth direction is reflected in the second image representing the aerosol. For this reason, the position of the aerosol in the depth direction with respect to the first image as well as the position of the aerosol in the vertical and horizontal directions represented by the first image is displayed on the display screen. Thereby, the position of the aerosol can be accurately presented.
  • For example, the first image represents a two-dimensional space, and the control unit may further generate the second image by projecting three-dimensional coordinate data representing the position of the at least one type of aerosol in the space onto the two-dimensional space, and may synthesize the first image and the second image to generate the composite image.
  • For example, the control unit may further acquire the three-dimensional coordinate data from a sensor that obtains the position of the at least one type of aerosol in the space, convert the first image into a pseudo three-dimensional image, associate the pseudo three-dimensional image with the three-dimensional coordinate data, and generate the second image by projecting the three-dimensional coordinate data onto the two-dimensional space, as illustrated in the sketch below.
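  • As an illustration of such a projection, the following is a minimal sketch that maps three-dimensional aerosol coordinates onto the two-dimensional image plane with a simple pinhole camera model. The function name, parameters, and camera model are illustrative assumptions; the disclosure does not prescribe a specific projection method.

```python
import numpy as np

def project_to_image(points_xyz: np.ndarray, fx: float, fy: float,
                     cx: float, cy: float) -> np.ndarray:
    """Project 3D coordinates (camera frame, z = depth) onto the 2D
    image plane with a pinhole model; z is kept so that the depth of
    each aerosol point remains available for the second image."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    u = fx * x / z + cx  # horizontal pixel coordinate
    v = fy * y / z + cy  # vertical pixel coordinate
    return np.stack([u, v, z], axis=1)
```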
  • the second image may include a contour indicating a range where the at least one type of aerosol exists, and distance information indicating a distance from a reference position in the space to a representative position in the contour.
  • Thereby, the range in which the aerosol exists can be displayed on the display screen, and the position of the aerosol in the depth direction can be displayed as a representative position. For this reason, the position of the aerosol can be displayed simply, that is, in a display mode that is easy for a user viewing the display screen to understand.
  • the representative position may be a center of gravity of the concentration distribution of the at least one type of aerosol in the outline.
  • the position of the aerosol can be presented with high accuracy by setting the center of gravity of the concentration distribution as the representative position.
  • the distance information may be a numerical value indicating the distance.
  • the position of the aerosol can be displayed in a display mode that is easy for the user to understand.
  • the distance information may be a color given in the outline, which is predetermined according to the distance.
  • the position of the aerosol can be displayed in a display mode that is easy for the user to understand.
  • the composite image may show a three-dimensional model including the space and a contour representing a range in which the at least one type of aerosol exists.
  • For example, the second image may be a moving image in which a plurality of images are temporally switched, each of the plurality of images corresponding to a distance from a reference position in the space and including a contour representing a range where the at least one type of aerosol exists at that distance.
  • For example, the second image may further reflect the concentration of the at least one type of aerosol.
  • the second image may include level information indicating a concentration level of the at least one type of aerosol.
  • the aerosol concentration can be displayed in a simple manner, that is, in a display mode that is easy to understand for a user viewing the display screen.
  • the at least one type of aerosol may include a plurality of types of aerosols, and the second image may represent the plurality of types of aerosols in different display modes.
  • control unit may display an image for calling a user's attention on the display screen.
  • The image processing apparatus according to one aspect of the present disclosure includes an acquisition circuit that acquires three-dimensional coordinate data representing the position in the space of at least one type of aerosol existing in the space, and a processor. The processor generates, based on the three-dimensional coordinate data, a composite image in which a first image obtained by imaging the space with a camera and a second image representing the at least one type of aerosol existing in the space are synthesized. The second image reflects the position of the at least one type of aerosol in the depth direction in the first image.
  • the position of the aerosol in the depth direction is reflected in the second image representing the aerosol.
  • In the composite image displayed on the display screen, not only the position of the aerosol in the vertical and horizontal directions represented by the first image but also the position of the aerosol in the depth direction with respect to the first image appears.
  • A control method according to one aspect of the present disclosure is a control method for a system including a sensor and a display device, the sensor including a light source that emits irradiation light toward at least one type of object in a space and a photodetector that detects return light from the at least one type of object, the sensor outputting data representing a result of detection of the return light by the photodetector. The control method includes: acquiring the data from the sensor; generating, based on the data, three-dimensional coordinate data representing the position of the at least one type of object in the space; generating, based on the three-dimensional coordinate data, a composite image in which a first image obtained by imaging the space with a camera and a second image representing the at least one type of object existing in the space are synthesized, the second image reflecting the position of the at least one type of object in the depth direction in the first image; and causing the display device to display the composite image.
  • the display device displays not only the position of the object in the vertical and horizontal directions represented by the first image but also the position of the object in the depth direction with respect to the first image. Thereby, the position of the target object can be presented with high accuracy.
  • For example, the return light is fluorescence emitted when the at least one type of object is excited by the irradiation light, and in the generation of the composite image, the type of the at least one type of object may be further determined by analyzing the fluorescence, and the type may be reflected in the second image.
  • For example, the irradiation light includes a predetermined polarization component, and the type of the at least one type of object may be further determined based on a degree of depolarization of the polarization component included in the return light, and the type may be reflected in the second image.
  • For example, the three-dimensional coordinate data may be generated using a relative positional relationship between the sensor and the object, which is calculated based on a difference between a time at which the irradiation light is emitted and a time at which the return light is detected, and the coordinates of the sensor in the space.
  • Thereby, the detection of the object and the measurement of the distance to the detected object can be executed by the same light source and photodetector.
  • the configuration of the sensor device can be simplified.
  • the at least one type of object may be an organic substance attached to an object existing in the space.
  • Thereby, a substance containing an organic substance, such as vomit or pollen, can be detected, and its position can be presented with high accuracy.
  • the at least one type of object may be an aerosol existing in the space.
  • Thereby, a substance floating in the air, such as pollen or dust, can be detected, and its position can be presented with high accuracy.
  • the return light may be backscattered light generated by the irradiation light being scattered by the at least one type of object.
  • the aerosol can be detected with high accuracy.
  • A computer-readable recording medium according to one aspect of the present disclosure stores a program for controlling a system including a sensor and a display device, the sensor including a light source that emits irradiation light toward at least one type of object in a space and a photodetector that detects return light from the at least one type of object, the sensor outputting data representing a result of detection of the return light by the photodetector. When the program is executed by a computer, the program causes the computer to execute: acquiring the data from the sensor; generating, based on the data, three-dimensional coordinate data representing the position of the at least one type of object in the space; generating a composite image based on the three-dimensional coordinate data; and causing the display device to display the composite image.
  • A program according to one aspect of the present disclosure is a computer-executable program for controlling a system including a sensor and a display device, the sensor including a light source that emits irradiation light toward at least one type of object in a space and a photodetector that detects return light from the at least one type of object, the sensor outputting data representing a result of detection of the return light by the photodetector. The program causes a computer to execute: acquiring the data from the sensor; generating, based on the data, three-dimensional coordinate data representing the position of the at least one type of object in the space; generating, based on the three-dimensional coordinate data, a composite image in which a first image obtained by imaging the space with a camera and a second image reflecting the position of the at least one type of object in the depth direction in the first image are synthesized; and causing the display device to display the composite image.
  • All or a part of a circuit, a unit, a device, a member, or a part, or all or a part of a functional block in a block diagram, may be implemented by, for example, one or more electronic circuits including a semiconductor device, a semiconductor integrated circuit (IC), or an LSI (large-scale integration).
  • the LSI or IC may be integrated on one chip or may be configured by combining a plurality of chips.
  • functional blocks other than the storage element may be integrated on one chip.
  • Here, "LSI" refers to large-scale integration, "IC" to an integrated circuit, and "FPGA" to a Field Programmable Gate Array.
  • The software is recorded on one or more non-transitory recording media such as a ROM, an optical disk, or a hard disk drive, and when the software is executed by a processor, the function specified by the software is executed by the processor and peripheral devices.
  • the system or apparatus may include one or more non-transitory storage media on which the software is recorded, a processor, and any required hardware devices, such as an interface.
  • Each drawing is a schematic diagram and is not necessarily illustrated strictly. Therefore, for example, the scales and the like do not always match between the drawings. In each of the drawings, substantially the same configurations are denoted by the same reference numerals, and redundant description is omitted or simplified.
  • the non-contact sensing system captures an image of a space and detects an object existing in the space in a non-contact manner.
  • the non-contact sensing system displays, on a display screen, a combined image in which a first image indicating a captured space and a second image representing a detected target object are combined. At this time, the position of the detected object in the depth direction in the first image is reflected in the second image.
  • FIG. 1 is a top view showing a space 95 to which the non-contact sensing system according to the present embodiment is applied.
  • the space 95 is, for example, a room in a building such as a residence, an office, a nursing facility, or a hospital.
  • the space 95 is, for example, a space partitioned by walls, windows, doors, floors, ceilings, and the like, and is a closed space, but is not limited thereto.
  • the space 95 may be an outdoor open space.
  • the space 95 may be an internal space of a moving object such as a bus or an airplane.
  • a first target object 90 to be detected by the non-contact sensing system exists in a space 95.
  • the first object 90 is, specifically, an aerosol floating in the space 95.
  • The aerosol includes dust such as dirt or house dust, suspended particulate matter such as PM2.5, biological particles such as pollen, or fine water droplets. The biological particles also include molds and mites floating in the air.
  • The aerosol may also include substances dynamically generated from the human body by, for example, coughing or sneezing.
  • The aerosol may include a substance related to air quality, such as carbon dioxide (CO2).
  • the detection target is not limited to the aerosol.
  • the target object may be an organic stain.
  • The organic stain is food, vomit, or the like attached to an object that forms the space 95, such as a wall, a floor, or furniture, and does not have to be floating in the air.
  • FIG. 2 is a block diagram showing a configuration of the non-contact sensing system 10 according to the present embodiment.
  • The non-contact sensing system 10 includes a camera 20, a first sensor 30, a second sensor 40, a third sensor 50, a computer 60, a server device 70, and a tablet terminal 80.
  • the configuration of the non-contact sensing system 10 is not limited to the example shown in FIG.
  • the non-contact sensing system 10 may include only one of the first sensor 30, the second sensor 40, and the third sensor 50. That is, the number of sensor devices included in the non-contact sensing system 10 may be only one, or may be plural.
  • the non-contact sensing system 10 may not include the computer 60 and the server device 70. Further, for example, the non-contact sensing system 10 may include a display connected to the computer 60 instead of the tablet terminal 80.
  • Each of the camera 20, the first sensor 30, the second sensor 40, the third sensor 50, the server device 70, and the tablet terminal 80 has a communication interface. Various data and information are transmitted and received via the communication interface.
  • the camera 20 generates a captured image by capturing an image of the space 95.
  • the captured image is an example of a first image generated when the camera 20 captures an image of the space 95.
  • the camera 20 is, for example, a fixed-point camera fixed at a position where the space 95 can be imaged, but is not limited thereto.
  • the camera 20 may be a movable camera in which at least one of the shooting direction and the shooting position is variable.
  • the camera 20 may generate a plurality of captured images by imaging the space 95 from a plurality of viewpoints.
  • the camera 20 transmits the captured image data obtained by the imaging to the computer 60.
  • the camera 20 may be a visible light camera that captures a space visible to humans.
  • the first sensor 30, the second sensor 40, and the third sensor 50 are each an example of a sensor device that detects an object to be detected in a non-contact manner. That is, the non-contact sensing system 10 according to the present embodiment includes three sensor devices according to the type of the detection target.
  • the first target object 90 illustrated in FIG. 2 is pollen detected by the first sensor 30.
  • the second object 92 is dust detected by the second sensor 40.
  • the third object 94 is an organic stain detected by the third sensor 50.
  • Each of the first sensor 30, the second sensor 40, and the third sensor 50 is a non-contact sensor device using, for example, LIDAR (Laser Imaging Detection and Ranging).
  • FIG. 3 is a diagram showing a first sensor 30 which is an example of the sensor device according to the present embodiment.
  • the first sensor 30 is an autonomously moving sensor device.
  • the second sensor 40 and the third sensor 50 have the same configuration as the first sensor 30.
  • The first sensor 30 can travel on the floor surface 96 of the space 95. The first sensor 30 emits irradiation light L1 from a predetermined position on the floor surface 96 and then receives return light L2 returning from the first object 90. The first sensor 30 measures the distance to the first object 90 based on the time difference between the emission of the irradiation light L1 and the reception of the return light L2. Further, the first sensor 30 measures the density of the first object 90 based on the intensity of the return light L2.
  • the first sensor 30 includes a light source 32, a photodetector 34, and a signal processing circuit 36, as shown in FIG.
  • the light source 32 is a light source that emits the irradiation light L1 toward the first object 90 in the space 95.
  • the light source 32 is, for example, an LED (Light Emitting Diode) or a laser element.
  • the irradiation light L1 emitted from the light source 32 includes a wavelength component for exciting the first object 90.
  • the irradiation light L1 is light having a peak wavelength in a range from 220 nm to 550 nm.
  • the irradiation light L1 is, for example, pulsed light.
  • the photodetector 34 is a photodetector that detects the return light L2 from the first object 90.
  • The return light L2 detected by the photodetector 34 is fluorescence emitted when the first object 90 is excited by the irradiation light L1 emitted from the light source 32. Fluorescence is light containing longer-wavelength components than the irradiation light L1.
  • the photodetector 34 is, for example, a photodiode having a light receiving sensitivity to a wavelength component of fluorescence.
  • The photodetector 34 outputs an output signal corresponding to the intensity of the received fluorescence to the signal processing circuit 36.
  • the output signal is, for example, an electric signal whose signal intensity increases as the intensity of the received fluorescence increases.
  • the signal processing circuit 36 processes the output signal output from the photodetector 34 to determine the distance to the first object 90 and the density of the first object 90. As shown in FIG. 2, the signal processing circuit 36 includes a position information obtaining unit 37 and a density information obtaining unit 38.
  • the position information acquiring unit 37 acquires position information indicating a three-dimensional position of the first object 90 in the space 95.
  • the position information includes a distance and a direction to the first target object 90.
  • Specifically, the position information acquisition unit 37 calculates the distance by a TOF (Time of Flight) method.
  • the position information acquisition unit 37 acquires distance information based on a time difference between emission of the irradiation light L1 by the light source 32 and detection of fluorescence by the photodetector 34.
  • The distance information includes a distance ri to the first object 90, and a horizontal angle θi and a vertical angle φi indicating the direction in which the first object 90 is detected.
  • the direction in which the first target object 90 is detected is the direction in which the light source 32 emits the irradiation light L1.
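  • A minimal sketch of this TOF distance calculation follows; the function and variable names are illustrative, not part of the disclosure.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(t_emit: float, t_detect: float) -> float:
    """Distance ri to the object from the round-trip time of the
    irradiation light pulse (times in seconds)."""
    return SPEED_OF_LIGHT * (t_detect - t_emit) / 2.0
```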
  • The density information acquisition unit 38 acquires density information indicating the density of the first object 90. Specifically, the density information acquisition unit 38 determines the density of the first object 90 according to the signal strength of the output signal. For example, when the signal intensity is Si, the density Di is calculated based on the following equation (1):

Di = A × Si … (1)

  • Here, A is a constant. The subscript "i" of Di and Si, and of the above-mentioned ri, θi, and φi, indicates the data number of the sensor data. Note that the method of calculating the density Di used by the density information acquisition unit 38 is not limited to this. For example, instead of the output signal itself, the density information acquisition unit 38 may use a signal obtained by removing a noise component from the output signal.
  • the signal processing circuit 36 may determine the type of the first target object 90 by analyzing the fluorescence. Specifically, the signal processing circuit 36 determines the type of the first object 90 based on a combination of the wavelength of the irradiation light and the wavelength of the fluorescence. For example, in the first sensor 30, the light source 32 may emit a plurality of irradiation lights corresponding to a plurality of excitation wavelengths, and the photodetector 34 may receive a plurality of fluorescences corresponding to a plurality of light reception wavelengths. The signal processing circuit 36 can accurately determine the type of the first object 90 that has generated the fluorescence by generating a three-dimensional matrix of the excitation wavelength, the reception wavelength, and the reception intensity, that is, a so-called fluorescent fingerprint.
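  • As a sketch of how such a fluorescent fingerprint could be matched against known substances, the following assumes hypothetical reference data and a nearest-match criterion; the patent does not specify a matching algorithm.

```python
import numpy as np

# Hypothetical reference fingerprints: rows = excitation wavelengths,
# columns = reception wavelengths, values = received intensity.
REFERENCE_FINGERPRINTS = {
    "pollen": np.array([[0.9, 0.2], [0.1, 0.4]]),
    "mold":   np.array([[0.2, 0.8], [0.5, 0.1]]),
}

def match_fingerprint(measured: np.ndarray) -> str:
    """Return the substance whose normalized fingerprint is closest
    to the measured excitation/reception intensity matrix."""
    m = measured / np.linalg.norm(measured)
    return min(REFERENCE_FINGERPRINTS,
               key=lambda name: np.linalg.norm(
                   m - REFERENCE_FINGERPRINTS[name]
                   / np.linalg.norm(REFERENCE_FINGERPRINTS[name])))
```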
  • the signal processing circuit 36 outputs the density information indicating the determined density Di and the position information to the computer 60 as sensor data.
  • the first sensor 30 and the computer 60 are wirelessly connected, for example, so that data can be transmitted and received.
  • the first sensor 30 performs wireless communication based on a wireless communication standard such as Wi-Fi (registered trademark), Bluetooth (registered trademark), or ZigBee (registered trademark). Note that the first sensor 30 and the computer 60 may be connected by wire.
  • the second sensor 40 detects the second object 92 by emitting irradiation light toward the second object 92 and receiving return light from the second object 92.
  • the second object 92 is a substance that does not emit fluorescence, and is, for example, dust.
  • the second sensor 40 includes a light source 42, a photodetector 44, and a signal processing circuit 46.
  • the light source 42, the light detector 44, and the signal processing circuit 46 correspond to the light source 32, the light detector 34, and the signal processing circuit 36 of the first sensor 30, respectively.
  • the light source 42 is a light source that emits irradiation light toward the second object 92.
  • the light source 42 is, for example, an LED or a laser element.
  • the irradiation light emitted from the light source 42 does not need to excite the second object 92. Therefore, a wavelength component selected from a wide wavelength band can be used as the wavelength component of the irradiation light.
  • the irradiation light emitted from the light source 42 is light having a peak wavelength in a range from 300 nm to 1300 nm. That is, the irradiation light may be ultraviolet light, visible light, or near-infrared light.
  • the irradiation light is, for example, pulsed light.
  • the photodetector 44 is a photodetector that detects the return light from the second object 92.
  • the return light detected by the photodetector 44 is backscattered light generated when the irradiation light emitted from the light source 42 is scattered by the second object 92.
  • the backscattered light is, for example, scattered light due to Mie scattering.
  • the backscattered light has the same wavelength component as the irradiation light.
  • the photodetector 44 is a photodiode having a light receiving sensitivity to a wavelength component of irradiation light.
  • the photodetector 44 outputs an output signal corresponding to the intensity of the received backscattered light to the signal processing circuit 46.
  • the output signal is, for example, an electric signal whose signal intensity increases as the intensity of the received backscattered light increases.
  • the irradiation light emitted from the light source 42 may include a predetermined polarization component.
  • the signal processing circuit 46 may determine the type of the second object 92 based on the degree of depolarization of the polarization component included in the return light.
  • the polarization component is, for example, linearly polarized light, but may be circularly polarized light or elliptically polarized light.
  • the signal processing circuit 46 can determine the type of the second object 92 based on the degree of depolarization of the backscattered light. For example, the degree of depolarization of yellow sand is about 10%, and the degree of depolarization of pollen is about 1 to about 4%.
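  • A minimal sketch of such a type determination based on the degree of depolarization, using the example values above; the thresholds are illustrative assumptions, not values given in the disclosure.

```python
def classify_by_depolarization(depolarization_percent: float) -> str:
    """Rough type determination from the depolarization degree of the
    backscattered light: yellow sand is about 10 %, pollen about
    1 % to 4 %; everything else is left undetermined."""
    if depolarization_percent >= 7.0:
        return "yellow sand"
    if 1.0 <= depolarization_percent <= 4.0:
        return "pollen"
    return "unknown"
```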
  • the signal processing circuit 46 determines the distance to the second object 92 and the density of the second object 92 by processing the output signal output from the photodetector 44.
  • the signal processing circuit 46 includes a position information acquisition unit 47 and a density information acquisition unit 48, as shown in FIG. The specific operation of determining the distance and the density is the same as that of the signal processing circuit 36 of the first sensor 30.
  • the third sensor 50 detects the third object 94 by emitting irradiation light toward the third object 94 and receiving return light from the third object 94.
  • the third object 94 is an organic stain that emits fluorescence when irradiated with excitation light.
  • the third sensor 50 includes a light source 52, a photodetector 54, and a signal processing circuit 56.
  • the signal processing circuit 56 includes a position information acquisition unit 57 and a density information acquisition unit 58.
  • the light source 52, the light detector 54, and the signal processing circuit 56 correspond to the light source 32, the light detector 34, and the signal processing circuit 36 of the first sensor 30, respectively.
  • the first sensor 30 and the third sensor 50 differ in the direction in which each light source emits irradiation light. For example, while the light source 32 emits irradiation light toward the air in the space 95, the light source 52 emits irradiation light toward the floor or wall surface of the space 95.
  • the operation of each of the light source 52, the photodetector 54, and the signal processing circuit 56 is the same as that of each of the light source 32, the photodetector 34, and the signal processing circuit 36.
  • the first sensor 30, the second sensor 40, and the third sensor 50 each detect an object located in a direction in which the irradiation light is emitted. At this time, when there are a plurality of objects in the emission direction of the irradiation light, the return light returns at different times according to the positions of the objects. Therefore, a plurality of objects located in the emission direction of the irradiation light can be detected at a time based on the time at which the return light is received. When the target object does not exist in the emission direction of the irradiation light, the return light does not return. Therefore, when the return light does not return, it is detected that the target object does not exist on the path of the irradiation light. Each of the first sensor 30, the second sensor 40, and the third sensor 50 transmits a detection result to the computer 60 as sensor data.
  • FIG. 4 is a diagram showing an example of a database including sensor data output from the sensor device according to the present embodiment.
  • the database shown in FIG. 4 is managed by the processor 64 of the computer 60 and stored in the memory 66.
  • the substance name Mi, the sensor data, and the sensor reference position are associated with each other for each i.
  • The sensor data includes the density Di, the distance ri, the horizontal angle θi, and the vertical angle φi.
  • Data number No. i is attached to each sensor data received by the computer 60.
  • the processor 64 assigns data numbers in ascending order, for example, in the order in which the communication interface 62 receives the sensor data.
  • the substance name Mi is information indicating the type of the detection target.
  • The type of the target object corresponds to each sensor device. Therefore, the processor 64 can determine the substance name Mi corresponding to the sensor data by identifying the transmission source of the sensor data received by the communication interface 62.
  • For example, the sensor data of data number 1 is data transmitted from the first sensor 30, which detects pollen.
  • the density Di is a value calculated based on the above equation (1).
  • Each of the signal processing circuits 36, 46, and 56 of the sensor devices calculates the density Di based on the signal strength Si.
  • The distance ri, the horizontal angle θi, and the vertical angle φi are data indicating the three-dimensional position of the object obtained using LIDAR. Since the position data obtained by LIDAR is expressed in a polar coordinate system, in the present embodiment the computer 60 converts the position data into a three-dimensional orthogonal coordinate system. Details of the coordinate conversion will be described later.
  • the sensor reference position is, for example, the installation position of the sensor device that has transmitted the sensor data among the first sensor 30, the second sensor 40, and the third sensor 50.
  • the sensor reference position does not change.
  • the computer 60 is an example of an image processing device, and includes a communication interface 62, a processor 64, and a memory 66, as shown in FIG.
  • the communication interface 62 transmits and receives data by communicating with each device constituting the non-contact sensing system 10.
  • Communication with each device is, for example, wireless communication based on a wireless communication standard such as Wi-Fi (registered trademark), Bluetooth (registered trademark), or ZigBee (registered trademark), but may be wired communication.
  • the communication interface 62 is an example of an acquisition circuit that acquires three-dimensional coordinate data.
  • the communication interface 62 acquires sensor data from each of the first sensor 30, the second sensor 40, and the third sensor 50 by communicating with each of the first sensor 30, the second sensor 40, and the third sensor 50.
  • the sensor data includes position information, which is an example of three-dimensional coordinate data representing the position of at least one type of object in the space 95. Further, the sensor data includes density information.
  • The three-dimensional coordinate data is generated using the relative positional relationship between the sensor device and the object, which is calculated based on the difference between the time at which the irradiation light is emitted and the time at which the return light is detected, and the coordinates of the sensor device in the space 95. The relative positional relationship corresponds to the distance ri shown in FIG. 4.
  • The coordinates of the sensor device in the space 95 correspond to the coordinates (x0, y0, z0) indicating the sensor reference position shown in FIG. 4.
  • the communication interface 62 acquires captured image data from the camera 20 by, for example, communicating with the camera 20.
  • the communication interface 62 may transmit a control signal including a shooting instruction or a sensing instruction to at least one of the camera 20, the first sensor 30, the second sensor 40, and the third sensor 50.
  • the communication interface 62 further transmits level distribution information corresponding to the concentration distribution of the target object to the server device 70 by communicating with the server device 70.
  • the communication interface 62 transmits the composite image data to the tablet terminal 80 by communicating with the tablet terminal 80.
  • the processor 64 generates a composite image based on the sensor data acquired by the communication interface 62.
  • The composite image is an image in which a captured image representing the space 95 captured by the camera 20 and an object image are synthesized.
  • the object image is an example of a second image representing at least one type of object existing in the space 95.
  • the processor 64 generates a concentration distribution of the object in the space 95 based on the sensor data. Specifically, the processor 64 generates a three-dimensional distribution of the density by expressing the space 95 by coordinates in a three-dimensional orthogonal coordinate system and associating the density with each coordinate.
  • the x-axis, y-axis, and z-axis shown in FIG. 1 indicate three axes of a three-dimensional orthogonal coordinate system.
  • the x axis and the y axis are two axes parallel to the floor of the space 95, and the z axis is one axis perpendicular to the floor.
  • the setting example of the three axes is not limited to this.
  • the processor 64 generates a level distribution which is an example of the concentration distribution of the object.
  • the level distribution is a distribution of the management level Ci determined based on the density information.
  • the density Di is classified into a plurality of level values according to its magnitude.
  • the management level Ci is a level value at which the density Di indicated by the density information is classified.
  • Specifically, the processor 64 determines the management level Ci based on the conditional expression shown in FIG. 5.
  • FIG. 5 is a diagram showing a conditional expression for determining the management level Ci in the non-contact sensing system 10 according to the present embodiment.
  • the conditional expression is stored in the memory 66, for example.
  • the management level Ci is represented by five levels from “1” to “5”. Based on the relationship between the density Di and the reference value Lm, the processor 64 determines the management level Ci. As shown in FIG. 6, the reference value Lm is a value predetermined for each type of target object.
  • FIG. 6 is a diagram illustrating an example of a reference value database indicating a reference value for each substance. The reference value database is stored in the memory 66, for example.
  • the level of the management level Ci is not limited to five levels, but may be two levels, three levels, or four levels, or may be six levels or more. In the conditional expression shown in FIG. 5, the value of the coefficient (for example, “0.4”) by which the reference value Lm is multiplied is merely an example.
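  • A minimal sketch of such a conditional expression follows, assuming breakpoints at 0.4, 0.7, 1.0, and 1.3 times the reference value Lm; only the coefficient 0.4 is mentioned above, so the other breakpoints are illustrative assumptions, and FIG. 5 defines the actual expression.

```python
def management_level(density: float, reference: float) -> int:
    """Classify density Di into five management levels Ci relative to
    the per-substance reference value Lm."""
    for level, coeff in enumerate((0.4, 0.7, 1.0, 1.3), start=1):
        if density < coeff * reference:
            return level
    return 5  # highest level: density is at or above 1.3 x Lm
```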
  • the processor 64 further determines the contour of the object based on the generated three-dimensional distribution. Further, the processor 64 determines a predetermined position in the determined contour as a representative position.
  • the object image includes the determined outline and the representative position.
  • the processor 64 determines the contour of the target object based on the density Di for each coordinate. Specifically, the processor 64 determines the contour of the aerosol existing in the space 95 based on the management level Ci calculated based on the density Di for each coordinate.
  • FIG. 7 is a diagram for explaining a method of determining an aerosol contour by the non-contact sensing system 10 according to the present embodiment.
  • a method of determining a contour in a two-dimensional level distribution defined by the x-axis and the y-axis will be described.
  • the same method can be applied to a three-dimensional case.
  • the management level Ci is calculated for each coordinate represented by the x coordinate and the y coordinate.
  • the processor 64 determines, for example, a region where the management level Ci is equal to or greater than the set value, and determines the outline of the region as the aerosol outline. For example, when the set value is “2”, the processor 64 determines the outline 90 a of the area where the management level Ci is “2” or more as the aerosol outline. In FIG. 7, the area where the management level Ci is “2” or more is shaded with dots. The example shown in FIG. 7 indicates that aerosol was detected at two places in the space.
  • the set value for determining the contour may be changeable. For example, when the set value is increased, only the portion where the concentration of the aerosol is sufficiently high can be determined as the aerosol existence range. Alternatively, when the set value is reduced, it can be determined as the existence range of the aerosol including the portion where the concentration of the aerosol is low.
  • the processor 64 may determine the contour for each set value using a plurality of set values. For example, in the example shown in FIG. 7, a contour 90a corresponding to the set value "2" and a contour 90b corresponding to the set value "3" are determined.
  • the contour 90a is the outermost contour of the determined plurality of contours, and corresponds to a contour indicating an aerosol existence range.
  • the outline 90b corresponds to an outline indicating a region where the concentration of the aerosol is higher in the range where the aerosol exists. As described above, the difference in the concentration of the aerosol can be represented by the outline within the aerosol existing range.
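  • A minimal sketch of this contour determination on a two-dimensional level distribution (a grid of management levels Ci) follows; array and function names are illustrative. Connected regions of the resulting mask correspond to the two detection areas in FIG. 7 and could, for example, be separated with scipy.ndimage.label.

```python
import numpy as np

def existence_mask(level_grid: np.ndarray, set_value: int = 2) -> np.ndarray:
    """Boolean mask of grid cells whose management level Ci is at or
    above the set value. The boundary of the mask is the aerosol
    contour (set value 2 yields contour 90a, set value 3 contour 90b)."""
    return level_grid >= set_value
```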
  • the representative position in the contour is the center of gravity of the aerosol concentration distribution in the contour.
  • The processor 64 determines the center of gravity based on the management level Ci for each coordinate existing in the contour. For example, when the coordinates of the center of gravity are (Xc, Yc, Zc), the processor 64 determines the coordinates of the center of gravity based on the following equation (2):

Xc = Σ(Ci × Xi) / Σ(Ci), Yc = Σ(Ci × Yi) / Σ(Ci), Zc = Σ(Ci × Zi) / Σ(Ci) … (2)

  • Here, Σ() is an arithmetic symbol representing the sum of the values in the parentheses, and i ranges over the coordinates located within the determined contour.
  • the representative position may be the center of gravity of the three-dimensional figure having the determined contour as the outer periphery.
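  • A minimal sketch of the center-of-gravity calculation of equation (2), restricted to the coordinates inside the contour; the array names are illustrative.

```python
import numpy as np

def representative_position(coords: np.ndarray, levels: np.ndarray) -> np.ndarray:
    """Management-level-weighted center of gravity (Xc, Yc, Zc).
    coords has shape (N, 3); levels has shape (N,) and holds Ci for
    each coordinate inside the contour."""
    weights = levels.astype(float)
    return (coords * weights[:, None]).sum(axis=0) / weights.sum()
```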
  • the memory 66 is a storage device for storing captured image data and sensor data.
  • the memory 66 stores a program executed by the processor 64, parameters necessary for executing the program, and the like.
  • the memory 66 also functions as an area for executing a program by the processor 64.
  • the memory 66 has, for example, a non-volatile memory such as an HDD (Hard Disk Drive) or a semiconductor memory, and a volatile memory such as a RAM (Random Access Memory).
  • The server device 70 receives the level distribution information transmitted from the computer 60 and performs a predetermined process using the received level distribution information. Specifically, the server device 70 alerts a person using the space 95 based on the level distribution information. For example, the server device 70 generates a caution image, which is an image for alerting, and transmits the generated caution image to the tablet terminal 80.
  • the server device 70 determines whether or not the detected concentration of at least one type of target object exceeds a threshold value. Specifically, the server device 70 determines whether or not the representative management level C in the space 95 exceeds a threshold. If the server device 70 determines that the representative management level C exceeds the threshold, it generates a caution image.
  • the threshold value is a predetermined fixed value, but is not limited to this. For example, the threshold may be appropriately updated by machine learning.
  • the representative management level C is calculated based on, for example, a representative value Cm of the management level for each object.
  • the representative value Cm is a value representing the management level of the corresponding object, and is, for example, the maximum value of the management level in the level distribution of the corresponding object.
  • the server device 70 calculates a representative value Cm for each object based on the level distribution.
  • FIG. 8 is a diagram showing a representative value Cm of the management level for each object in the space 95 obtained by the non-contact sensing system 10 according to the present embodiment.
  • the server device 70 calculates the representative management level C by averaging the representative values for each object. For example, in the example shown in FIG. 8, the representative management level C is “3.8”.
  • the representative management level C may not be the average value of the plurality of representative values Cm.
  • the representative management level C may be a weighted addition value of a plurality of representative values Cm.
  • For example, when the weight of pollen and dust is set to 1, the weights of CO2, moisture, and surface organic stain may be set to 0.3, 0.1, and 0.1, respectively.
  • the weight value is not limited to these, and may be changeable based on an instruction from a user or the like.
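  • A minimal sketch of the weighted calculation of the representative management level C described above, reading the weighted addition as a weighted average of the per-object representative values Cm; the exact expression is not given in the disclosure, so this reading and the weight table are assumptions.

```python
# Illustrative weights from the example above.
WEIGHTS = {"pollen": 1.0, "dust": 1.0, "CO2": 0.3,
           "moisture": 0.1, "organic stain": 0.1}

def representative_management_level(cm_values: dict) -> float:
    """Weighted average of the per-object representative values Cm."""
    total_weight = sum(WEIGHTS[name] for name in cm_values)
    weighted_sum = sum(WEIGHTS[name] * cm for name, cm in cm_values.items())
    return weighted_sum / total_weight
```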
  • the server device 70 may control an air conditioner installed in the space 95. Alternatively, the server device 70 may give preventive advice for suppressing an increase in the concentration of pollen or dust, for example.
  • the preventive advice is, for example, an instruction to prompt the user to ventilate the space 95 or an instruction to drive the device such as an air purifier disposed in the space 95.
  • the server device 70 outputs image data or audio data including preventive advice to the tablet terminal 80. For example, the server device 70 obtains information on alerting or preventive advice by referring to weather observation data and the like. In addition, the server device 70 may generate information on alerting or preventive advice by performing machine learning based on a temporal change in the concentration or the management level.
  • the tablet terminal 80 is a portable information processing terminal.
  • the tablet terminal 80 may be, for example, a multifunctional information terminal such as a tablet PC or a smartphone, or may be an information terminal dedicated to the non-contact sensing system 10.
  • the tablet terminal 80 is an example of a display device including a display screen 82 and a control unit 84.
  • the display screen 82 displays the composite image.
  • the display screen 82 is, for example, a liquid crystal display panel, but is not limited to this.
  • the display screen 82 may be a self-luminous display panel using an organic EL (Electroluminescence) element.
  • the display screen 82 is, for example, a touch panel display, and may be capable of receiving an input from a user.
  • the control unit 84 causes the display screen 82 to display the composite image.
  • the control unit 84 includes, for example, a nonvolatile memory storing a program, a volatile memory serving as a temporary storage area for executing the program, an input / output port, a processor executing the program, and the like.
  • control unit 84 acquires the composite image data transmitted from the computer 60, and displays the composite image on the display screen 82 based on the acquired composite image data. For example, the control unit 84 causes the display screen 82 to display the composite image 100 shown in FIG.
  • FIG. 9 is a diagram showing a display example on the display screen 82 of the tablet terminal 80 which is an example of the display device according to the present embodiment. As shown in FIG. 9, the composite image 100 is displayed on the display screen 82.
  • the composite image 100 is an image in which the captured image 101 and the aerosol image 102 are composited.
  • the composite image 100 is, for example, a still image.
  • the photographed image 101 represents the space 95 captured by the camera 20.
  • the captured image 101 is an example of a first image.
  • the captured image 101 is an image obtained by capturing the space 95 in the horizontal direction, but is not limited to this.
  • The captured image 101 may be, for example, an image obtained by imaging the space 95 from above. In this case, the captured image 101 corresponds to the top view illustrated in FIG. 1.
  • the aerosol image 102 is an example of an object image representing at least one type of object existing in the space 95.
  • the aerosol image 102 represents pollen, which is an example of an aerosol.
  • the aerosol image 102 reflects the position of at least one type of object in the captured image 101 in the depth direction.
  • the aerosol image 102 is an example of a second image.
  • the aerosol image 102 includes a contour 102a and distance information 102b.
  • the outline 102a represents, for example, a range where the first object 90 detected by the first sensor 30 exists.
  • the distance information 102b is a numerical value indicating the distance from the reference position to the representative position in the outline 102a.
  • the reference position is a position existing in the space 95.
  • the reference position is the installation position of the camera 20.
  • the reference position may be a position of a person or an apparatus such as an air purifier existing in the space 95.
  • the aerosol image 102 may reflect the concentration of the aerosol.
  • the aerosol image 102 may include level information indicating the management level Ci of the concentration of the aerosol.
  • the aerosol image may represent two or more types of aerosols in different display modes. Further, when the concentration of the aerosol exceeds the threshold value, a caution image for calling the user's attention may be displayed on the display screen 82.
  • FIG. 10 is a sequence diagram showing an operation of the non-contact sensing system 10 according to the present embodiment.
  • the camera 20 captures an image of the space 95 (S10).
  • the camera 20 transmits the captured image data obtained by the imaging to the computer 60 (S12).
  • the first sensor 30 performs a process of detecting the first object 90 (S14). Specifically, in the first sensor 30, the light source 32 emits irradiation light toward the first target 90, and the photodetector 34 receives the return light from the first target 90.
  • the signal processing circuit 36 generates sensor data including the distance and the density of the first object 90 based on the signal intensity of the return light.
  • the first sensor 30 transmits the generated sensor data to the computer 60 (S16).
  • the second sensor 40 performs a process of detecting the second object 92 (S18). Specifically, in the second sensor 40, the light source 42 emits irradiation light toward the second object 92, and the photodetector 44 receives return light from the second object 92. The signal processing circuit 46 generates sensor data including the distance and the density of the second object 92 based on the signal intensity of the return light. The second sensor 40 transmits the generated sensor data to the computer 60 (S20).
  • the third sensor 50 performs the detection processing of the third object 94 (S22). Specifically, in the third sensor 50, the light source 52 emits irradiation light toward the third object 94, and the photodetector 54 receives return light from the third object 94. The signal processing circuit 56 generates sensor data including the distance and the density of the third object 94 based on the signal intensity of the return light. The third sensor 50 transmits the generated sensor data to the computer 60 (S24).
  • Any of the imaging by the camera 20 (S10), the detection process by the first sensor 30 (S14), the detection process by the second sensor 40 (S18), and the detection process by the third sensor 50 (S22) may be performed first, or they may be performed simultaneously.
  • The imaging (S10) and the detection processes (S14, S18, and S22) may be performed at timings based on an instruction from the computer 60, the server device 70, or the like.
  • Each device transmits the captured image data or the sensor data when the captured image data or the sensor data is obtained. Alternatively, each device may transmit captured image data or sensor data when receiving a request from the computer 60.
  • the computer 60 receives the captured image data and each piece of sensor data, and performs a process of creating a 3D database based on them (S26). Specifically, the processor 64 of the computer 60 converts the two-dimensional captured image into a pseudo three-dimensional image. Further, the processor 64 converts the sensor data, obtained in a polar coordinate system, into a three-dimensional orthogonal coordinate system.
  • FIG. 11 is a flowchart showing a process of converting captured image data into a 3D database in the operation of the non-contact sensing system 10 according to the present embodiment.
  • FIG. 11 is an example of the detailed operation of step S26 in FIG.
  • the processor 64 acquires captured image data via the communication interface 62 (S102).
  • the captured image included in the captured image data is a two-dimensional image.
  • the processor 64 converts the two-dimensional captured image into a pseudo three-dimensional image using a generally known 2D-to-3D conversion technique (S104).
  • the captured image data may include a distance image indicating a distance to a wall, a floor, and a ceiling constituting the space 95, a person and furniture located in the space 95, and the like.
  • the captured image data may include a plurality of captured images captured from a plurality of different viewpoints.
  • the processor 64 may generate a three-dimensional image using a captured image and a distance image, or using a plurality of captured images. Thereby, the accuracy of the three-dimensional image can be increased.
  • FIG. 12 is a flowchart showing a process of converting the sensor data into a 3D database in the operation of the non-contact sensing system 10 according to the present embodiment.
  • FIG. 12 is an example of the detailed operation of step S26 in FIG.
  • the processor 64 acquires sensor data from the database stored in the memory 66 (S112). Specifically, the processor 64 acquires the distance ri, the horizontal angle θi, the vertical angle φi, and the substance name Mi. The processor 64 converts the acquired sensor data into spatial coordinates in a three-dimensional orthogonal coordinate system based on equation (3) (S114).
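  • Equation (3) is referenced above but not reproduced in this text. A conversion from the polar measurements (ri, θi, φi) to orthogonal space coordinates would conventionally take the spherical-to-Cartesian form sketched below; this is a hedged reconstruction under assumed axis conventions, not the patent's literal equation, and it omits any offset for the sensor's own installation coordinates.

    import math

    def to_space_coordinates(r, theta_deg, phi_deg):
        # Convert distance ri, horizontal angle (theta)i, and vertical angle
        # (phi)i into orthogonal coordinates (Xi, Yi, Zi). The actual
        # equation (3) may differ in axis conventions.
        theta = math.radians(theta_deg)
        phi = math.radians(phi_deg)
        x = r * math.cos(phi) * math.cos(theta)
        y = r * math.cos(phi) * math.sin(theta)
        z = r * math.sin(phi)
        return x, y, z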
  • Either the pseudo three-dimensionalization of the captured image data shown in FIG. 11 or the three-dimensionalization of the sensor data shown in FIG. 12 may be performed first, or the two may be performed simultaneously.
  • the spatial coordinates (Xi, Yi, Zi) expressed in the three-dimensional orthogonal coordinate system are associated with the data numbers No. i.
  • FIG. 13 is a diagram illustrating an example of a 3D database generated by the non-contact sensing system 10 according to the present embodiment.
  • a substance name Mi, a concentration Di, a management level Ci, and space coordinates (Xi, Yi, Zi) are associated with each i.
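  • Put together, one row of the 3D database of FIG. 13 might then look like the following sketch; the stored types and units are assumptions.

    # One illustrative row of the 3D database, keyed by data number i.
    db_row = {
        "no": 1,
        "substance": "pollen",      # substance name Mi
        "density": 120.0,           # concentration Di
        "management_level": 2,      # management level Ci
        "xyz": (2.0, 1.2, 0.8),     # space coordinates (Xi, Yi, Zi) [m]
    }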
  • After the 3D database is generated, the computer 60 generates a level distribution based on the generated 3D database, as shown in FIG. 10 (S28). The computer 60 transmits level distribution information indicating the generated level distribution to the server device 70 (S30).
  • FIG. 14 is a flowchart showing a level distribution generation process in the operation of the non-contact sensing system 10 according to the present embodiment.
  • FIG. 14 shows an example of the detailed operation of step S28 in FIG.
  • the processor 64 acquires density information and spatial coordinates (Xi, Yi, Zi) by reading from the memory 66 (S122).
  • the processor 64 determines the management level Ci based on the comparison with the reference value Lm for each substance, and generates a level distribution (S124).
  • the processor 64 determines a contour and a representative position in the contour based on the generated level distribution (S126). The process of determining the contour and the representative position is, for example, as described above with reference to FIG. 7.
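  • As a sketch of the comparison in step S124 under stated assumptions: the per-substance reference values Lm would live in the reference value database of FIG. 6, and the thresholds below are placeholders.

    def management_level(density, reference_values):
        # Assign a management level Ci by counting how many of the
        # per-substance reference values Lm (sorted ascending) the
        # concentration Di exceeds.
        level = 0
        for lm in reference_values:
            if density > lm:
                level += 1
        return level

    # e.g. with assumed reference values Lm = [50, 100, 200],
    # a concentration Di of 120 yields management level Ci = 2.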
  • After the level distribution is generated, the computer 60 generates a composite image, as shown in FIG. 10 (S32). Specifically, the computer 60 generates an image including the contour and the distance information by mapping the level distribution onto the captured image, and composites that image with the captured image. The computer 60 transmits the composite image data to the tablet terminal 80 (S34).
  • the image including the contour and the distance information is an example of a second image generated by projecting three-dimensional coordinate data, which represents the position in the space of at least one type of aerosol, onto the two-dimensional space represented by the captured image.
  • the image including the contour and the distance information is, for example, the aerosol image 102 shown in FIG. 9.
  • the computer 60 generates the image including the contour and the distance information by projecting the three-dimensional coordinate data representing the position of the at least one type of aerosol in the space onto the two-dimensional space represented by the captured image.
  • the computer 60 generates this image by expanding the captured image into a pseudo three-dimensional image, associating the expanded three-dimensional image with the three-dimensional coordinate data, and then performing the projection.
  • associating the three-dimensional image with the three-dimensional coordinate data means aligning the origin and the three axes of the coordinate system of the three-dimensional image with the origin and the three axes of the coordinate system of the three-dimensional coordinate data, so that they coincide at the same positions in the space.
  • the computer 60 generates the composite image by compositing the image including the contour and the distance information with the captured image.
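  • A minimal sketch of the projection described here, assuming an ideal pinhole camera aligned with the shared coordinate system (the description only requires that the origin and three axes coincide; the intrinsics below are placeholders):

    def project_to_image(xyz, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
        # Project one 3D aerosol coordinate into the 2D captured image.
        # Assumes the camera looks along +Z of the shared coordinate system;
        # fx, fy, cx, cy are illustrative camera intrinsics.
        x, y, z = xyz
        if z <= 0:
            return None  # behind the camera; not visible in the image
        u = fx * x / z + cx
        v = fy * y / z + cy
        return (u, v)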
  • the server device 70 acquires auxiliary information based on the level distribution information transmitted from the computer 60 (S36).
  • the auxiliary information is information including, for example, an alert or preventive advice.
  • the server device 70 transmits the acquired auxiliary information to the tablet terminal 80 (S38).
  • FIG. 15 is a flowchart illustrating a process of generating auxiliary information in the operation of the non-contact sensing system 10 according to the present embodiment.
  • FIG. 15 shows an example of the detailed operation of step S36 in FIG.
  • the server device 70 determines a representative value Cm of the management level for each object in the space 95 (S132). Next, the server device 70 determines a representative management level C in the space 95 (S134). The specific method of determining the representative management level C is as described above with reference to FIG. 8.
  • the server device 70 compares the representative management level C with the threshold (S136). When the representative management level C is larger than the threshold (Yes in S136), the server device 70 generates a caution image (S138). Preventive advice may be generated instead of a caution image. When the representative management level C is equal to or smaller than the threshold (No in S136), the processing for generating the auxiliary information ends.
  • the server device 70 may compare the representative value Cm of the management level for each object with the threshold value. That is, the server device 70 need not determine the representative management level C. For example, when at least one representative value Cm among the representative values Cm of the management levels of a plurality of objects such as pollen and dust is larger than a threshold, the server device 70 may generate a caution image.
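  • A sketch of the decision in steps S136 to S138, including the per-object variant just described (the threshold and names are assumptions):

    def needs_caution(cm_by_object, threshold=2):
        # cm_by_object maps each object (e.g. "pollen", "dust") to its
        # representative management level Cm. Per the variant above, one
        # Cm exceeding the threshold is enough to trigger a caution image.
        return any(cm > threshold for cm in cm_by_object.values())

    # e.g. needs_caution({"pollen": 3, "dust": 1}) evaluates to True.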
  • the tablet terminal 80 acquires the composite image data transmitted from the computer 60 and the auxiliary information transmitted from the server device 70, and displays the composite image on the display screen 82.
  • the composite image displayed on the display screen 82 may not include the auxiliary information.
  • a composite image 100 as shown in FIG. 9 is displayed on the display screen 82.
  • FIG. 9 shows a display example that does not include auxiliary information. A display example including the auxiliary information will be described later with reference to FIG. 19.
  • FIG. 16 is a diagram showing another example of the display on the display screen 82 of the tablet terminal 80 according to the present embodiment. As shown in FIG. 16, a composite image 110 is displayed on the display screen 82.
  • the composite image 110 is an image in which the captured image 101 and the aerosol images 112 and 114 are composited.
  • Each of the aerosol images 112 and 114 is an example of a second image representing at least one type of aerosol present in the space 95.
  • the aerosol images 112 and 114 each represent pollen.
  • the aerosol image 112 includes a contour 112a and distance information 112b.
  • the aerosol image 114 includes a contour 114a and distance information 114b.
  • the distance information 112b is a color given in the outline 112a, which is predetermined according to the distance.
  • the type or shade of color is predetermined in accordance with the distance.
  • the color is represented by the density of the hatched dots provided in the outline 112a.
  • the color given in the outline 114a as the distance information 114b is a darker color than the color given in the outline 112a as the distance information 112b.
  • the composite image 110 shows that the pollen represented by the aerosol image 114 is at a shorter distance than the pollen represented by the aerosol image 112.
  • the distance information 112b and 114b may be represented by shades of gray instead of colors.
  • the distance may also be represented by the density of dots provided in the outline 112a or 114a.
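  • One way such a distance-to-shade mapping could be realized is sketched below; the distance range and gray ramp are assumptions, chosen so that nearer aerosols are drawn darker, as in the example above.

    def distance_to_gray(distance_m, d_min=0.5, d_max=3.0):
        # Map a distance to an 8-bit gray value: near (dark) to far (light).
        # d_min and d_max are illustrative limits of the displayed range.
        t = (distance_m - d_min) / (d_max - d_min)
        t = min(max(t, 0.0), 1.0)
        return int(64 + t * (224 - 64))  # 64 = near/dark, 224 = far/light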
  • the aerosol image 112 further includes level information 112c.
  • the aerosol image 114 further includes level information 114c.
  • the level information 112c indicates the type and density of the aerosol represented by the aerosol image 112.
  • the density represented by the level information 112c is, for example, a value representing the management level Ci of each coordinate in the outline 112a.
  • the level information 112c indicates the maximum value or the average value of the management level Ci of each coordinate in the outline 112a.
  • the level information 112c includes a character representing pollen, which is the type of aerosol, and a numerical value indicating the management level Ci. The same applies to the level information 114c.
  • Since the distance to the aerosol is displayed in a display mode other than numerical values, an increase in the number of characters, including numerals, in the image is suppressed, and clutter can be reduced.
  • Meanwhile, the concentration of the aerosol can be represented using a numerical value and characters. This makes it possible to increase the amount of information that can be presented to the user while suppressing clutter in the image.
  • FIG. 17 is a diagram showing another example of the display on the display screen 82 of the tablet terminal 80 according to the present embodiment. As shown in FIG. 17, on the display screen 82, a composite image 120 is displayed.
  • the composite image 120 is an image in which the captured image 101 and the aerosol images 122, 124, 126, and 128 are composited.
  • Each of the aerosol images 122, 124, 126 and 128 represents at least one type of aerosol present in the space 95.
  • the aerosol images 122 and 128 represent pollen.
  • Aerosol images 124 and 126 represent dust.
  • the aerosol image 122 includes a contour 122a and distance information 122b.
  • the aerosol image 124 includes an outline 124a and distance information 124b.
  • the aerosol image 126 includes an outline 126a and distance information 126b.
  • the aerosol image 128 includes an outline 128a and distance information 128b.
  • Each of the distance information 122b, 124b, 126b, and 128b is a numerical value representing the distance, similarly to the composite image 100 shown in FIG. 9.
  • the aerosol image 122 further includes the level information 122c.
  • the aerosol image 124 further includes level information 124c.
  • the aerosol image 126 further includes level information 126c.
  • the aerosol image 128 further includes level information 128c.
  • the level information 122c is a color or hatching applied within the outline 122a. Specifically, the level information 122c indicates the magnitude of the management level Ci by the shade of the color or the density of the hatching. For example, a darker color or denser hatching indicates a higher management level Ci, and a lighter color or sparser hatching indicates a lower management level. The same applies to the level information 124c, 126c, and 128c.
  • the level information 122c also indicates the type of the aerosol by the type of color or hatching. That is, the same type of color or hatching means the same type of aerosol. For example, in the example shown in FIG. 17, dot hatching represents pollen, and grid-like hatching represents dust. The same applies to the level information 124c, 126c, and 128c.
  • the aerosol represented by the aerosol image 122 is of the same type as the aerosol represented by the aerosol image 128, and has a lower concentration and a greater distance than the aerosol represented by the aerosol image 128.
  • the aerosol represented by the aerosol image 124 is of the same type as the aerosol represented by the aerosol image 126, and has a higher concentration and a shorter distance than the aerosol represented by the aerosol image 126.
  • FIG. 18 is a diagram showing another example of the display on the display screen 82 of the tablet terminal 80 according to the present embodiment. As shown in FIG. 18, a composite image 130 is displayed on the display screen 82.
  • the composite image 130 is an image in which the photographed image 101 and the aerosol images 132, 134, 136, and 138 are composited. Aerosol images 132, 134, 136 and 138 each represent at least one aerosol present in space 95. In the example shown in FIG. 18, the aerosol images 132 and 138 represent pollen. Aerosol images 134 and 136 represent dust.
  • the aerosol image 132 includes a contour 132a, distance information 132b, and level information 132c.
  • the aerosol image 134 includes an outline 134a, distance information 134b, and level information 134c.
  • the aerosol image 136 includes an outline 136a, distance information 136b, and level information 136c.
  • the aerosol image 138 includes an outline 138a, distance information 138b, and level information 138c.
  • the distance information 132b, 134b, 136b, and 138b are each a color given within the outline, predetermined according to the distance, similarly to the composite image 110 shown in FIG. 16.
  • the distance information 132b, 134b, 136b, and 138b also indicates the type of the aerosol by the type of color or hatching. That is, the same type of color or hatching means the same type of aerosol.
  • dot hatching represents pollen, and grid-like hatching represents dust.
  • the level information 132c, 134c, 136c, and 138c each include characters indicating the type of aerosol and a numerical value indicating the management level Ci, similarly to the composite image 110 shown in FIG. 16.
  • FIG. 19 is a diagram showing another example of the display on the display screen 82 of the tablet terminal 80 according to the present embodiment. As shown in FIG. 19, on the display screen 82, a composite image 140 is displayed.
  • the composite image 140 differs from the composite image 130 shown in FIG. 18 in that an aerosol image 148 is composited instead of the aerosol image 138.
  • the aerosol image 148 includes an outline 148a, distance information 148b, and level information 148c.
  • the contour 148a, the distance information 148b, and the level information 148c are the same as the contour 138a, the distance information 138b, and the level information 138c shown in FIG. 18, respectively.
  • the level information 148c of the aerosol image 148 indicates a management level Ci of 3. Since this management level Ci exceeds the threshold, the display screen 82 displays a caution image 141 for calling the user's attention.
  • the attention image 141 is, for example, a character calling for attention, but is not limited thereto.
  • the attention image 141 may be, for example, a predetermined figure.
  • the display mode is not particularly limited as long as it can attract the user's attention.
  • the entire composite image 140 displayed on the display screen 82 may be made to blink, or its color tone may be changed.
  • preventive advice may be displayed on the display screen 82 in addition to or instead of the caution image 141.
  • the preventive advice is displayed, for example, as character information.
  • Alternatively, a URL (Uniform Resource Locator) leading to the preventive advice, or a QR code (registered trademark) for accessing it, may be displayed.
  • FIG. 20 is a diagram showing another example of the display on the display screen 82 of the tablet terminal 80 according to the present embodiment. As shown in FIG. 20, on the display screen 82, a composite image 200 is displayed.
  • the composite image 200 shows a three-dimensional model including the space 95 and a contour representing the range in which at least one type of aerosol exists. Specifically, the composite image 200 is a pseudo three-dimensional image whose viewpoint can be changed.
  • the composite image 200 is an image in which the captured image 201 and the aerosol image 202 are composited.
  • the aerosol image 202 includes an outline 202a and level information 202c.
  • the composite image 200 when the space 95 is viewed in the horizontal direction is displayed on the display screen 82, as shown in part (a) of FIG. 20.
  • the display screen 82 also displays the composite image 200 when the space 95 is viewed from obliquely above, as shown in part (b) of FIG. 20.
  • the viewpoint can be freely changed.
  • the composite image 200 may be displayed so as to be freely enlarged and reduced.
  • FIG. 21 is a diagram showing another example of the display on the display screen 82 of the tablet terminal 80 according to the present embodiment. Portions (a) to (e) of FIG. 21 each show a time change of the display on the display screen 82.
  • the composite image 300 displayed on the display screen 82 is switched sequentially, for example, every one to several seconds.
  • the composite image 300 is an image in which the captured image 301 and a plurality of aerosol images 312, 322, 332, and 342 are composited.
  • Each of the plurality of aerosol images 312, 322, 332, and 342 corresponds to a distance from the reference position.
  • distance information 302 is displayed on the display screen 82.
  • the distance information 302 represents the distance in the depth direction by a numerical value.
  • the aerosol images 312, 322, 332, and 342 represent aerosols at distances of 0.8 m, 1.1 m, 1.4 m, and 1.7 m, respectively.
  • the aerosol image 312 includes an outline 312a and level information 312c.
  • the aerosol image 322 includes an outline 322a and level information 322c.
  • the aerosol image 332 includes an outline 332a and level information 332c.
  • the aerosol image 342 includes an outline 342a and level information 342c.
  • the contours 312a, 322a, 332a, and 342a each represent the aerosol presence range at the corresponding distance.
  • the level information 312c, 322c, 332c, and 342c indicates the concentration of the aerosol at the corresponding distance.
  • the level information 312c, 322c, 332c, and 342c each represent the maximum value of the concentration over the coordinates within the outline of the aerosol at the corresponding distance. As shown in part (d) of FIG. 21, the management level Ci of the aerosol, that is, the concentration, is highest when the distance is 1.4 m.
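  • A sketch of how such a time-switched display could be assembled, binning detected aerosol points into depth slices and cycling one frame per slice (the 0.3 m slice spacing follows the 0.8/1.1/1.4/1.7 m example; everything else is an assumption):

    def build_depth_slices(points, distances=(0.8, 1.1, 1.4, 1.7), width=0.3):
        # points: list of (distance_m, (u, v), density) tuples already
        # projected into image coordinates. Each returned slice would be
        # rendered as one frame, switched every one to several seconds,
        # with the level information showing the slice's peak concentration.
        slices = []
        for d in distances:
            members = [p for p in points if abs(p[0] - d) <= width / 2]
            peak = max((p[2] for p in members), default=0.0)
            slices.append({"distance": d, "points": members, "peak": peak})
        return slices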
  • the captured image 301 is a still image, but may change with time. That is, the captured image 301 may be a moving image.
  • In the above description, the first sensor 30, the second sensor 40, and the third sensor 50 have each been described as an autonomous mobile sensor, but the present disclosure is not limited to this.
  • At least one of the first sensor 30, the second sensor 40, and the third sensor 50 may be a stationary sensor device fixed at a predetermined position in the space 95.
  • the predetermined position is, for example, a ceiling, a floor, a wall, or the like that forms the space 95.
  • the value of the density itself may be displayed as a numerical value instead of the management level.
  • the line type of the outline may be different. For example, pollen may be represented by a solid outline, and dust may be represented by a broken outline.
  • the non-contact sensing system 10 may not include the camera 20.
  • a photographed image of the space 95 may be stored in the memory 66 of the computer 60 in advance.
  • At least one of the first sensor 30, the second sensor 40, and the third sensor 50 may be a contact-type sensor.
  • the communication method between the devices described in the above embodiment is not particularly limited.
  • the wireless communication method is, for example, short-range wireless communication such as ZigBee (registered trademark) or Bluetooth (registered trademark), or a wireless LAN (Local Area Network).
  • the wireless communication method may be communication via a wide area communication network such as the Internet. Wired communication may be performed between the devices instead of wireless communication.
  • the wired communication is power line communication (PLC) or communication using a wired LAN.
  • another processing unit may execute a process executed by a specific processing unit. Further, the order of the plurality of processes may be changed, or the plurality of processes may be executed in parallel.
  • the distribution of the components included in the non-contact sensing system 10 to a plurality of devices is an example. For example, components provided in one device may be provided in another device. Further, the non-contact sensing system 10 may be realized as a single device.
  • FIG. 22 is a diagram showing a tablet terminal 480 integrally provided with the non-contact sensing system 10 according to the embodiment.
  • the tablet terminal 480 is a plate-like device.
  • Parts (a) and (b) of FIG. 22 are plan views showing one surface and the other surface of the tablet terminal 480, respectively.
  • a display screen 482 is provided on one surface of the tablet terminal 480.
  • the camera 20, the light source 32, and the photodetector 34 are provided on the other surface of the tablet terminal 480.
  • the tablet terminal 480 includes the processor 64 and the memory 66 of the computer 60 in the embodiment.
  • the tablet terminal 480 may be configured such that the display screen 482 displaying the composite image 481, the camera 20, the sensor device, and the computer 60 are integrated.
  • the processing performed by the server device 70 may be performed by the computer 60 or the tablet terminal 80.
  • the processing performed by the computer 60 may be performed by the server device 70 or the tablet terminal 80.
  • control unit 84 of the tablet terminal 80 may generate a composite image.
  • the control unit 84 may perform the processing of converting the captured image data into a 3D database illustrated in FIG. 11 and the processing of converting the sensor data into a 3D database illustrated in FIG. 12.
  • the control unit 84 may generate the 3D database (S26), generate the level distribution (S28), and generate the composite image (S32) shown in FIG. 10.
  • the processing described in the above embodiment may be realized by centralized processing using a single device or system, or may be realized by distributed processing using a plurality of devices.
  • the number of processors that execute the program may be one or more. That is, centralized processing or distributed processing may be performed.
  • all or a part of the components, such as the control unit, may be configured by dedicated hardware, or may be realized by executing a software program suitable for each component.
  • Each component may be realized by a program execution unit, such as a CPU (Central Processing Unit) or a processor, reading and executing a software program recorded on a recording medium such as an HDD (Hard Disk Drive) or a semiconductor memory.
  • the components such as the control unit may be configured by one or a plurality of electronic circuits.
  • Each of the one or more electronic circuits may be a general-purpose circuit or a dedicated circuit.
  • the one or more electronic circuits may include, for example, a semiconductor device, an integrated circuit (IC), or a large scale integration (LSI).
  • the IC or LSI may be integrated on one chip, or may be integrated on a plurality of chips.
  • Although the terms IC and LSI are used here, the term varies depending on the degree of integration, and the circuit may also be referred to as a system LSI, VLSI (Very Large Scale Integration), or ULSI (Ultra Large Scale Integration).
  • an FPGA (Field Programmable Gate Array) programmed after the manufacture of the LSI can be used for the same purpose.
  • general or specific aspects of the present disclosure may be realized by a system, an apparatus, a method, an integrated circuit, or a computer program.
  • the present invention may be implemented by a computer-readable non-transitory recording medium such as an optical disk, an HDD, or a semiconductor memory in which the computer program is stored.
  • the present invention may be realized by any combination of a system, an apparatus, a method, an integrated circuit, a computer program, and a recording medium.
  • the present disclosure can be used as a display device or the like that can accurately indicate the position of an aerosol, and can be used, for example, for air-conditioning control or control of space purification processing.
  • Reference signs: 10 non-contact sensing system; 20 camera; 30 first sensor; 32, 42, 52 light source; 34, 44, 54 photodetector; 36, 46, 56 signal processing circuit; 37, 47, 57 position information acquisition unit; 38, 48, 58 density information acquisition unit; 40 second sensor; 50 third sensor; 60 computer; 62 communication interface; 64 processor; 66 memory; 70 server device; 80, 480 tablet terminal; 82, 482 display screen; 84 control unit; 90 first object; 90a, 90b contour; 92 second object; 94 third object; 95 space; 96 floor surface; 100, 110, 120, 130, 140, 200, 300, 481 composite image; 101, 201, 301 captured image; 102, 112, 114, 122, 124, 126, 128, 132, 134, 136, 138, 148, 202, 312, 322, 332, 342 aerosol image; 102a, 112a, 114a, 122a, 124a, 126a, 128a, 132a, 134a, 136a, 138a, 148a, 202a, 312a,

Abstract

A display device according to one embodiment of the present disclosure is provided with a display screen, and a control unit which causes the display screen to display a combined image in which a first image obtained by imaging a space using a camera is combined with a second image representing at least one type of aerosol present in the space. The position of the at least one type of aerosol in the depth direction of the first image is reflected in the second image.

Description

Display device, image processing device, and control method
The present disclosure relates to a display device, an image processing device, and a control method.
Conventionally, terminal devices that visualize and display substances floating in the air, such as pollen or dust, that is, aerosols, are known. For example, Patent Literatures 1 and 2 disclose such terminal devices.
JP 2014-206291 A; International Publication No. WO 2016/181854
However, the above conventional technology has the problem that the position of the aerosol cannot be accurately presented.
Accordingly, the present disclosure provides a display device, an image processing device, and a control method capable of accurately presenting the position of an aerosol.
A display device according to one embodiment of the present disclosure includes a display screen and a control unit that causes the display screen to display a composite image in which a first image obtained by imaging a space with a camera is combined with a second image representing at least one type of aerosol present in the space. The position of the at least one type of aerosol in the depth direction of the first image is reflected in the second image.
An image processing device according to one embodiment of the present disclosure includes an acquisition circuit that acquires three-dimensional coordinate data representing the position, within a space, of at least one type of aerosol present in the space, and a processor. Based on the three-dimensional coordinate data, the processor generates a composite image in which a first image obtained by imaging the space with a camera is combined with a second image representing the at least one type of aerosol present in the space. The position of the at least one type of aerosol in the depth direction of the first image is reflected in the second image.
A control method according to one embodiment of the present disclosure is a control method for a system that includes a display device and a sensor, the sensor including a light source that emits irradiation light toward at least one type of object in a space and a photodetector that detects return light from the at least one type of object, and outputting data representing the result of the photodetector detecting the return light. The control method includes: acquiring the data from the sensor; generating, based on the data, three-dimensional coordinate data representing the position of the at least one type of object in the space; generating, based on the three-dimensional coordinate data, a composite image in which a first image obtained by imaging the space with a camera is combined with a second image that represents the at least one type of object present in the space and reflects the position of the at least one type of object in the depth direction of the first image; and causing the display device to display the composite image.
One embodiment of the present disclosure can also be realized as a program that causes a computer to execute the above control method, or as a non-transitory computer-readable recording medium storing the program.
According to the present disclosure, the position of an aerosol can be presented with high accuracy.
FIG. 1 is a top view illustrating a space to which the non-contact sensing system according to the embodiment is applied.
FIG. 2 is a block diagram illustrating a configuration of the non-contact sensing system according to the embodiment.
FIG. 3 is a diagram illustrating an example of the sensor device according to the embodiment.
FIG. 4 is a diagram illustrating an example of sensor data output from the sensor device according to the embodiment.
FIG. 5 is a block diagram showing conditional expressions for determining a management level in the non-contact sensing system according to the embodiment.
FIG. 6 is a diagram illustrating an example of a reference value database indicating reference values for each substance.
FIG. 7 is a diagram for explaining a method of determining an aerosol contour by the non-contact sensing system according to the embodiment.
FIG. 8 is a diagram illustrating representative values of the management level for each object in the space, obtained by the non-contact sensing system according to the embodiment.
FIG. 9 is a diagram illustrating a display example on the display screen of the display device according to the embodiment.
FIG. 10 is a sequence diagram showing an operation of the non-contact sensing system according to the embodiment.
FIG. 11 is a flowchart illustrating the process of converting captured image data into a 3D database in the operation of the non-contact sensing system according to the embodiment.
FIG. 12 is a flowchart illustrating the process of converting sensor data into a 3D database in the operation of the non-contact sensing system according to the embodiment.
FIG. 13 is a diagram illustrating an example of a 3D database generated by the non-contact sensing system according to the embodiment.
FIG. 14 is a flowchart illustrating the level distribution generation process in the operation of the non-contact sensing system according to the embodiment.
FIG. 15 is a flowchart illustrating the process of generating auxiliary information in the operation of the non-contact sensing system according to the embodiment.
FIG. 16 is a diagram showing another example of the display on the display screen of the display device according to the embodiment.
FIG. 17 is a diagram showing another example of the display on the display screen of the display device according to the embodiment.
FIG. 18 is a diagram showing another example of the display on the display screen of the display device according to the embodiment.
FIG. 19 is a diagram showing another example of the display on the display screen of the display device according to the embodiment.
FIG. 20 is a diagram showing another example of the display on the display screen of the display device according to the embodiment.
FIG. 21 is a diagram showing another example of the display on the display screen of the display device according to the embodiment.
FIG. 22 is a diagram illustrating a display device integrally including the non-contact sensing system according to the embodiment.
(Summary of the present disclosure)
A display device according to one embodiment of the present disclosure includes a display screen and a control unit that causes the display screen to display a composite image in which a first image obtained by imaging a space with a camera is combined with a second image representing at least one type of aerosol present in the space. The position of the at least one type of aerosol in the depth direction of the first image is reflected in the second image.
In this way, the position of the aerosol in the depth direction is reflected in the second image representing the aerosol. The display screen therefore displays not only the position of the aerosol in the vertical and horizontal directions represented by the first image, but also its position in the depth direction with respect to the first image. Thereby, the position of the aerosol can be presented with high accuracy.
For example, the first image may represent a two-dimensional space, and the control unit may further generate the second image by projecting, onto the two-dimensional space, three-dimensional coordinate data representing the position of the at least one type of aerosol in the space, and may generate the composite image by combining the first image and the second image.
Thereby, the position of the aerosol within the two-dimensional space can be accurately presented.
For example, the control unit may further acquire the three-dimensional coordinate data from a sensor that acquires the position of the at least one type of aerosol in the space, convert the first image into a pseudo three-dimensional image, associate the three-dimensional image with the three-dimensional coordinate data, and generate the second image by projecting the three-dimensional coordinate data onto the two-dimensional space.
Thereby, the position of the aerosol within the pseudo three-dimensional space can be accurately presented.
For example, the second image may include a contour representing the range in which the at least one type of aerosol exists, and distance information representing the distance from a reference position in the space to a representative position within the contour.
Thereby, the range in which the aerosol exists can be displayed on the display screen, and the position of the aerosol in the depth direction can be represented by the representative position. The position of the aerosol can therefore be displayed simply, that is, in a display mode that is easy for a user viewing the display screen to understand.
For example, the representative position may be the center of gravity of the concentration distribution of the at least one type of aerosol within the contour.
Thereby, the representative position can easily be determined by a calculation based on the concentration distribution. In addition, since the concentration is often highest near the center of the aerosol, using the center of gravity of the concentration distribution as the representative position allows the position of the aerosol to be presented with high accuracy.
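Expressed as a formula (the description states the idea in prose; the weighted-mean form below is the standard definition of such a center of gravity): for coordinates r_i with concentrations D_i sampled inside the contour,

    \bar{r} = \frac{\sum_i D_i \, r_i}{\sum_i D_i}

so the representative position is pulled toward the densest part of the aerosol.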
For example, the distance information may be a numerical value indicating the distance.
Since the distance can then be displayed as a numerical value, the position of the aerosol can be displayed in a display mode that is easy for the user to understand.
For example, the distance information may be a color given within the contour, predetermined according to the distance.
Since distances can then be distinguished by color, the position of the aerosol can be displayed in a display mode that is easy for the user to understand.
For example, the composite image may show a three-dimensional model including the space and a contour representing the range in which the at least one type of aerosol exists.
Since the composite image is thereby converted into a three-dimensional model, images from a plurality of viewpoints, for example, can be displayed on the display screen. The position of the aerosol can therefore be displayed in a display mode that is even easier for the user to understand.
For example, the second image may be a moving image in which a plurality of images are switched over time, and each of the plurality of images may correspond to a distance from a reference position in the space and include a contour representing the range in which the at least one type of aerosol exists at the corresponding distance.
Since images representing the aerosol at each distance are thereby displayed sequentially, the position of the aerosol can be presented with high accuracy.
For example, the second image may further reflect the concentration of the at least one type of aerosol.
Thereby, not only the position of the aerosol but also its concentration can be presented. Since the amount and the types of information presented to the user increase, this can assist the user in deciding whether to take countermeasures against the aerosol, such as ventilating the room.
For example, the second image may include level information representing the concentration level of the at least one type of aerosol.
By classifying the aerosol concentration by level, the concentration can be displayed simply, that is, in a display mode that is easy for a user viewing the display screen to understand.
For example, the at least one type of aerosol may include a plurality of types of aerosols, and the second image may represent the plurality of types of aerosols in different display modes.
Thereby, even when a plurality of types of aerosols are present, each type can be displayed in a different display mode.
For example, when the concentration of the at least one type of aerosol exceeds a threshold, the control unit may further display, on the display screen, an image for calling the user's attention.
Thereby, the user can be prompted to take countermeasures against the aerosol.
An image processing device according to one embodiment of the present disclosure includes an acquisition circuit that acquires three-dimensional coordinate data representing the position, within a space, of at least one type of aerosol present in the space, and a processor. Based on the three-dimensional coordinate data, the processor generates a composite image in which a first image obtained by imaging the space with a camera is combined with a second image representing the at least one type of aerosol present in the space. The position of the at least one type of aerosol in the depth direction of the first image is reflected in the second image.
In this way, the position of the aerosol in the depth direction is reflected in the second image representing the aerosol. The composite image therefore shows not only the position of the aerosol in the vertical and horizontal directions represented by the first image, but also its position in the depth direction with respect to the first image. Thus, when the composite image generated by the image processing device according to this aspect is displayed on a display screen, the position of the aerosol can be presented with high accuracy.
A control method according to one embodiment of the present disclosure is a control method for a system that includes a display device and a sensor, the sensor including a light source that emits irradiation light toward at least one type of object in a space and a photodetector that detects return light from the at least one type of object, and outputting data representing the result of the photodetector detecting the return light. The control method includes: acquiring the data from the sensor; generating, based on the data, three-dimensional coordinate data representing the position of the at least one type of object in the space; generating, based on the three-dimensional coordinate data, a composite image in which a first image obtained by imaging the space with a camera is combined with a second image that represents the at least one type of object present in the space and reflects the position of the at least one type of object in the depth direction of the first image; and causing the display device to display the composite image.
In this way, the position of the object in the depth direction is reflected in the second image representing the object. The display device therefore displays not only the position of the object in the vertical and horizontal directions represented by the first image, but also its position in the depth direction with respect to the first image. Thereby, the position of the object can be presented with high accuracy.
For example, the return light may be fluorescence emitted when the at least one type of object is excited by the irradiation light. In the generation of the composite image, the type of the at least one type of object may further be determined by analyzing the fluorescence, and the type may be reflected in the second image.
Thereby, not only the position of the object but also its type can be presented. Since the amount and the types of information presented to the user increase, this can assist the user in deciding whether to take countermeasures against the object, such as ventilating the room.
For example, the irradiation light may include a predetermined polarization component. In the generation of the composite image, the type of the at least one type of object may further be determined based on the degree of depolarization of the polarization component contained in the return light, and the type may be reflected in the second image.
Thereby, not only the position of the object but also its type can be presented. Since the amount and the types of information presented to the user increase, this can assist the user in deciding whether to take countermeasures against the object, such as ventilation or sterilization.
For example, the three-dimensional coordinate data may be generated using the relative positional relationship between the sensor and the at least one type of object, calculated based on the difference between the time at which the irradiation light is emitted and the time at which the return light is detected, together with the coordinates of the sensor in the space.
Thereby, the detection of an object and the measurement of the distance to the detected object can be performed with the same light source and photodetector, which simplifies the configuration of the sensor device.
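As a worked form of this time-of-flight relationship (stated in prose above; the halving accounts for the round trip), with emission time t_0, detection time t_1, and the speed of light c:

    r = \frac{c \, (t_1 - t_0)}{2}

For example, a round-trip delay of 10 ns corresponds to a distance of about 1.5 m.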
For example, the at least one type of object may be an organic substance adhering to an object present in the space.
Thereby, a substance containing organic matter, such as vomit or pollen, can be detected and its position presented with high accuracy.
For example, the at least one type of object may be an aerosol present in the space.
Thereby, a substance floating in the air, such as pollen or dust, can be detected and its position presented with high accuracy.
For example, the return light may be backscattered light generated when the irradiation light is scattered by the at least one type of object.
Thereby, the aerosol can be detected with high accuracy.
A computer-readable recording medium according to one embodiment of the present disclosure stores a program for controlling a system that includes a display device and a sensor, the sensor including a light source that emits irradiation light toward at least one type of object in a space and a photodetector that detects return light from the at least one type of object, and outputting data representing the result of the photodetector detecting the return light. When the program is executed by the computer, the following are executed: acquiring the data from the sensor; generating, based on the data, three-dimensional coordinate data representing the position of the at least one type of object in the space; generating, based on the three-dimensional coordinate data, a composite image in which a first image obtained by imaging the space with a camera is combined with a second image that represents the at least one type of object present in the space and reflects the position of the at least one type of object in the depth direction of the first image; and causing the display device to display the composite image.
A program according to one embodiment of the present disclosure is a computer-executable program for controlling such a system, and causes a computer to execute the same steps: acquiring the data from the sensor, generating the three-dimensional coordinate data, generating the composite image, and causing the display device to display the composite image.
In the present disclosure, all or a part of a circuit, unit, device, member, or section, or all or a part of the functional blocks in the block diagrams, may be implemented by one or more electronic circuits including, for example, a semiconductor device, a semiconductor integrated circuit (IC), or an LSI (large scale integration). The LSI or IC may be integrated on a single chip, or may be configured by combining a plurality of chips. For example, functional blocks other than storage elements may be integrated on a single chip. Although called an LSI or IC here, the name changes depending on the degree of integration, and the circuit may also be called a system LSI, a VLSI (very large scale integration), or a ULSI (ultra large scale integration). A field programmable gate array (FPGA) programmed after the manufacture of the LSI, or a reconfigurable logic device in which the connection relationships inside the LSI can be reconfigured or circuit sections inside the LSI can be set up, can be used for the same purpose.
Furthermore, all or a part of the functions or operations of a circuit, unit, device, member, or section can be executed by software processing. In this case, the software is recorded on one or more non-transitory recording media such as a ROM, an optical disc, or a hard disk drive, and when the software is executed by a processor, the functions specified by the software are executed by the processor and peripheral devices. The system or device may include the one or more non-transitory recording media on which the software is recorded, the processor, and any required hardware devices, such as an interface.
 Hereinafter, embodiments will be specifically described with reference to the drawings.
 Note that each of the embodiments described below shows a comprehensive or specific example. The numerical values, shapes, materials, constituent elements, arrangement positions and connection forms of the constituent elements, steps, order of the steps, and the like shown in the following embodiments are merely examples and are not intended to limit the present disclosure. Among the constituent elements in the following embodiments, constituent elements that are not recited in the independent claims are described as optional constituent elements.
 Each figure is a schematic diagram and is not necessarily drawn exactly. Accordingly, for example, the scales of the figures do not necessarily match. In each figure, substantially identical configurations are denoted by the same reference numerals, and redundant description is omitted or simplified.
 (Embodiment)
 [1. Overview]
 The non-contact sensing system according to the present embodiment captures an image of a space and detects, in a non-contact manner, an object present in the space. The non-contact sensing system displays, on a display screen, a composite image in which a first image showing the captured space and a second image representing the detected object are combined. The second image reflects the position of the detected object in the depth direction of the first image.
 First, a space to which the non-contact sensing system according to the present embodiment is applied will be described with reference to FIG. 1. FIG. 1 is a top view showing a space 95 to which the non-contact sensing system according to the present embodiment is applied.
 The space 95 is, for example, a room in a building such as a residence, an office, a nursing facility, or a hospital. The space 95 is, for example, a space partitioned by walls, windows, doors, a floor, a ceiling, and the like, and is a closed space, but is not limited thereto. The space 95 may be an open outdoor space. The space 95 may also be the interior space of a moving body such as a bus or an airplane.
 As shown in FIG. 1, a first object 90 to be detected by the non-contact sensing system is present in the space 95. Specifically, the first object 90 is an aerosol floating in the space 95. The aerosol includes dust particles, suspended particulate matter such as PM2.5, biological particles such as pollen, or fine water droplets. The biological particles also include mold spores, mites, and the like floating in the air. The aerosol may also include substances dynamically generated from the human body, such as by coughing or sneezing. In addition, the aerosol may include substances subject to air-quality management, such as carbon dioxide (CO2).
 Note that the object to be detected is not limited to an aerosol. Specifically, the object may be organic soiling. The organic soiling is, for example, food or vomit adhering to an object such as a wall, floor, or piece of furniture that forms the space 95, and need not be floating in the air.
 [2. Configuration]
 FIG. 2 is a block diagram showing the configuration of the non-contact sensing system 10 according to the present embodiment. As shown in FIG. 2, the non-contact sensing system 10 includes a camera 20, a first sensor 30, a second sensor 40, a third sensor 50, a computer 60, a server device 70, and a tablet terminal 80.
 The configuration of the non-contact sensing system 10 is not limited to the example shown in FIG. 2. For example, the non-contact sensing system 10 may include only one of the first sensor 30, the second sensor 40, and the third sensor 50. That is, the number of sensor devices included in the non-contact sensing system 10 may be one, or may be more than one. The non-contact sensing system 10 also need not include the computer 60 and the server device 70. Further, for example, the non-contact sensing system 10 may include a display connected to the computer 60 instead of the tablet terminal 80.
 Although not shown in FIG. 2, the camera 20, the first sensor 30, the second sensor 40, the third sensor 50, the server device 70, and the tablet terminal 80 each include a communication interface, via which various data and information are transmitted and received.
 The components of the non-contact sensing system 10 are described in detail below with reference to FIG. 2 as appropriate.
 [2-1. Camera]
 The camera 20 generates a captured image by imaging the space 95. The captured image is an example of a first image generated by the camera 20 imaging the space 95. The camera 20 is, for example, a fixed-point camera fixed at a position from which the space 95 can be imaged, but is not limited thereto. For example, the camera 20 may be a movable camera in which at least one of the imaging direction and the imaging position is variable. The camera 20 may generate a plurality of captured images by imaging the space 95 from a plurality of viewpoints. The camera 20 transmits the captured image data obtained by the imaging to the computer 60. The camera 20 may be a visible-light camera that images the space as perceived by the human eye.
 [2-2. Sensor devices]
 The first sensor 30, the second sensor 40, and the third sensor 50 are each an example of a sensor device that detects its detection target in a non-contact manner. That is, the non-contact sensing system 10 according to the present embodiment includes three sensor devices corresponding to the types of objects to be detected. For example, the first object 90 shown in FIG. 2 is pollen detected by the first sensor 30. The second object 92 is dust detected by the second sensor 40. The third object 94 is organic soiling detected by the third sensor 50. The first sensor 30, the second sensor 40, and the third sensor 50 are each a non-contact sensor device using, for example, LIDAR (Laser Imaging Detection and Ranging).
 FIG. 3 is a diagram showing the first sensor 30, which is an example of the sensor device according to the present embodiment. In the present embodiment, the first sensor 30 is an autonomously moving sensor device. The second sensor 40 and the third sensor 50 have the same configuration as the first sensor 30.
 As shown in FIG. 3, the first sensor 30 can travel on the floor surface 96 of the space 95. After emitting irradiation light L1 at a predetermined position on the floor surface 96, the first sensor 30 receives return light L2 returning from the first object 90. The first sensor 30 measures the distance to the first object 90 based on the time difference between the emission of the irradiation light L1 and the reception of the return light L2. The first sensor 30 also measures the concentration of the first object 90 based on the intensity of the return light L2.
 Specifically, as shown in FIG. 2, the first sensor 30 includes a light source 32, a photodetector 34, and a signal processing circuit 36.
 The light source 32 emits the irradiation light L1 toward the first object 90 in the space 95. The light source 32 is, for example, an LED (light emitting diode) or a laser element. The irradiation light L1 emitted from the light source 32 contains a wavelength component for exciting the first object 90. Specifically, the irradiation light L1 is light having a peak wavelength in the range from 220 nm to 550 nm. The irradiation light L1 is, for example, pulsed light.
 The photodetector 34 detects the return light L2 from the first object 90. The return light L2 detected by the photodetector 34 is fluorescence emitted by the first object 90 when it is excited by the irradiation light L1 from the light source 32. The fluorescence contains more long-wavelength components than the irradiation light L1. The photodetector 34 is, for example, a photodiode having light-receiving sensitivity at the wavelength components of the fluorescence. The photodetector 34 outputs an output signal corresponding to the intensity of the received fluorescence to the signal processing circuit 36. The output signal is, for example, an electrical signal whose signal intensity increases as the intensity of the received fluorescence increases.
 The signal processing circuit 36 processes the output signal from the photodetector 34 to determine the distance to the first object 90 and the concentration of the first object 90. As shown in FIG. 2, the signal processing circuit 36 includes a position information acquisition unit 37 and a concentration information acquisition unit 38.
 The position information acquisition unit 37 acquires position information indicating the three-dimensional position of the first object 90 in the space 95. The position information includes the distance and direction to the first object 90. For example, the position information acquisition unit 37 calculates the distance by a TOF (time of flight) method. Specifically, the position information acquisition unit 37 acquires distance information based on the time difference between the emission of the irradiation light L1 by the light source 32 and the detection of the fluorescence by the photodetector 34. The distance information includes the distance ri to the first object 90, as well as the horizontal angle φi and the vertical angle θi indicating the direction in which the first object 90 was detected. The direction in which the first object 90 was detected is the direction in which the light source 32 emitted the irradiation light L1.
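 As an illustrative aid, and not part of the original disclosure, the TOF distance calculation described above can be sketched in Python as follows. The function and variable names are assumptions; the embodiment does not specify an implementation.

 # Minimal sketch of the TOF distance calculation: the round-trip time of
 # the light, multiplied by the speed of light and halved (out and back),
 # gives the distance ri to the object.
 SPEED_OF_LIGHT = 299_792_458.0  # m/s

 def tof_distance(emit_time_s: float, detect_time_s: float) -> float:
     """Distance to the object from the emission/detection time difference."""
     round_trip = detect_time_s - emit_time_s
     return SPEED_OF_LIGHT * round_trip / 2.0

 # Example: a 20 ns round trip corresponds to roughly 3 m.
 print(tof_distance(0.0, 20e-9))  # ~2.998 m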
 The concentration information acquisition unit 38 acquires concentration information indicating the concentration of the first object 90. Specifically, the concentration information acquisition unit 38 determines the concentration of the first object 90 according to the signal intensity of the output signal. For example, when the signal intensity is Si, the concentration Di is calculated based on the following equation (1).
 (1) Di = α × Si
 Here, α is a constant. The subscript "i" in Di and Si, as well as in ri, φi, and θi described above, indicates the data number of the sensor data. Note that the method by which the concentration information acquisition unit 38 calculates the concentration Di is not limited to this. For example, the concentration information acquisition unit 38 may use a signal obtained by removing a noise component from the output signal, instead of the output signal itself.
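 For illustration only, equation (1) and the noise-removal variant mentioned above can be sketched as follows. The value of α and the optional noise floor are assumptions; in practice they are device-specific.

 ALPHA = 1.0  # the constant α in equation (1); placeholder value

 def concentration(signal_intensity: float, noise_floor: float = 0.0) -> float:
     """Equation (1): Di = α × Si, optionally after removing a noise component."""
     si = max(signal_intensity - noise_floor, 0.0)
     return ALPHA * si

 print(concentration(120.0))        # Di from the raw output signal
 print(concentration(120.0, 15.0))  # Di after subtracting an assumed noise floor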
 The signal processing circuit 36 may also determine the type of the first object 90 by analyzing the fluorescence. Specifically, the signal processing circuit 36 determines the type of the first object 90 based on the combination of the wavelength of the irradiation light and the wavelength of the fluorescence. For example, in the first sensor 30, the light source 32 may emit a plurality of irradiation lights corresponding to a plurality of excitation wavelengths, and the photodetector 34 may receive a plurality of fluorescences corresponding to a plurality of reception wavelengths. By generating a three-dimensional matrix of excitation wavelength, reception wavelength, and received intensity, a so-called fluorescence fingerprint, the signal processing circuit 36 can accurately determine the type of the first object 90 that emitted the fluorescence.
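 A minimal sketch of fingerprint-based type discrimination follows, again not part of the disclosure: the fingerprint is held as a 2-D array of intensities indexed by excitation and reception wavelength (the third dimension of the matrix), and a measurement is matched to the nearest normalized reference. The reference values and the nearest-neighbor matching rule are assumptions.

 import numpy as np

 # Hypothetical reference fingerprints: rows = excitation wavelengths,
 # columns = reception wavelengths, values = relative fluorescence intensity.
 REFERENCES = {
     "pollen": np.array([[0.9, 0.2], [0.4, 0.1]]),
     "mold":   np.array([[0.1, 0.8], [0.2, 0.7]]),
 }

 def classify_fingerprint(measured: np.ndarray) -> str:
     """Return the reference whose normalized fingerprint is closest (L2 norm)."""
     m = measured / np.linalg.norm(measured)
     def dist(ref: np.ndarray) -> float:
         return float(np.linalg.norm(m - ref / np.linalg.norm(ref)))
     return min(REFERENCES, key=lambda name: dist(REFERENCES[name]))

 print(classify_fingerprint(np.array([[0.85, 0.25], [0.35, 0.15]])))  # "pollen"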
 The signal processing circuit 36 outputs concentration information indicating the determined concentration Di, together with the position information, to the computer 60 as sensor data.
 The first sensor 30 and the computer 60 are connected, for example, wirelessly so that data can be transmitted and received. The first sensor 30 performs wireless communication based on a wireless communication standard such as Wi-Fi (registered trademark), Bluetooth (registered trademark), or ZigBee (registered trademark). The first sensor 30 and the computer 60 may alternatively be connected by wire.
 The second sensor 40 detects the second object 92 by emitting irradiation light toward the second object 92 and receiving return light from the second object 92. In the present embodiment, the second object 92 is a substance that does not emit fluorescence, for example, dust.
 The second sensor 40 includes a light source 42, a photodetector 44, and a signal processing circuit 46, which correspond, respectively, to the light source 32, the photodetector 34, and the signal processing circuit 36 of the first sensor 30.
 The light source 42 emits irradiation light toward the second object 92. The light source 42 is, for example, an LED or a laser element. The irradiation light emitted from the light source 42 does not need to excite the second object 92. Therefore, a wavelength component selected from a wide wavelength band can be used as the wavelength component of the irradiation light. Specifically, the irradiation light emitted from the light source 42 is light having a peak wavelength in the range from 300 nm to 1300 nm. That is, the irradiation light may be ultraviolet light, visible light, or near-infrared light. The irradiation light is, for example, pulsed light.
 The photodetector 44 detects the return light from the second object 92. The return light detected by the photodetector 44 is backscattered light generated when the irradiation light emitted from the light source 42 is scattered by the second object 92. The backscattered light is, for example, scattered light due to Mie scattering, and has the same wavelength components as the irradiation light. The photodetector 44 is a photodiode having light-receiving sensitivity at the wavelength components of the irradiation light. The photodetector 44 outputs an output signal corresponding to the intensity of the received backscattered light to the signal processing circuit 46. The output signal is, for example, an electrical signal whose signal intensity increases as the intensity of the received backscattered light increases.
 In the present embodiment, the irradiation light emitted from the light source 42 may contain a predetermined polarization component. The signal processing circuit 46 may determine the type of the second object 92 based on the degree of depolarization of the polarization component contained in the return light. The polarization component is, for example, linearly polarized light, but may be circularly or elliptically polarized light. When irradiation light containing a polarization component strikes the second object 92, the degree of depolarization of the backscattered light returning from the second object 92 differs according to the shape of the second object 92.
 Specifically, when the second object 92 is a spherical particle, the polarization state of the backscattered light is preserved; that is, the polarization state of the backscattered light is the same as that of the irradiation light. When the second object 92 is a non-spherical particle, the plane of polarization changes according to the shape of the particle. Therefore, the signal processing circuit 46 can determine the type of the second object 92 based on the degree of depolarization of the backscattered light. For example, the degree of depolarization of yellow sand is about 10%, and the degree of depolarization of pollen is about 1% to about 4%.
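 As an illustrative sketch only: a depolarization degree can be computed from the cross- and parallel-polarized return intensities, and types can be separated by thresholds bracketing the figures quoted above. The exact cut-off values and the two-channel measurement model are assumptions, not part of the disclosure.

 def depolarization_degree(cross_intensity: float, parallel_intensity: float) -> float:
     """Fraction of the return light in the depolarized (cross) channel."""
     total = cross_intensity + parallel_intensity
     return cross_intensity / total if total > 0 else 0.0

 def classify_by_depolarization(delta: float) -> str:
     # Thresholds are placeholders around the quoted values:
     # yellow sand ~10%, pollen ~1% to ~4%, spheres preserve polarization.
     if delta >= 0.08:
         return "yellow sand (strongly non-spherical)"
     if 0.01 <= delta <= 0.04:
         return "pollen"
     if delta < 0.005:
         return "spherical droplet (polarization preserved)"
     return "unclassified"

 print(classify_by_depolarization(depolarization_degree(2.0, 98.0)))  # "pollen"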
 The signal processing circuit 46 processes the output signal from the photodetector 44 to determine the distance to the second object 92 and the concentration of the second object 92. As shown in FIG. 2, the signal processing circuit 46 includes a position information acquisition unit 47 and a concentration information acquisition unit 48. The specific operations for determining the distance and the concentration are the same as those of the signal processing circuit 36 of the first sensor 30.
 The third sensor 50 detects the third object 94 by emitting irradiation light toward the third object 94 and receiving return light from the third object 94. In the present embodiment, the third object 94 is organic soiling that emits fluorescence when irradiated with excitation light.
 The third sensor 50 includes a light source 52, a photodetector 54, and a signal processing circuit 56. The signal processing circuit 56 includes a position information acquisition unit 57 and a concentration information acquisition unit 58.
 The light source 52, the photodetector 54, and the signal processing circuit 56 correspond, respectively, to the light source 32, the photodetector 34, and the signal processing circuit 36 of the first sensor 30. The first sensor 30 and the third sensor 50 differ in the direction in which their light sources emit the irradiation light. For example, whereas the light source 32 emits irradiation light into the air of the space 95, the light source 52 emits irradiation light toward the floor or wall surfaces of the space 95. The operations of the light source 52, the photodetector 54, and the signal processing circuit 56 are otherwise the same as those of the light source 32, the photodetector 34, and the signal processing circuit 36.
 The first sensor 30, the second sensor 40, and the third sensor 50 each detect objects located in the direction in which the irradiation light is emitted. When a plurality of objects are present in the emission direction of the irradiation light, the return light comes back at different times according to the positions of the objects. Therefore, based on the times at which the return light is received, a plurality of objects located in the emission direction can be detected at once. When no object is present in the emission direction, no return light comes back; in that case, it is detected that no object is present on the path of the irradiation light. The first sensor 30, the second sensor 40, and the third sensor 50 each transmit their detection results to the computer 60 as sensor data.
 FIG. 4 is a diagram showing an example of a database containing the sensor data output from the sensor devices according to the present embodiment. The database shown in FIG. 4 is managed by the processor 64 of the computer 60 and stored in the memory 66.
 As shown in FIG. 4, in the database, a substance name Mi, sensor data, and a sensor reference position are associated with each data number No. i. The sensor data includes the concentration Di, the distance ri, the horizontal angle φi, and the vertical angle θi.
 A data number No. i is assigned to each set of sensor data received by the computer 60. The processor 64 assigns the data numbers in ascending order, for example, in the order in which the communication interface 62 receives the sensor data.
 The substance name Mi is information indicating the type of the detection target. In the present embodiment, each sensor device corresponds to one type of object. Therefore, by determining which sensor device transmitted the sensor data received by the communication interface 62, the processor 64 can determine the substance name Mi corresponding to that sensor data. For example, in the example shown in FIG. 4, the sensor data with data number 1 was transmitted from the first sensor 30, which detects pollen.
 The concentration Di is a value calculated based on equation (1) above. The signal processing circuits 36, 46, and 56 of the respective sensor devices each calculate it based on the signal intensity Si.
 The distance ri, the horizontal angle φi, and the vertical angle θi are data indicating the three-dimensional position of the object obtained using LIDAR. Since the position data obtained by LIDAR is expressed in a polar coordinate system, in the present embodiment the computer 60 converts the position data into a three-dimensional orthogonal coordinate system. The details of this coordinate conversion are described later.
 The sensor reference position is, for example, the installation position of whichever of the first sensor 30, the second sensor 40, and the third sensor 50 transmitted the sensor data. When the sensor device is fixed, the sensor reference position does not change. When the sensor device is movable, the sensor reference position is the position of the sensor device at the time the detection processing was performed, specifically, at the time the irradiation light was emitted or the return light was received. The reference direction for the horizontal angle φi and the vertical angle θi transmitted by each sensor, that is, the direction for which φi = 0 and θi = 0, is determined in advance as a direction unified among the sensors.
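 For illustration, one row of the database of FIG. 4 can be modeled by the following record structure. The field names are illustrative assumptions; only the quantities themselves (No. i, Mi, Di, ri, φi, θi, and the reference position) come from the text.

 from dataclasses import dataclass

 @dataclass
 class SensorRecord:
     """One row of the sensor database in FIG. 4 (field names are illustrative)."""
     data_no: int              # No.i, assigned in order of reception
     substance: str            # Mi, e.g. "pollen" or "dust"
     density: float            # Di, from equation (1)
     distance: float           # ri, distance to the object
     horizontal_angle: float   # φi, measured from the unified reference direction
     vertical_angle: float     # θi, measured from the unified reference direction
     sensor_position: tuple    # (x0, y0, z0), the sensor reference position

 record = SensorRecord(1, "pollen", 120.0, 2.5, 0.3, 0.1, (0.0, 0.0, 0.4))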
 [2-3. Computer]
 The computer 60 is an example of an image processing device and, as shown in FIG. 2, includes a communication interface 62, a processor 64, and a memory 66.
 The communication interface 62 transmits and receives data by communicating with each device constituting the non-contact sensing system 10. The communication with each device is, for example, wireless communication based on a wireless communication standard such as Wi-Fi (registered trademark), Bluetooth (registered trademark), or ZigBee (registered trademark), but may be wired communication.
 The communication interface 62 is an example of an acquisition circuit that acquires three-dimensional coordinate data. By communicating with each of the first sensor 30, the second sensor 40, and the third sensor 50, the communication interface 62 acquires sensor data from each of them. The sensor data includes position information, which is an example of three-dimensional coordinate data representing the position of at least one type of object in the space 95. The sensor data further includes concentration information.
 The three-dimensional coordinate data is generated using the relative positional relationship between the sensor device and the object, which is calculated based on the difference between the time at which the irradiation light is emitted and the time at which the return light is detected, and the coordinates of the sensor device in the space 95. The relative positional relationship corresponds to the distance ri shown in FIG. 4. The coordinates of the sensor device in the space 95 correspond to the coordinates (x0, y0, z0) indicating the reference position shown in FIG. 4.
 The communication interface 62 also acquires captured image data from the camera 20, for example, by communicating with the camera 20. The communication interface 62 may transmit a control signal containing an imaging instruction or a sensing instruction to at least one of the camera 20, the first sensor 30, the second sensor 40, and the third sensor 50. The communication interface 62 further transmits level distribution information, which corresponds to the concentration distribution of the objects, to the server device 70 by communicating with the server device 70. By communicating with the tablet terminal 80, the communication interface 62 transmits composite image data to the tablet terminal 80.
 The processor 64 generates a composite image based on the sensor data acquired by the communication interface 62. The composite image is an image in which a captured image representing the space 95 imaged by the camera 20 is combined with an object image. The object image is an example of a second image representing at least one type of object existing in the space 95.
 In the present embodiment, the processor 64 generates the concentration distribution of the objects in the space 95 based on the sensor data. Specifically, the processor 64 represents the space 95 in coordinates of a three-dimensional orthogonal coordinate system and associates a concentration with each coordinate, thereby generating a three-dimensional distribution of the concentration. The x-axis, y-axis, and z-axis shown in FIG. 1 are the three axes of the three-dimensional orthogonal coordinate system. The x-axis and y-axis are the two axes parallel to the floor of the space 95, and the z-axis is the axis perpendicular to the floor. The setting of the three axes is not limited to this example.
 Specifically, the processor 64 generates a level distribution, which is an example of the concentration distribution of the objects. The level distribution is a distribution of management levels Ci determined based on the concentration information. In the present embodiment, the concentration Di is classified into one of a plurality of level values according to its magnitude. The management level Ci is the level value into which the concentration Di indicated by the concentration information is classified. For example, the processor 64 determines the management level Ci based on the conditional expressions shown in FIG. 5. FIG. 5 is a diagram showing the conditional expressions for determining the management level Ci in the non-contact sensing system 10 according to the present embodiment. The conditional expressions are stored, for example, in the memory 66.
 As shown in FIG. 5, the management level Ci is expressed in five steps from "1" to "5". The processor 64 determines the management level Ci based on the relationship between the concentration Di and a reference value Lm. As shown in FIG. 6, the reference value Lm is a value predetermined for each type of object. FIG. 6 is a diagram showing an example of a reference value database indicating the reference value for each substance. The reference value database is stored, for example, in the memory 66. The number of steps of the management level Ci is not limited to five; it may be two, three, or four, or six or more. In the conditional expressions shown in FIG. 5, the values of the coefficients by which the reference value Lm is multiplied (for example, "0.4") are also merely examples.
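 A minimal sketch of this classification follows, for illustration only. FIG. 5 is not reproduced in this text, so apart from the quoted "0.4" coefficient, the breakpoints below are placeholder assumptions standing in for the actual conditional expressions.

 def management_level(density: float, reference: float) -> int:
     """Map a concentration Di to a five-step management level Ci (1..5).

     Only the 0.4 coefficient is quoted in the text; the remaining
     multiples of the reference value Lm are placeholders for FIG. 5.
     """
     breakpoints = [0.4, 0.8, 1.2, 1.6]  # assumed multiples of Lm
     level = 1
     for step, coef in enumerate(breakpoints, start=2):
         if density > coef * reference:
             level = step
     return level

 print(management_level(130.0, 100.0))  # Di = 1.3 × Lm -> level 4 under these placeholders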
 In the present embodiment, the processor 64 further determines the contour of the object based on the generated three-dimensional distribution. The processor 64 also determines a predetermined position within the determined contour as a representative position. The object image includes the determined contour and the representative position.
 For example, the processor 64 determines the contour of the object based on the concentration Di for each coordinate. Specifically, the processor 64 determines the contour of the aerosol existing in the space 95 based on the management level Ci calculated from the concentration Di for each coordinate.
 FIG. 7 is a diagram for explaining the method by which the non-contact sensing system 10 according to the present embodiment determines the contour of an aerosol. For simplicity, FIG. 7 illustrates the determination of a contour within a two-dimensional level distribution defined by the x-axis and y-axis, but the same can be done in three dimensions.
 As shown in FIG. 7, the management level Ci is calculated for each coordinate expressed by an x-coordinate and a y-coordinate. The processor 64, for example, determines a region in which the management level Ci is equal to or greater than a set value, and determines the outline of that region as the contour of the aerosol. For example, when the set value is "2", the processor 64 determines the outline 90a of the region where the management level Ci is "2" or greater as the contour of the aerosol. In FIG. 7, the region where the management level Ci is "2" or greater is shaded with dots. The example shown in FIG. 7 indicates that aerosol was detected at two places in the space.
 The set value for determining the contour may be changeable. For example, when the set value is increased, only the portions where the aerosol concentration is sufficiently high are determined as the aerosol existence range. Conversely, when the set value is decreased, the aerosol existence range is determined so as to include portions where the aerosol concentration is low.
 The processor 64 may also use a plurality of set values and determine a contour for each set value. For example, in the example shown in FIG. 7, a contour 90a corresponding to the set value "2" and a contour 90b corresponding to the set value "3" are determined. The contour 90a is the outermost of the determined contours and corresponds to the contour indicating the aerosol existence range. The contour 90b corresponds to a contour indicating a region of higher aerosol concentration within the existence range. In this way, differences in aerosol concentration within the existence range can be expressed by contours.
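 As an illustrative sketch of this thresholding step (the grid values and array layout are assumptions): the region at or above each set value is extracted as a mask, and the outline of the mask corresponds to the contour; using two set values yields nested regions like 90a and 90b.

 import numpy as np

 def region_mask(levels: np.ndarray, set_value: int) -> np.ndarray:
     """Cells whose management level Ci is at or above the set value;
     the outline of this mask corresponds to the aerosol contour."""
     return levels >= set_value

 levels = np.array([
     [1, 1, 2, 2, 1],
     [1, 2, 3, 3, 1],
     [1, 2, 3, 2, 1],
     [1, 1, 1, 1, 1],
 ])
 outer = region_mask(levels, 2)  # existence range (cf. contour 90a)
 inner = region_mask(levels, 3)  # higher-concentration core (cf. contour 90b)
 print(outer.astype(int))
 print(inner.astype(int))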
 The representative position within a contour is the center of gravity of the aerosol concentration distribution within that contour. Specifically, the processor 64 determines the center of gravity based on the management level Ci for each coordinate existing within the contour. For example, when the coordinates of the center of gravity are (Xc, Yc, Zc), the processor 64 determines the coordinates of the center of gravity based on the following equation (2).
 (2) Xc = Σ(Di × Xi) / Σ(Di)
     Yc = Σ(Di × Yi) / Σ(Di)
     Zc = Σ(Di × Zi) / Σ(Di)
 In equation (2), Σ() is the arithmetic symbol representing the sum of the parenthesized quantity, and i runs over the coordinates located within the determined contour.
 The representative position may instead be the center of gravity of the solid figure whose outer surface is the determined contour.
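 For illustration, equation (2) can be computed directly as a concentration-weighted centroid; the tuple-based input format below is an assumption.

 def centroid(points):
     """Equation (2): concentration-weighted center of gravity of the cells
     inside a contour. `points` is a list of (Di, Xi, Yi, Zi) tuples."""
     total = sum(d for d, _, _, _ in points)
     xc = sum(d * x for d, x, _, _ in points) / total
     yc = sum(d * y for d, _, y, _ in points) / total
     zc = sum(d * z for d, _, _, z in points) / total
     return xc, yc, zc

 # Two cells: the denser one pulls the centroid toward itself.
 print(centroid([(3.0, 0.0, 0.0, 0.0), (1.0, 4.0, 0.0, 0.0)]))  # (1.0, 0.0, 0.0)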
 The memory 66 is a storage device for storing the captured image data and the sensor data. The memory 66 stores the programs executed by the processor 64 and the parameters necessary for executing those programs, and also functions as the execution area for the programs run by the processor 64. The memory 66 includes, for example, a nonvolatile memory such as an HDD (hard disk drive) or a semiconductor memory, and a volatile memory such as a RAM (random access memory).
 [2-4. Server device]
 The server device 70 receives the level distribution information transmitted from the computer 60 and performs predetermined processing using the received level distribution information. Specifically, the server device 70 alerts people using the space 95 based on the level distribution information. For example, the server device 70 generates a caution image, which is an image for alerting, and transmits the generated caution image to the tablet terminal 80.
 For example, the server device 70 determines whether the concentration of the detected at least one type of object exceeds a threshold. Specifically, the server device 70 determines whether a representative management level C for the space 95 exceeds the threshold. When the server device 70 determines that the representative management level C exceeds the threshold, it generates the caution image. The threshold is a predetermined fixed value, but is not limited thereto. For example, the threshold may be updated as appropriate by machine learning.
 The representative management level C is calculated, for example, based on a representative value Cm of the management level for each object. The representative value Cm is a value representing the management level of the corresponding object, for example, the maximum management level in the level distribution of that object. The server device 70 calculates the representative value Cm for each object based on the level distribution.
 FIG. 8 is a diagram showing the representative values Cm of the management levels for the respective objects in the space 95 obtained by the non-contact sensing system 10 according to the present embodiment. The server device 70 calculates the representative management level C by averaging the representative values for the respective objects. For example, in the example shown in FIG. 8, the representative management level C is "3.8".
 The representative management level C need not be the average of the plurality of representative values Cm. For example, the representative management level C may be a weighted sum of the plurality of representative values Cm. For example, when the weights for pollen and dust are set to 1, the weights for CO2, moisture, and surface organic soiling may be set to 0.3, 0.1, and 0.1, respectively. The weight values are not limited to these and may be changeable based on an instruction from the user or the like.
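 An illustrative sketch of both variants follows. FIG. 8 is not reproduced here, so the per-substance Cm values are assumptions chosen so that the plain average reproduces the quoted "3.8"; the weights are the ones quoted above.

 def representative_level(cm_by_substance: dict, weights: dict | None = None) -> float:
     """Plain average of the per-substance representative values Cm, or,
     if weights are given, their weighted sum (the alternative above)."""
     if weights is None:
         return sum(cm_by_substance.values()) / len(cm_by_substance)
     return sum(weights[name] * cm for name, cm in cm_by_substance.items())

 cm = {"pollen": 5, "dust": 4, "CO2": 3, "moisture": 3, "organic_soil": 4}
 print(representative_level(cm))  # 3.8, matching the value quoted for FIG. 8
 w = {"pollen": 1.0, "dust": 1.0, "CO2": 0.3, "moisture": 0.1, "organic_soil": 0.1}
 print(representative_level(cm, w))  # weighted-sum variant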
 The server device 70 may also control air-conditioning equipment installed in the space 95. Alternatively, the server device 70 may give preventive advice for suppressing an increase in the concentration of, for example, pollen or dust. The preventive advice is, for example, an instruction prompting the user to ventilate the space 95, or an instruction prompting the operation of equipment such as an air purifier placed in the space 95. The server device 70 outputs image data or audio data containing the preventive advice to the tablet terminal 80. For example, the server device 70 obtains information for alerts or preventive advice by referring to weather observation data or the like. The server device 70 may also generate information for alerts or preventive advice by performing machine learning based on changes over time in the concentrations or the management levels.
 [2-5. Tablet terminal]
 The tablet terminal 80 is a portable information processing terminal. The tablet terminal 80 may be, for example, a multifunctional information terminal such as a tablet PC or a smartphone, or an information terminal dedicated to the non-contact sensing system 10. As shown in FIG. 2, the tablet terminal 80 is an example of a display device including a display screen 82 and a control unit 84.
 The display screen 82 displays the composite image. The display screen 82 is, for example, a liquid crystal display panel, but is not limited thereto. For example, the display screen 82 may be a self-luminous display panel using organic EL (electroluminescence) elements. The display screen 82 is, for example, a touch panel display and may be able to accept input from the user.
 The control unit 84 causes the display screen 82 to display the composite image. The control unit 84 includes, for example, a nonvolatile memory storing a program, a volatile memory serving as a temporary storage area for executing the program, input/output ports, and a processor that executes the program.
 In the present embodiment, the control unit 84 acquires the composite image data transmitted from the computer 60 and causes the display screen 82 to display a composite image based on the acquired composite image data. For example, the control unit 84 causes the display screen 82 to display the composite image 100 shown in FIG. 9.
 FIG. 9 is a diagram showing a display example on the display screen 82 of the tablet terminal 80, which is an example of the display device according to the present embodiment. As shown in FIG. 9, the composite image 100 is displayed on the display screen 82.
 The composite image 100 is an image in which a captured image 101 and an aerosol image 102 are combined. The composite image 100 is, for example, a still image.
 The captured image 101 represents the space 95 imaged by the camera 20 and is an example of the first image. The captured image 101 is an image obtained by imaging the space 95 from a horizontal direction, but is not limited thereto. The captured image 101 may be, for example, an image obtained by imaging the space 95 from above, in which case the captured image 101 corresponds to the top view shown in FIG. 1.
 The aerosol image 102 is an example of an object image representing at least one type of object existing in the space 95. For example, the aerosol image 102 represents pollen, which is an example of an aerosol. The aerosol image 102 reflects the position of the at least one type of object in the depth direction of the captured image 101. The aerosol image 102 is an example of the second image.
 As shown in FIG. 9, the aerosol image 102 includes a contour 102a and distance information 102b. The contour 102a represents, for example, the range in which the first object 90 detected by the first sensor 30 exists. The distance information 102b is a numerical value indicating the distance from a reference position to the representative position within the contour 102a.
 The reference position is a position within the space 95. For example, the reference position is the installation position of the camera 20. Alternatively, the reference position may be the position of a person in the space 95 or of equipment such as an air purifier.
 Although details are described later using the other display examples shown in FIG. 16 to FIG. 19, the aerosol image 102 may also reflect the concentration of the aerosol. Specifically, the aerosol image 102 may include level information representing the management level Ci of the aerosol concentration. When two or more types of aerosol are detected, the aerosol image may represent the two or more types in different display modes. Furthermore, when the aerosol concentration exceeds the threshold, a caution image for drawing the user's attention may be displayed on the display screen 82.
 [3. Operation]
 Next, the operation of the non-contact sensing system 10 according to the present embodiment will be described with reference to FIG. 10 to FIG. 15.
 FIG. 10 is a sequence diagram showing the operation of the non-contact sensing system 10 according to the present embodiment.
 As shown in FIG. 10, first, the camera 20 images the space 95 (S10). The camera 20 transmits the captured image data obtained by the imaging to the computer 60 (S12).
 The first sensor 30 performs detection processing for the first object 90 (S14). Specifically, in the first sensor 30, the light source 32 emits irradiation light toward the first object 90, and the photodetector 34 receives the return light from the first object 90. The signal processing circuit 36 generates sensor data including the distance and concentration of the first object 90 based on the signal intensity of the return light. The first sensor 30 transmits the generated sensor data to the computer 60 (S16).
 The second sensor 40 performs detection processing for the second object 92 (S18). Specifically, in the second sensor 40, the light source 42 emits irradiation light toward the second object 92, and the photodetector 44 receives the return light from the second object 92. The signal processing circuit 46 generates sensor data including the distance and concentration of the second object 92 based on the signal intensity of the return light. The second sensor 40 transmits the generated sensor data to the computer 60 (S20).
 The third sensor 50 performs detection processing for the third object 94 (S22). Specifically, in the third sensor 50, the light source 52 emits irradiation light toward the third object 94, and the photodetector 54 receives the return light from the third object 94. The signal processing circuit 56 generates sensor data including the distance and concentration of the third object 94 based on the signal intensity of the return light. The third sensor 50 transmits the generated sensor data to the computer 60 (S24).
 Any of the imaging by the camera 20 (S10), the detection processing by the first sensor 30 (S14), the detection processing by the second sensor 40 (S18), and the detection processing by the third sensor 50 (S22) may be performed first, or they may be performed simultaneously. The imaging (S10) and the detection processing (S14, S18, and S22) may be performed at timings based on instructions from the computer 60, the server device 70, or the like. Each device transmits its captured image data or sensor data when the data is obtained. Alternatively, each device may transmit the captured image data or sensor data upon receiving a request from the computer 60.
 Next, the computer 60 receives the captured image data and the sensor data, and performs 3D database creation processing based on the received captured image data and sensor data (S26). Specifically, the processor 64 of the computer 60 converts the two-dimensional captured image into a pseudo three-dimensional image, and converts the sensor data, which is obtained in a polar coordinate system, into a three-dimensional orthogonal coordinate system.
 FIG. 11 is a flowchart showing the processing for converting the captured image data into the 3D database, among the operations of the non-contact sensing system 10 according to the present embodiment. FIG. 11 shows an example of the detailed operation of step S26 in FIG. 10.
 As shown in FIG. 11, the processor 64 acquires the captured image data via the communication interface 62 (S102). The captured image contained in the captured image data is a two-dimensional image. The processor 64 converts the two-dimensional captured image into a pseudo three-dimensional image using a generally known technique for converting a two-dimensional image into a pseudo three-dimensional image (S104).
 The captured image data may include a distance image indicating the distances to the walls, floor, and ceiling forming the space 95, and to the people, furniture, and the like located in the space 95. Alternatively, the captured image data may include a plurality of captured images taken from a plurality of mutually different viewpoints. For example, the processor 64 may generate the three-dimensional image using a captured image and a distance image, or using a plurality of captured images. This increases the reliability of the three-dimensional image.
 FIG. 12 is a flowchart showing the processing for converting the sensor data into the 3D database, among the operations of the non-contact sensing system 10 according to the present embodiment. FIG. 12 shows an example of the detailed operation of step S26 in FIG. 10.
 As shown in FIG. 12, the processor 64 acquires the sensor data from the database stored in the memory 66 (S112). Specifically, the processor 64 acquires the distance ri, the horizontal angle φi, the vertical angle θi, and the substance name Mi. The processor 64 converts the acquired sensor data into spatial coordinates in a three-dimensional orthogonal coordinate system, based on the following equation (3) (S114).
 (3)  Xi = x0 + ri · cosθi · sinφi
      Yi = y0 + ri · cosθi · cosφi
      Zi = z0 + ri · sinθi
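 For reference, equation (3) can be written directly in code. The following Python sketch assumes the angles are given in radians and that the sensor position (x0, y0, z0) in the space coordinate system is known; the names are illustrative.

import math

def polar_to_space(r, phi, theta, origin=(0.0, 0.0, 0.0)):
    # Equation (3): convert a sensor reading (distance r, horizontal angle
    # phi, vertical angle theta) into space coordinates (X, Y, Z), offset
    # by the sensor position origin = (x0, y0, z0).
    x0, y0, z0 = origin
    xi = x0 + r * math.cos(theta) * math.sin(phi)
    yi = y0 + r * math.cos(theta) * math.cos(phi)
    zi = z0 + r * math.sin(theta)
    return xi, yi, zi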
 Either the pseudo three-dimensional conversion of the captured image data shown in FIG. 11 or the three-dimensional conversion of the sensor data shown in FIG. 12 may be performed first, or the two may be performed simultaneously. Through the three-dimensional conversion of the sensor data, the spatial coordinates (Xi, Yi, Zi) expressed in the three-dimensional orthogonal coordinate system are associated with data number No. i, as shown in FIG. 13.
 Here, FIG. 13 is a diagram showing an example of the 3D database generated by the non-contact sensing system 10 according to the present embodiment. As shown in FIG. 13, a substance name Mi, a concentration Di, a management level Ci, and spatial coordinates (Xi, Yi, Zi) are associated with each data number No. i.
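 A minimal sketch of one such database record is shown below; the field names are chosen here for illustration only and do not appear in the original disclosure.

from dataclasses import dataclass

@dataclass
class SensorRecord:
    # One row of the 3D database of FIG. 13 (illustrative field names).
    no: int               # data number No. i
    substance: str        # substance name Mi, e.g. "pollen" or "dust"
    concentration: float  # concentration Di
    level: int            # management level Ci
    x: float              # spatial coordinate Xi
    y: float              # Yi
    z: float              # Zi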
 After the 3D database has been generated, the computer 60 generates a level distribution based on the generated 3D database (S28), as shown in FIG. 10. The computer 60 then transmits level distribution information indicating the generated level distribution to the server device 70 (S30).
 Here, the level distribution generation process is described in detail with reference to FIG. 14. FIG. 14 is a flowchart showing the level distribution generation process in the operation of the non-contact sensing system 10 according to the present embodiment. FIG. 14 shows an example of the detailed operation of step S28 in FIG. 10.
 As shown in FIG. 14, the processor 64 first acquires the concentration information and the spatial coordinates (Xi, Yi, Zi) by reading them from the memory 66 (S122). Next, the processor 64 determines the management level Ci based on a comparison with the reference value Lm for each substance, and generates the level distribution (S124). The processor 64 then determines a contour and a representative position within the contour based on the generated level distribution (S126). The contour and the representative position are determined, for example, as described above with reference to FIG. 7.
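 One plausible reading of the level assignment in step S124 is a comparison of each concentration Di against multiples of the per-substance reference value Lm. The reference values and the four-level scheme in the following sketch are placeholder assumptions, not values taken from this disclosure.

REFERENCE = {"pollen": 1.0, "dust": 2.0}  # hypothetical reference values Lm

def management_level(substance: str, concentration: float) -> int:
    # Assign a management level Ci by comparing Di against multiples of
    # the substance's reference value Lm (an illustrative scheme).
    lm = REFERENCE[substance]
    if concentration < lm:
        return 0      # below the reference value
    if concentration < 2 * lm:
        return 1
    if concentration < 3 * lm:
        return 2
    return 3          # highest management level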
 After the level distribution has been generated, the computer 60 generates a composite image (S32), as shown in FIG. 10. Specifically, the computer 60 maps the level distribution onto the captured image, thereby combining the contour and distance information with the captured image. The computer 60 transmits the composite image data to the tablet terminal 80 (S34).
 Note that the image including the contour and the distance information is an example of a second image generated by projecting three-dimensional coordinate data, which represents the positions of at least one type of aerosol within the space, onto the two-dimensional space represented by the captured image. The image including the contour and the distance information is, for example, the aerosol image 102 shown in FIG. 9.
 Specifically, the computer 60 generates the image including the contour and the distance information by projecting the three-dimensional coordinate data representing the positions of the at least one type of aerosol in the space onto the two-dimensional space represented by the captured image. For example, the computer 60 expands the captured image into a pseudo three-dimensional image, associates the expanded three-dimensional image with the three-dimensional coordinate data, and performs the projection, thereby generating the image including the contour and the distance information. Associating the three-dimensional image with the three-dimensional coordinate data means aligning the origin and the three axes of the three-dimensional coordinate system of the three-dimensional image with the origin and the three axes of the three-dimensional coordinate system of the three-dimensional coordinate data at the same position in space. The computer 60 then generates the composite image by combining the image including the contour and the distance information with the captured image.
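 Once the two coordinate systems share the same origin and axes, the projection itself could be as simple as a pinhole projection. The following sketch assumes such a model; the intrinsic parameters are assumptions, and the returned depths could feed the distance information of the second image.

import numpy as np

def project_points(points_xyz, fx, fy, cx, cy):
    # Project aligned 3D aerosol coordinates (N, 3) onto the 2D image plane
    # of the captured image, assuming a pinhole camera with intrinsics
    # fx, fy, cx, cy. Returns (N, 2) pixel coordinates and the depth of
    # each point (depths are assumed positive).
    pts = np.asarray(points_xyz, dtype=np.float64)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=-1), z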
 The server device 70 acquires auxiliary information based on the level distribution information transmitted from the computer 60 (S36). The auxiliary information is information including, for example, an alert or preventive advice. The server device 70 transmits the acquired auxiliary information to the tablet terminal 80 (S38).
 Here, the auxiliary information generation process is described in detail with reference to FIG. 15. FIG. 15 is a flowchart showing the auxiliary information generation process in the operation of the non-contact sensing system 10 according to the present embodiment. FIG. 15 shows an example of the detailed operation of step S36 in FIG. 10.
 As shown in FIG. 15, the server device 70 first determines a representative value Cm of the management level for each object in the space 95 (S132). Next, the server device 70 determines a representative management level C for the space 95 (S134). The specific method of determining the representative management level C is as described above with reference to FIG. 8.
 Next, the server device 70 compares the representative management level C with a threshold (S136). If the representative management level C is larger than the threshold (Yes in S136), the server device 70 generates a caution image (S138). Preventive advice may be generated instead of the caution image. If the representative management level C is equal to or smaller than the threshold (No in S136), the auxiliary information generation process ends.
 Although the server device 70 compares the representative management level C with the threshold here, it may instead compare the representative value Cm of the management level for each object with the threshold. That is, the server device 70 need not determine the representative management level C. For example, the server device 70 may generate the caution image when at least one of the representative values Cm of the management levels of a plurality of objects, such as pollen and dust, is larger than the threshold.
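 Steps S132 to S138 might be sketched as follows. Taking the maximum as the representative value, the threshold value, and the message text are all assumptions made for illustration, not details from this disclosure.

def generate_auxiliary_info(levels_by_object: dict[str, list[int]],
                            threshold: int = 2) -> str | None:
    # S132: representative value Cm per object (here, the maximum level).
    cm = {obj: max(levels) for obj, levels in levels_by_object.items()}
    # S134: representative management level C for the space (here, max Cm).
    c = max(cm.values())
    # S136/S138: emit a caution message only when C exceeds the threshold.
    if c > threshold:
        return "Caution: aerosol concentration is high"
    return None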
 Finally, as shown in FIG. 10, the tablet terminal 80 acquires the composite image data transmitted from the computer 60 and the auxiliary information transmitted from the server device 70, and displays the composite image on the display screen 82 (S40). The composite image displayed on the display screen 82 need not include the auxiliary information. As a result, a composite image 100 such as the one shown in FIG. 9 is displayed on the display screen 82. Note that FIG. 9 shows a display example that does not include the auxiliary information. A display example that includes the auxiliary information will be described later with reference to FIG. 19.
 [4. Other display examples]
 In the following, specific examples of the composite image displayed on the display screen 82 of the tablet terminal 80 according to the present embodiment are described with reference to FIGS. 16 to 21. The description below focuses on the differences from the composite image 100 shown in FIG. 9; descriptions of the features they have in common are omitted or simplified.
 [4-1. Still image (single type of object)]
 FIG. 16 is a diagram showing another example of the display on the display screen 82 of the tablet terminal 80 according to the present embodiment. As shown in FIG. 16, a composite image 110 is displayed on the display screen 82.
 The composite image 110 is an image in which the captured image 101 and the aerosol images 112 and 114 are combined. The aerosol images 112 and 114 are each an example of a second image representing at least one type of aerosol present in the space 95. In the example shown in FIG. 16, the aerosol images 112 and 114 each represent pollen.
 As shown in FIG. 16, the aerosol image 112 includes a contour 112a and distance information 112b. Similarly, the aerosol image 114 includes a contour 114a and distance information 114b.
 In the example shown in FIG. 16, the distance information 112b is a color applied inside the contour 112a and predetermined according to the distance; the same applies to the distance information 114b. For example, the kind or shade of the color is predetermined according to the distance. Note that in FIG. 16, the color is represented by the density of the dot shading applied inside the contour 112a. For example, the color applied inside the contour 114a as the distance information 114b is darker than the color applied inside the contour 112a as the distance information 112b. The composite image 110 therefore shows that the pollen represented by the aerosol image 114 is at a shorter distance than the pollen represented by the aerosol image 112.
 Note that the distance information 112b and 114b may be represented by the density of shading rather than by color. For example, the nearness or farness of the distance may be represented by the density of the dots applied inside the contour 112a or 114a.
 The aerosol image 112 further includes level information 112c, and the aerosol image 114 further includes level information 114c. The level information 112c indicates the type and concentration of the aerosol represented by the aerosol image 112. The concentration represented by the level information 112c is, for example, a value representative of the management levels Ci of the coordinates within the contour 112a. For example, the level information 112c indicates the maximum or average value of the management levels Ci of the coordinates within the contour 112a. In the example shown in FIG. 16, the level information 112c includes text representing pollen, which is the type of the aerosol, and a numerical value indicating the management level Ci. The same applies to the level information 114c.
 In this way, in the composite image 110, the distance to the aerosol is displayed in a form other than a numerical value, which prevents the image from becoming cluttered with numeric text. Moreover, because the distance is displayed in a non-numeric form, numerals and text remain available for representing the aerosol concentration. This increases the amount of information that can be presented to the user while keeping the image uncluttered.
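 A predetermined distance-to-shade mapping of this kind could be realized as a simple lookup over distance bins, as in the following sketch; the bin edges and darkness values are placeholder assumptions.

def distance_to_darkness(distance_m: float) -> float:
    # Map a distance (meters) to a fill darkness in [0, 1], where larger
    # (darker) values indicate shorter distances. Bin edges are placeholders.
    bins = [(0.5, 1.0), (1.0, 0.75), (1.5, 0.5), (2.0, 0.25)]
    for edge, darkness in bins:
        if distance_m <= edge:
            return darkness
    return 0.1  # beyond the last bin: lightest fill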
 [4-2. Still image (multiple types of objects)]
 Next, display examples for the case where a plurality of types of aerosols exist in the space 95 are described.
 FIG. 17 is a diagram showing another example of the display on the display screen 82 of the tablet terminal 80 according to the present embodiment. As shown in FIG. 17, a composite image 120 is displayed on the display screen 82.
 The composite image 120 is an image in which the captured image 101 and the aerosol images 122, 124, 126, and 128 are combined. The aerosol images 122, 124, 126, and 128 each represent at least one type of aerosol present in the space 95. In the example shown in FIG. 17, the aerosol images 122 and 128 represent pollen, and the aerosol images 124 and 126 represent dust.
 As shown in FIG. 17, the aerosol image 122 includes a contour 122a and distance information 122b; the aerosol image 124 includes a contour 124a and distance information 124b; the aerosol image 126 includes a contour 126a and distance information 126b; and the aerosol image 128 includes a contour 128a and distance information 128b. The distance information 122b, 124b, 126b, and 128b are each numerical values representing the distance, as in the composite image 100 shown in FIG. 9.
 In the composite image 120 shown in FIG. 17, the aerosol image 122 further includes level information 122c, the aerosol image 124 further includes level information 124c, the aerosol image 126 further includes level information 126c, and the aerosol image 128 further includes level information 128c.
 The level information 122c is a color or shading applied inside the contour 122a. Specifically, the level information 122c represents the magnitude of the management level Ci by the shade of the color or the density of the shading. For example, the darker the color or the denser the shading, the higher the management level Ci; the lighter the color or the sparser the shading, the lower the management level. The same applies to the level information 124c, 126c, and 128c.
 Furthermore, the level information 122c indicates the type of the aerosol by the kind of color or shading; that is, the same kind of color or shading means the same aerosol. For example, in the example shown in FIG. 17, dot shading represents pollen and grid shading represents dust. The same applies to the level information 124c, 126c, and 128c.
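 This combined encoding, in which the kind of shading identifies the aerosol type and its density reflects the management level, could be expressed as a small style table. The hatch strings below follow Matplotlib's convention, in which repeating a hatch character makes it denser; both the table and the density rule are illustrative assumptions.

HATCH_BY_TYPE = {"pollen": ".", "dust": "+"}  # hypothetical type-to-hatch table

def contour_style(aerosol_type: str, level: int) -> dict:
    # Same hatch kind = same aerosol type; repeating the hatch character
    # densifies it, so a higher management level Ci yields denser shading.
    hatch = HATCH_BY_TYPE[aerosol_type] * max(1, level)
    return {"hatch": hatch, "fill": False}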
 From the above, the aerosol image 122 represents an aerosol of the same type as that represented by the aerosol image 128, but at a lower concentration and a greater distance. Similarly, the aerosol image 124 represents an aerosol of the same type as that represented by the aerosol image 126, but at a higher concentration and a shorter distance.
 FIG. 18 is a diagram showing another example of the display on the display screen 82 of the tablet terminal 80 according to the present embodiment. As shown in FIG. 18, a composite image 130 is displayed on the display screen 82.
 The composite image 130 is an image in which the captured image 101 and the aerosol images 132, 134, 136, and 138 are combined. The aerosol images 132, 134, 136, and 138 each represent at least one type of aerosol present in the space 95. In the example shown in FIG. 18, the aerosol images 132 and 138 represent pollen, and the aerosol images 134 and 136 represent dust.
 As shown in FIG. 18, the aerosol image 132 includes a contour 132a, distance information 132b, and level information 132c. The aerosol image 134 includes a contour 134a, distance information 134b, and level information 134c. The aerosol image 136 includes a contour 136a, distance information 136b, and level information 136c. The aerosol image 138 includes a contour 138a, distance information 138b, and level information 138c.
 The distance information 132b, 134b, 136b, and 138b are each colors applied inside the contours and predetermined according to the distance, as in the composite image 110 shown in FIG. 16. In the example shown in FIG. 18, the distance information 132b, 134b, 136b, and 138b also indicates the type of the aerosol by the kind of color or shading; that is, the same kind of color or shading means the same aerosol. For example, in the example shown in FIG. 18, dot shading represents pollen and grid shading represents dust.
 The level information 132c, 134c, 136c, and 138c each include text representing the type of the aerosol, such as pollen, and a numerical value indicating the management level Ci, as in the composite image 110 shown in FIG. 16.
 In this way, because the plurality of types of aerosols are displayed in different display modes, the user is presented not only with the positions of the aerosols but also with their types. This increases the amount of information that can be presented to the user while keeping the image uncluttered.
 [4-3. Still image (including a caution image)]
 Next, a display example for the case where the aerosol concentration exceeds a threshold is described.
 FIG. 19 is a diagram showing another example of the display on the display screen 82 of the tablet terminal 80 according to the present embodiment. As shown in FIG. 19, a composite image 140 is displayed on the display screen 82.
 Compared with the composite image 130 shown in FIG. 18, the composite image 140 contains an aerosol image 148 in place of the aerosol image 138. The aerosol image 148 includes a contour 148a, distance information 148b, and level information 148c, which are the same as the contour 138a, the distance information 138b, and the level information 138c shown in FIG. 18, respectively.
 The level information 148c of the aerosol image 148 indicates a management level Ci of "3". Because the management level Ci exceeds the threshold, a caution image 141 for calling the user's attention is displayed on the display screen 82.
 The caution image 141 is, for example, text calling for attention, but is not limited to this. The caution image 141 may be, for example, a predetermined figure. Indeed, the display mode is not particularly limited as long as it can attract the user's attention. For example, the entire composite image 140 displayed on the display screen 82 may be made to blink, or its color tone may be changed.
 In addition to, or instead of, the caution image 141, preventive advice may be displayed on the display screen 82. The preventive advice is displayed, for example, as text. Alternatively, instead of text representing the preventive advice, a URL (Uniform Resource Locator) or a QR code (registered trademark) linking to a web page describing the details of the preventive advice may be displayed.
 [4-4. Pseudo three-dimensional image]
 Next, the case where the composite image is a pseudo three-dimensional image is described.
 FIG. 20 is a diagram showing another example of the display on the display screen 82 of the tablet terminal 80 according to the present embodiment. As shown in FIG. 20, a composite image 200 is displayed on the display screen 82.
 The composite image 200 is an image in which the space 95 and a contour representing the range in which at least one type of aerosol exists are modeled three-dimensionally. Specifically, the composite image 200 is a pseudo three-dimensional image whose viewpoint can be changed.
 As shown in FIG. 20, the composite image 200 is an image in which the captured image 201 and the aerosol image 202 are combined. The aerosol image 202 includes a contour 202a and level information 202c.
 In part (a) of FIG. 20, the composite image 200 as seen when the space 95 is viewed horizontally is displayed on the display screen 82, as in FIG. 9. In response to a user instruction or the passage of time, the display screen 82 displays the composite image 200 as seen when the space 95 is viewed obliquely from above, as shown in part (b) of FIG. 20. For example, the user can freely change the viewpoint by swiping the display screen 82. The composite image 200 may also be freely enlarged and reduced.
 When the viewpoint is changed, the display position and shape of the contour 202a of the aerosol image 202 change accordingly. This allows the position of the aerosol within the space 95 to be presented accurately.
 [4-5. Moving image]
 Next, the case where the image representing the aerosol is a moving image is described.
 FIG. 21 is a diagram showing another example of the display on the display screen 82 of the tablet terminal 80 according to the present embodiment. Parts (a) to (e) of FIG. 21 show how the display on the display screen 82 changes over time. The composite image 300 displayed on the display screen 82 switches sequentially at intervals of, for example, one second to several seconds.
 The composite image 300 is an image in which the captured image 301 and a plurality of aerosol images 312, 322, 332, and 342 are combined. The aerosol images 312, 322, 332, and 342 each correspond to a distance from the reference position.
 As shown in FIG. 21, distance information 302 is displayed on the display screen 82. The distance information 302 represents the distance in the depth direction as a numerical value. The aerosol images 312, 322, 332, and 342 represent the aerosols at distances of 0.8 m, 1.1 m, 1.4 m, and 1.7 m, respectively.
 Note that, as shown in part (a) of FIG. 21, no aerosol image is included when the distance is 0.5 m. This indicates that no aerosol exists at the position where the distance is 0.5 m.
 The aerosol image 312 includes a contour 312a and level information 312c. The aerosol image 322 includes a contour 322a and level information 322c. The aerosol image 332 includes a contour 332a and level information 332c. The aerosol image 342 includes a contour 342a and level information 342c.
 The contours 312a, 322a, 332a, and 342a each represent the range in which the aerosol exists at the corresponding distance. Similarly, the level information 312c, 322c, 332c, and 342c each represent the concentration of the aerosol at the corresponding distance. For example, the level information 312c, 322c, 332c, and 342c each represent the maximum concentration among the coordinates within the aerosol contour at the corresponding distance. As shown in part (d) of FIG. 21, the management level Ci of the aerosol, that is, its concentration, is highest at the distance of 1.4 m.
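 One way to produce such distance slices is to bin the records of the 3D database by depth and emit one display frame per bin, as in the sketch below. The bin width is an assumed value, and SensorRecord refers to the illustrative record structure sketched earlier.

from collections import defaultdict

def slice_frames(records, bin_width_m=0.3):
    # Group 3D database records (SensorRecord) into depth bins, producing
    # one display frame per bin. Each frame carries the records whose depth
    # Z falls in that bin, from which a per-slice contour and the maximum
    # management level can be rendered.
    frames = defaultdict(list)
    for rec in records:
        bin_index = int(rec.z / bin_width_m)
        frames[bin_index].append(rec)
    # Frames ordered by increasing distance for sequential display.
    return [frames[i] for i in sorted(frames)]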
 Note that in the composite image 300, the captured image 301 is a still image, but it may change with time; that is, the captured image 301 may be a moving image.
 (Other embodiments)
 The display device, the image processing device, and the control method according to one or more aspects have been described above based on the embodiments, but the present disclosure is not limited to these embodiments. Forms obtained by applying various modifications conceivable to those skilled in the art to the present embodiments, and forms constructed by combining components of different embodiments, are also included within the scope of the present disclosure, as long as they do not depart from the gist of the present disclosure.
 For example, in the above embodiments, the first sensor 30, the second sensor 40, and the third sensor 50 are each autonomously mobile sensors, but this is not limiting. At least one of the first sensor 30, the second sensor 40, and the third sensor 50 may be a stationary sensor device fixed at a predetermined position within the space 95. The predetermined position is, for example, on the ceiling, floor, or a wall forming the space 95.
 Also, for example, when the concentration is reflected in an image representing an aerosol or another object, the concentration value itself may be displayed as a numerical value instead of the management level. When a plurality of types of aerosols are represented in different display modes, the line types of the contours may be made to differ; for example, pollen may be represented by a solid contour and dust by a broken contour.
 Also, for example, the non-contact sensing system 10 need not include the camera 20. A captured image of the space 95 taken in advance may be stored in the memory 66 of the computer 60.
 Also, for example, at least one of the first sensor 30, the second sensor 40, and the third sensor 50 may be a contact-type sensor.
 The communication method between the devices described in the above embodiments is not particularly limited. When wireless communication is performed between the devices, the wireless communication method (communication standard) is, for example, short-range wireless communication such as ZigBee (registered trademark), Bluetooth (registered trademark), or a wireless LAN (Local Area Network). Alternatively, the wireless communication method (communication standard) may be communication via a wide-area communication network such as the Internet. Wired communication may be performed between the devices instead of wireless communication; specifically, the wired communication is, for example, power line communication (PLC: Power Line Communication) or communication using a wired LAN.
 In the above embodiments, a process executed by a specific processing unit may be executed by another processing unit. The order of a plurality of processes may be changed, or a plurality of processes may be executed in parallel. The distribution of the components of the non-contact sensing system 10 among a plurality of devices is only an example; for example, components provided in one device may instead be provided in another device. The non-contact sensing system 10 may also be realized as a single device.
 FIG. 22 is a diagram showing a tablet terminal 480 that integrally includes the non-contact sensing system 10 according to the embodiment. The tablet terminal 480 is a plate-shaped device. Parts (a) and (b) of FIG. 22 are plan views showing one surface and the other surface of the tablet terminal 480, respectively.
 As shown in part (a) of FIG. 22, a display screen 482 is provided on one surface of the tablet terminal 480. As shown in part (b) of FIG. 22, the camera 20, the light source 32, and the photodetector 34 are provided on the other surface of the tablet terminal 480. Although not shown in FIG. 22, the tablet terminal 480 also includes the processor 64 and the memory 66 of the computer 60 of the embodiment. In this way, the display screen 482 that displays a composite image 481 may be integrated with the camera 20, the sensor devices, and the computer 60 in the tablet terminal 480.
 Also, for example, the processing performed by the server device 70 may be performed by the computer 60 or the tablet terminal 80. Conversely, the processing performed by the computer 60 may be performed by the server device 70 or the tablet terminal 80.
 For example, in the above embodiments, the computer 60 generates the composite image, but the control unit 84 of the tablet terminal 80 may generate the composite image instead. Specifically, the control unit 84 may perform the process of converting the captured image data into the 3D database and the process of converting the sensor data into the 3D database shown in FIGS. 11 and 12. The control unit 84 may perform the 3D database creation (S26), the level distribution generation (S28), and the composite image generation (S32) shown in FIG. 10.
 For example, the processing described in the above embodiments may be realized by centralized processing using a single device or system, or by distributed processing using a plurality of devices. The program may be executed by a single processor or by a plurality of processors; that is, either centralized processing or distributed processing may be performed.
 In the above embodiments, all or some of the components such as the control unit may be configured as dedicated hardware, or may be realized by executing a software program suitable for each component. Each component may be realized by a program execution unit such as a CPU (Central Processing Unit) or a processor reading and executing a software program recorded on a recording medium such as an HDD (Hard Disk Drive) or a semiconductor memory.
 The components such as the control unit may be configured from one or more electronic circuits. Each of the one or more electronic circuits may be a general-purpose circuit or a dedicated circuit.
 The one or more electronic circuits may include, for example, a semiconductor device, an IC (Integrated Circuit), or an LSI (Large Scale Integration). The IC or LSI may be integrated on a single chip or on a plurality of chips. Although referred to here as an IC or LSI, the name changes depending on the degree of integration, and the circuit may be called a system LSI, a VLSI (Very Large Scale Integration), or a ULSI (Ultra Large Scale Integration). An FPGA (Field Programmable Gate Array) programmed after the manufacture of the LSI can also be used for the same purpose.
 General or specific aspects of the present disclosure may be realized as a system, a device, a method, an integrated circuit, or a computer program. Alternatively, they may be realized as a computer-readable non-transitory recording medium, such as an optical disc, an HDD, or a semiconductor memory, on which the computer program is stored. They may also be realized as any combination of a system, a device, a method, an integrated circuit, a computer program, and a recording medium.
 Various changes, replacements, additions, omissions, and the like can be made to each of the above embodiments within the scope of the claims or their equivalents.
 The present disclosure can be used as a display device or the like capable of accurately presenting the exact position of an aerosol, and can be used, for example, for air conditioning control or for controlling purification processing of a space.
10 non-contact sensing system
20 camera
30 first sensor
32, 42, 52 light source
34, 44, 54 photodetector
36, 46, 56 signal processing circuit
37, 47, 57 position information acquisition unit
38, 48, 58 concentration information acquisition unit
40 second sensor
50 third sensor
60 computer
62 communication interface
64 processor
66 memory
70 server device
80, 480 tablet terminal
82, 482 display screen
84 control unit
90 first object
90a, 90b contour
92 second object
94 third object
95 space
96 floor surface
100, 110, 120, 130, 140, 200, 300, 481 composite image
101, 201, 301 captured image
102, 112, 114, 122, 124, 126, 128, 132, 134, 136, 138, 148, 202, 312, 322, 332, 342 aerosol image
102a, 112a, 114a, 122a, 124a, 126a, 128a, 132a, 134a, 136a, 138a, 148a, 202a, 312a, 322a, 332a, 342a contour
102b, 112b, 114b, 122b, 124b, 126b, 128b, 132b, 134b, 136b, 138b, 148b, 302 distance information
112c, 114c, 122c, 124c, 126c, 128c, 132c, 134c, 136c, 138c, 148c, 202c, 312c, 322c, 332c, 342c level information
141 caution image

Claims (22)

  1.  A display device comprising:
      a display screen; and
      a control unit configured to display, on the display screen, a composite image in which a first image obtained by capturing an image of a space with a camera and a second image representing at least one type of aerosol present in the space are combined,
      wherein the second image reflects a position, in a depth direction of the first image, of the at least one type of aerosol.
  2.  The display device according to claim 1, wherein
      the first image represents a two-dimensional space, and
      the control unit is further configured to:
      generate the second image by projecting, onto the two-dimensional space, three-dimensional coordinate data representing a position of the at least one type of aerosol in the space; and
      generate the composite image by combining the first image and the second image.
  3.  The display device according to claim 2, wherein the control unit is further configured to:
      acquire the three-dimensional coordinate data from a sensor that obtains the position of the at least one type of aerosol in the space;
      convert the first image into a pseudo three-dimensional image; and
      generate the second image by associating the three-dimensional image with the three-dimensional coordinate data and projecting the three-dimensional coordinate data onto the two-dimensional space.
  4.  The display device according to any one of claims 1 to 3, wherein the second image includes a contour representing a range in which the at least one type of aerosol exists, and distance information representing a distance from a reference position in the space to a representative position within the contour.
  5.  The display device according to claim 4, wherein the representative position is a centroid of a concentration distribution of the at least one type of aerosol within the contour.
  6.  The display device according to claim 4 or 5, wherein the distance information is a numerical value indicating the distance.
  7.  The display device according to claim 4 or 5, wherein the distance information is a color applied inside the contour, the color being predetermined according to the distance.
  8.  The display device according to any one of claims 1 to 3, wherein the composite image shows a three-dimensional model including the space and a contour representing a range in which the at least one type of aerosol exists.
  9.  The display device according to any one of claims 1 to 3, wherein
      the second image is a moving image in which a plurality of images are switched over time, and
      each of the plurality of images corresponds to a distance from a reference position in the space, and includes a contour representing a range in which the at least one type of aerosol exists at the corresponding distance.
  10.  The display device according to any one of claims 1 to 9, wherein the second image further reflects a concentration of the at least one type of aerosol.
  11.  The display device according to claim 10, wherein the second image includes level information representing a concentration level of the at least one type of aerosol.
  12.  The display device according to any one of claims 1 to 11, wherein
      the at least one type of aerosol includes a plurality of types of aerosols, and
      the second image represents the plurality of types of aerosols in mutually different display modes.
  13.  The display device according to any one of claims 1 to 12, wherein the control unit is further configured to display, on the display screen, an image for calling a user's attention when the concentration of the at least one type of aerosol exceeds a threshold.
  14.  An image processing device comprising:
      an acquisition circuit configured to acquire three-dimensional coordinate data representing a position, in a space, of at least one type of aerosol present in the space; and
      a processor configured to generate, based on the three-dimensional coordinate data, a composite image in which a first image obtained by capturing an image of the space with a camera and a second image representing the at least one type of aerosol present in the space are combined,
      wherein the second image reflects a position, in a depth direction of the first image, of the at least one type of aerosol.
  15.  A method for controlling a system including a display device and a sensor, the sensor including a light source that emits irradiation light toward at least one type of object in a space and a photodetector that detects return light from the at least one type of object, the sensor outputting data representing a result of the photodetector detecting the return light, the method comprising:
      acquiring the data from the sensor;
      generating, based on the data, three-dimensional coordinate data representing a position of the at least one type of object in the space;
      generating, based on the three-dimensional coordinate data, a composite image in which a first image obtained by capturing an image of the space with a camera and a second image representing the at least one type of object present in the space are combined, the second image reflecting a position, in a depth direction of the first image, of the at least one type of object; and
      causing the display device to display the composite image.
  16.  The control method according to claim 15, wherein
      the return light is fluorescence emitted by the at least one type of object when excited by the irradiation light, and
      generating the composite image further includes determining a type of the at least one type of object by analyzing the fluorescence, and reflecting the type in the second image.
  17.  The control method according to claim 16, wherein
      the irradiation light includes a predetermined polarization component, and
      generating the composite image further includes determining the type of the at least one type of object based on a degree of depolarization of the polarization component included in the return light, and reflecting the type in the second image.
  18.  The control method according to any one of claims 15 to 17, wherein the three-dimensional coordinate data is generated using a relative positional relationship between the sensor and the at least one type of object, calculated based on a difference between a time at which the irradiation light is emitted and a time at which the return light is detected, and coordinates of the sensor in the space.
  19.  The control method according to any one of claims 15 to 18, wherein the at least one type of object is an organic substance adhering to an object present in the space.
  20.  The control method according to any one of claims 15 to 18, wherein the at least one type of object is an aerosol present in the space.
  21.  The control method according to claim 20, wherein the return light is backscattered light generated when the irradiation light is scattered by the at least one type of object.
  22.  A computer-readable recording medium storing a program for controlling a system including a display device and a sensor, the sensor including a light source that emits irradiation light toward at least one type of object in a space and a photodetector that detects return light from the at least one type of object, the sensor outputting data representing a result of the photodetector detecting the return light, wherein, when the program is executed by a computer, the following are executed:
      acquiring the data from the sensor;
      generating, based on the data, three-dimensional coordinate data representing a position of the at least one type of object in the space;
      generating, based on the three-dimensional coordinate data, a composite image in which a first image obtained by capturing an image of the space with a camera and a second image representing the at least one type of object present in the space are combined, the second image reflecting a position, in a depth direction of the first image, of the at least one type of object; and
      causing the display device to display the composite image.
PCT/JP2019/024410 2018-07-11 2019-06-20 Display device, image processing device, and control method WO2020012906A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980029258.5A CN112041665A (en) 2018-07-11 2019-06-20 Display device, image processing device, and control method
US17/120,085 US11694659B2 (en) 2018-07-11 2020-12-11 Display apparatus, image processing apparatus, and control method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2018131681 2018-07-11
JP2018-131681 2018-07-11
JP2019108042A JP7113375B2 (en) 2018-07-11 2019-06-10 Display device, image processing device and control method
JP2019-108042 2019-06-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/120,085 Continuation US11694659B2 (en) 2018-07-11 2020-12-11 Display apparatus, image processing apparatus, and control method

Publications (1)

Publication Number Publication Date
WO2020012906A1 true WO2020012906A1 (en) 2020-01-16

Family

ID=69141986

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/024410 WO2020012906A1 (en) 2018-07-11 2019-06-20 Display device, image processing device, and control method

Country Status (1)

Country Link
WO (1) WO2020012906A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003294567A (en) * 2002-03-29 2003-10-15 Osaka Gas Co Ltd Gas leak visualizing and distance measuring device
US20060203248A1 (en) * 2005-03-11 2006-09-14 Reichardt Thomas A Natural gas leak mapper
JP2006343153A (en) * 2005-06-07 2006-12-21 Konica Minolta Sensing Inc Three-dimensional position measuring method and apparatus used for three-dimensional position measurement
JP2007232374A (en) * 2006-02-27 2007-09-13 Shikoku Res Inst Inc Hydrogen gas visualization method and system by raman scattering light
JP2011529189A (en) * 2008-07-24 2011-12-01 マサチューセッツ インスティテュート オブ テクノロジー System and method for image formation using absorption
JP2017032362A (en) * 2015-07-30 2017-02-09 株式会社キーエンス Measurement object measurement program, measurement object measurement method and enlargement observation device
US20170089800A1 (en) * 2015-09-30 2017-03-30 General Monitors, Inc. Ultrasonic gas leak location system and method
WO2018061816A1 (en) * 2016-09-28 2018-04-05 パナソニックIpマネジメント株式会社 Imaging device
WO2019138641A1 (en) * 2018-01-15 2019-07-18 コニカミノルタ株式会社 Gas monitoring system and gas monitoring method


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19833562

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19833562

Country of ref document: EP

Kind code of ref document: A1