WO2024038526A1 - Prescribed light generation method, optical characteristics modification unit, light source, prescribed light usage method, detection method, imaging method, display method, optical measurement unit, optical apparatus, service provision method, and service provision system - Google Patents

Prescribed light generation method, optical characteristics modification unit, light source, prescribed light usage method, detection method, imaging method, display method, optical measurement unit, optical apparatus, service provision method, and service provision system Download PDF

Info

Publication number
WO2024038526A1
WO2024038526A1 (PCT/JP2022/031122)
Authority
WO
WIPO (PCT)
Prior art keywords
light
optical
optical member
traveling direction
predetermined
Prior art date
Application number
PCT/JP2022/031122
Other languages
French (fr)
Japanese (ja)
Inventor
秀夫 安東
雄貴 遠藤
智 早田
末男 上野
雄太 平出
Original Assignee
株式会社 ジャパンセル (Japan Cell Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社 ジャパンセル (Japan Cell Co., Ltd.)
Priority to JP2024541332A priority Critical patent/JPWO2024038526A5/en
Priority to PCT/JP2022/031122 priority patent/WO2024038526A1/en
Publication of WO2024038526A1 publication Critical patent/WO2024038526A1/en
Priority to US19/053,752 priority patent/US20250193366A1/en

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/31Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
    • G01N21/35Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/48Laser speckle optics

Definitions

  • This embodiment relates to the technical field of controlling the characteristics of light itself, the field of measurement using light, the field of utilizing light, or the field of providing services using light.
  • it is known that the characteristics of light itself are not limited to wavelength characteristics, intensity distribution characteristics, and phase distribution characteristics (including wavefront characteristics), but also include various attributes such as directivity and coherence.
  • imaging technologies that are performed by placing an image sensor at the imaging position of the target object
  • application fields that utilize spectral characteristic measurement technology, length measurement technology, display technology, etc. of the target object to be measured.
  • application fields such as imaging spectroscopy, which combines the above-mentioned imaging technology with spectral characteristic measurement technology, and three-dimensional measurement, which combines the above-mentioned imaging technology with the above-mentioned length measurement technology, have also been developed recently.
  • Known fields of service provision using light include technical fields that provide services to users by utilizing various information obtained in the application fields using light.
  • the provision of services to users includes not only the provision of information and the provision and control of an optimal user environment, but also bidirectional (reciprocal) service provision with users; there are various methods of providing services.
  • the above is not limited to the provision of a method for generating predetermined light having desirable or relatively appropriate characteristics in various application fields using light; the provision of an optical characteristic conversion unit, a light source using the same, a method for utilizing the predetermined light, a service providing method, and a service providing system may also be provided.
  • in Patent Document 1, in order to improve inspection accuracy on the surface of a target object (a semiconductor wafer), the inclination angle of irradiation is changed for each light emitted from a plurality of light sources. Using multiple light sources tends to make the device more complicated and larger. On the other hand, when a single light source is used, the phase difference between the irradiated lights at different tilt angles is always fixed, which causes the problem of increased optical interference noise.
  • Patent Document 2 describes a method of reducing optical interference noise by passing light through a transparent optical element having a different thickness for each region and then into a common optical fiber.
  • further reduction of optical interference noise is desired.
  • a solid-state image sensor is provided with a pixel memory, and the exposure time of each pixel can be independently set, thereby allowing ultra-high-speed imaging.
  • if optical interference noise mixes into the light irradiated onto the measurement object being imaged at ultra-high speed, or into the light reflected from the measurement object, the quality of the captured image deteriorates significantly.
  • the user's convenience may be improved by providing the user with a hands-free environment, and by providing an additional function and an automatic selection function for the amount of information entered visually by the user.
  • the user's eye movements (movement of the eyeballs),
  • movements of the eyelids and eyebrows, such as blinking and winking,
  • facial expressions and audio content,
  • body movements such as gestures, and movements of fingers and hands (arms).
  • a simple method for collecting a user's biometric information in real time and with high precision may be provided, realizing convenience for the user, high added value, and high reliability.
  • a highly reliable service may be provided by using the collected biometric information of the user with an identification function or an authentication function (biometric authentication) to prevent fraud or inappropriate actions not intended by the user. Without being limited to this, the user's mood and health condition may also be inferred from facial expressions, voice, movement characteristics, breathing, pulsation, changes in blood components, and the like, and a comfortable service or service system that provides the user with an appropriate environment based on the estimation results may also be provided.
  • an environment that allows clear and detailed three-dimensional expression may be provided.
  • a normal to the entrance surface of the predetermined optical member and a normal to the exit surface of the predetermined optical member are defined.
  • the traveling direction of the first light is inclined with respect to at least one of the normal on the entrance surface side and the normal on the exit surface side,
  • the traveling direction of the second light is tilted with respect to the traveling direction of the first light, or the optical path of the second light within the predetermined optical member is made different from the optical path of the first light.
  • the optical characteristics between the first light and the second light are changed.
  • FIG. 1 is a configuration diagram showing an example of an overall system outline.
  • FIG. 2 is an explanatory diagram of the relationship between (desirable) optical characteristics required in various technical fields.
  • FIG. 3 is an explanatory diagram of a mechanism in which a collection of lights of different wavelengths constitutes a wave train.
  • FIG. 4 is an explanatory diagram of an experimental system for measuring wave train characteristics using an optical interference phenomenon.
  • FIG. 5 illustrates the interference characteristics between single wave trains, one of which is delayed.
  • FIG. 6A shows the experimental results of measuring the wave train characteristics.
  • FIG. 6B is an explanatory diagram of the relationship between successive wave trains predicted based on the experimental results of FIG. 6A.
  • FIG. 7A is an explanatory diagram of a basic optical system layout diagram using an optical characteristic conversion element.
  • FIG. 7B illustrates a specific structural example of the optical property conversion element.
  • FIG. 7C is an explanatory diagram of a method in which the optical characteristic conversion element utilizes the phase asynchronous characteristic between successive wave trains.
  • FIG. 7D describes another example regarding the specific structure of the optical property conversion element.
  • FIG. 7E illustrates another application example regarding the specific structure of the optical property conversion element.
  • FIG. 8 is an explanatory diagram of the effect of the optical characteristic conversion element in reducing noise in the spectral characteristics.
  • FIG. 9A is an explanatory diagram of a basic optical configuration frequently used in this embodiment.
  • FIG. 9B is an explanatory diagram regarding a specific embodiment of the basic optical configuration.
  • FIG. 9C is an explanatory diagram regarding another specific embodiment of the basic optical configuration.
  • FIG. 9D illustrates a simple method of cutting wavefront continuity at each traveling wave cross-section position.
  • FIG. 9E explains the difference in wavefront continuity between a general imaging lens and a Fresnel lens.
  • FIG. 9F is an explanatory diagram of an embodiment in which wavefront discontinuity is applied.
  • FIG. 9G is an explanatory diagram of another embodiment in which wavefront discontinuity is applied.
  • FIG. 9H explains the relationship between the microscopic reflection direction and the macroscopic reflection direction in the multi-division light reflection element.
  • FIG. 9I is an explanatory diagram of an applied form of FIG. 9G or FIG. 9H.
  • FIG. 10A is an explanatory diagram of the relationship between biological system components and corresponding absorption wavelengths.
  • FIG. 10B is an explanatory diagram of the human eyeball structure.
  • FIG. 11A is an explanatory diagram of an embodiment example regarding the internal structure of a hybrid type light emitting section.
  • FIG. 11B is an explanatory diagram of the structure inside the near-infrared emitting phosphor.
  • FIG. 11C is an explanatory diagram regarding a method for producing a near-infrared emitting phosphor.
  • FIG. 12A is an explanatory diagram of an embodiment regarding an integrated structure of a light source section and a measurement section.
  • FIG. 12B is an explanatory diagram of another embodiment of an integrated structure of a light source section and a measurement section.
  • FIG. 13 is an explanatory diagram of the cause of optical noise generation in this embodiment, seen from a different perspective.
  • FIG. 14A is an explanatory diagram of an example of an optical noise reduction method.
  • FIG. 14B is an explanatory diagram of another embodiment of the optical noise reduction method.
  • FIG. 14C is an explanatory diagram of an application example regarding the optical noise reduction method.
  • FIG. 14D shows the optical noise reduction effect when using the application described in FIG. 14C.
  • FIG. 15A is a diagram illustrating a comparison of characteristics between a single mode optical fiber and a multimode optical fiber.
  • FIG. 15B is an explanatory diagram of types and characteristics of multimode optical fibers.
  • FIG. 15C is an explanatory diagram of the mode of light passing through the core region of the optical fiber.
  • FIG. 16A is an explanatory diagram of intensity gravity center shift generation using mode addition within an optical fiber.
  • FIG. 16B is an explanatory diagram of an optical noise reduction method using intensity gravity center shift.
  • FIG. 16C is an explanatory diagram of the relationship between the optical characteristic conversion element and optical noise reduction.
  • FIG. 17A is an explanatory diagram of the relationship between the incident angle with respect to the perpendicular to the entrance surface of a predetermined optical member and the optical noise reduction effect.
  • FIG. 17B is an explanatory diagram of the relationship between the number of angular divisions of the optical characteristic conversion element and the optical noise reduction effect.
  • FIG. 18A is an explanatory diagram of an example regarding the optical arrangement within the light source section.
  • FIG. 18B is an explanatory diagram of an embodiment mainly relating to the arrangement of electronic circuits within the light source section.
  • FIG. 19A is a data format explanatory diagram showing a time-varying light emission pattern.
  • FIG. 19B is an explanatory diagram regarding the communication control sequence between the host and the light source unit.
  • FIG. 19C is an explanatory diagram of an example of a control signal format for controlling light emission start timing.
  • FIG. 20A is an explanatory diagram showing information extraction and flow in this embodiment.
  • FIG. 20B is a classification diagram showing information contents extracted in this embodiment.
  • FIG. 20C shows a method for removing disturbance noise for each measurement location/content within the measurement object.
  • FIG. 21A shows a basic data processing method in this embodiment for spectral characteristics and image signals that change over time.
  • FIG. 21B shows another embodiment regarding a data processing method for spectral characteristics and image signals that change over time.
  • FIG. 21C shows an application example regarding a data processing method of spectral characteristics and image signals using exposure by pulsed light emission.
  • FIG. 22 is a diagram illustrating the characteristics of the charge accumulation type signal receiving section.
  • FIG. 23A is an explanatory diagram of an example of signal processing (data processing) leading to the generation of a reference signal after DC component removal.
  • FIG. 23B is an explanatory diagram of an example of the second information extraction method for each wavelength or each pixel.
  • FIG. 24A is an explanatory diagram of an experimental optical system used in a signal processing experiment using a charge accumulation type signal receiving section.
  • FIG. 24B is an explanatory diagram of the spectral characteristics of the irradiation light and the detection light for the measurement target.
  • FIG. 25A shows the detected light amount temporal change characteristics for each different wavelength light.
  • FIG. 25B shows the detected light amount time change characteristics for each different wavelength light.
  • FIG. 26A shows an enlarged view inside the image sensor used in this embodiment.
  • FIG. 26B is a partial explanatory diagram of the internal drive circuit of the image sensor used in this embodiment.
  • FIG. 26C is an explanatory diagram of the operation timing in the drive circuit described in FIG. 26B.
  • FIG. 27A is an explanatory diagram of an embodiment in which a light source section and a measurement section are integrated.
  • FIG. 27B is an explanatory diagram of the internal structure of the optical device when a 3D color image sensor is used in the measurement section.
  • FIG. 28A is an explanatory diagram of a 3D color image (video) collection procedure.
  • FIG. 28B is an explanatory diagram of a reflected light pattern imaging method using light source wavelength light over the entire measurement distance range.
  • FIG. 28C is an explanatory diagram of a reflected light pattern imaging method using light source wavelength light for each measurement distance range.
  • FIG. 28D is a timing explanatory diagram of light source unit light emission and measurement unit exposure during detailed distance measurement.
  • FIG. 28E is an explanatory diagram of a distance measurement method using a combination of multiple pixels.
  • FIG. 28F shows a method of combining light emission and exposure to reduce the effects of speckle noise.
  • FIG. 28G is an explanatory diagram of a signal detection state within the measurement unit that monitors the amount of speckle noise.
  • FIG. 28H is an explanatory diagram of a signal detection state within the measurement unit that reduces the influence of speckle noise.
  • FIG. 29A is a structural explanatory diagram of a linear variable bandpass filter.
  • FIG. 29B is an explanatory diagram of another embodiment in which a light source section and an image sensor are combined.
  • FIG. 30A is an explanatory diagram of the relationship between input devices and output devices for a predetermined service providing domain in cyberspace.
  • FIG. 30B explains an example of the form of an input/output device to a predetermined service providing domain in cyberspace.
  • FIG. 30C illustrates another example input/output device configuration to a predetermined service providing domain in cyberspace.
  • FIG. 31A is an explanatory diagram of a method for adjusting the size of display content in real space and content displayed in cyberspace.
  • FIG. 31B shows an example of an authentication environment when participating in a predetermined service providing domain using biometry.
  • FIG. 31C is an explanatory diagram of a detailed authentication example using biometry.
  • FIG. 32 is an explanatory diagram of an embodiment of providing a service that allows time manipulation in cyberspace.
  • FIG. 33A is an explanatory diagram of four-dimensional coordinate axis directions as seen from the image sensor.
  • FIG. 33B is an explanatory diagram of a four-dimensional mesh structure that captures changes in the surface shape of a measurement target in this embodiment.
  • FIG. 33C is an explanatory diagram of the data structure of the four-dimensional mesh in this embodiment.
  • FIG. 34A is an explanatory diagram of a mapping concept used in a time-manipulable service provision domain in cyberspace.
  • FIG. 34B shows a procedure explanatory diagram for generating a four-dimensional image (video) to be displayed to the user using mapping technology.
  • FIG. 34C shows an example of a method for rendering and coordinate transformation of an individual four-dimensional mesh structure on a map.
  • FIG. 34D is an explanatory diagram of a method of adjusting color intensity according to lighting conditions within the map.
  • FIG. 34E is an explanatory diagram of a coordinate conversion method for three-dimensional display to the user.
  • The predetermined light generation method, optical characteristic changing unit, light source, predetermined light utilization method, detection method, imaging method, display method, optical measurement unit, optical device, service providing method, and service providing system of this embodiment are described below with reference to the drawings.
  • the light emitted from the light source section 2 is irradiated onto the object 20 via the light propagation path 6. Then, the light obtained from this object 20 enters the measuring section 8 via the light propagation path 6 again. Furthermore, the present invention is not limited to this, and the light emitted from the light source section 2 may be directly incident on the measurement section 8 via the light propagation path 6. In another embodiment, the light emitted from the light source section 2 may reach the display section 18 via the light propagation path 6, and predetermined information may be displayed on the display section 18.
  • the measuring device 12 in this embodiment includes a light source section 2, a measuring section 8, and an internal system control section 50. Further, an application field (various optical application fields) adaptation section 60 exists outside the measuring device 12. Each of the sections 62 to 76 in the application field (various optical application fields) adaptation section 60 can individually exchange information with the system control section 50.
  • the information obtained from the measurement results in the measurement section 8 and the sections 62 to 76 in the application field (various optical application fields) adaptation section 60 are used in conjunction to provide services to the user.
  • the service providing system 14 in this embodiment is composed of the measuring device 12, the application field (various optical application fields) adapting section 60, and the external system 16, and is configured to be able to provide all kinds of services to users.
  • the remaining portion of the service providing system 14 excluding the external system 16 functions independently as the optical device 10.
  • the medical/welfare-related test processing unit 70 operates, and the information obtained from the measurement unit 8 can be used to assist in remote diagnosis.
  • the blood sugar level obtained by analyzing the data collected from the measurement unit 8 can be used for diagnosis of diabetes.
  • the pulsation waveform obtained at the same time may be used to diagnose arrhythmia related to heart disease.
  • a processing example will be described when an arrhythmia is detected in a pulsation waveform while measuring a specific user's blood sugar level.
  • the pulsation waveform of a specific user is extracted within the signal processing section 42 and transferred to the characteristic analysis/analysis processing section 62 via the signal/information conversion section (including decoding/demodulation processing) 44 and the system internal control section 50.
  • this characteristic analysis/analysis processing section 62 analyzes the pulsation waveform and performs pattern matching with the standard waveform and the lesion waveform.
  • arrhythmia can be detected and defects within the heart can be predicted.
  • the arrhythmia detection result and intracardiac defect prediction information are then transmitted to the medical/welfare-related examination processing section 70 via the in-system control section 50.
  • the medical/welfare-related test processing unit 70 provides information (for example, sends an e-mail) to the family doctor in the external system 16 via the information transmission path 4. Additionally, if this specific user has concluded a prior contract with a predetermined insurance company (non-life insurance company), the medical/welfare-related inspection processing unit 70 automatically provides information to the insurance company as well. As a result, it is possible to provide a service that handles troublesome procedures, such as arranging hospitalization and reducing treatment costs, on behalf of the user without placing a burden on the user.
  • the treatment adaptation control/processing unit 68 may be activated, and the doctor may remotely monitor the progress of the treatment. In other words, by tracking temporal changes in blood sugar levels and pulsation waveforms, a distant doctor can understand the progress of the disease and the progress of healing.
  • the user's health information is not limited to the above, and the user's health information may be used to provide any other service.
  • the non-life insurance company may use the optical device 10 to check the health condition of the user who is the subject of the contract.
  • a service for setting the amount of damages based on the information obtained from the optical device 10 may also be provided.
  • the information obtained from the optical device 10 may be used, for example, to set the interest amount and loan conditions when the user makes a deposit at a bank or when the bank lends the user (a company managed by the user).
  • the information obtained from the optical device 10 may be used in educational settings. For example, a student's level of concentration and drowsiness can be predicted based on pulse rate, breathing rate, eye movements, and eyelid movements. The content of the lecture can be changed as appropriate based on the student's concentration level and drowsiness information obtained from the optical device 10. This will improve educational efficiency.
  • the optical device 10 may serve as an entrance to cyberspace by using the information transmission path 4 (in other words, the optical device 10 can be directly connected to cyberspace via the information transmission path 4).
  • as a service corresponding to the role of the entrance to cyberspace, authentication of an individual when entering cyberspace is performed.
  • facial recognition and body recognition can be performed using the optical device 10 or the service providing system 14 therein, or a visible light camera built into the measurement unit 8. Therefore, in this embodiment, by using the user-related information collected by the optical device 10, it is possible to provide a personal authentication service when entering cyberspace. The personal authentication service may also be provided using any method other than the above (for example, voiceprint detection).
  • a camera section of a personal computer or a mobile terminal may be used as an entrance to this cyberspace.
  • a wearable terminal that can be worn by a user may be used as the physical form of the display unit 18 in the optical device 10.
  • the wearable terminal that can be worn by the user may take any physical form such as glasses, a hat, a helmet, or a bag.
  • the measuring section 8 in the optical device described above may be placed in a region that directly contacts the user's skin.
  • the present embodiment is not limited to this, and the activity of individual neurons within the user's head can be monitored. Therefore, using the optical device 10, users can efficiently approach cyberspace.
  • the present invention is not limited to this, and various non-optical sensors 52 within the optical device 10 can be used to provide high user convenience in dealing with cyberspace.
  • a gyroscope or an acceleration sensor is arranged as the various non-optical sensors 52 to detect the movement of the user's head or a part of the user's body (for example, a hand or a finger).
  • when the user wearing a glasses-type wearable terminal (such as a VR or AR headset) turns his or her head, the display screen rotates accordingly.
  • when the user leans forward or backward, the viewpoint moves forward or backward on the display screen.
  • An example of service provision to the user in collaboration between the information providing unit 72, collected information storage 74, and signal processing unit 42 in the service providing system 14 is shown below.
  • 1. a menu screen is displayed on the VR or AR screen of a wearable terminal (such as glasses or a helmet) worn by the user, the wearable terminal being incorporated into the display unit 18;
  • 2. the gyroscope and acceleration sensor among the various non-optical sensors 52 detect the movement of the user's head and fingers (or hands);
  • 3. using the user's biological signals measured by the measurement unit 8, the signal processing unit 42 outputs information regarding the user's body;
  • 4. the system internal control unit 50 integrates and uses the above information, so that an identity in cyberspace corresponding to the user of the optical device 10 is formed. Any service can then be provided to this identity within cyberspace.
  • the present invention is not limited to this, and by operating a robot placed in real space via cyberspace, it is possible to provide further services to users.
  • tourism services can be provided to users by operating automatically walking robots installed in remote locations.
  • automatically walking robots installed in hospitals and other facilities can be operated to provide nursing care services from a distance.
  • conventionally, voice input and user finger (or hand) movements are required for identity manipulation in cyberspace and for robot manipulation in real space.
  • in this embodiment, such cumbersome vocalizations and finger movements are not required, and high-speed operation becomes possible. This greatly improves the convenience of the services provided in this embodiment.
  • the user's emotions and intentions may be estimated sequentially within the optical device 10. Images, videos, and sounds displayed when the user shows liking or interest are appropriately stored in the collected information storage section 74.
  • the external system 16 collects the information (images, video, audio) stored in the collected information storage section 74 via the information transmission path 4 at appropriate times. The information collected within the external system 16 may then be analyzed to extract products with strong purchase appeal, and that information may be provided for a fee to the sales company of the corresponding product.
  • Personal information management is extremely important in providing services in cyberspace in this embodiment. Therefore, among the services provided in this embodiment, the personal information management service itself is a very important service.
  • identification of an account ID.
  • when the user's health information and preference information obtained from the optical device 10 are linked to the account ID, they become personal information.
  • a personal information management agent may be resident within the collected information storage section 74 or within the characteristic analysis/analysis processing section 62.
  • Information such as "which facial muscles of the user are contracting," "the content ratio of each component in the blood," or "which nerve cells are active (nerve impulses)" is collected and parsed within the signal processing unit 42. Advanced judgments such as "estimation of user emotion," "estimation of user preference," and "estimation of user intention" based on this information are performed within the characteristic analysis/analysis processing unit 62.
  • the information obtained by the characteristic analysis/analysis processing unit 62 is stored in the collected information storage unit 74 as appropriate. Then, in response to a request from the external system 16, necessary information is transmitted to the external system 16 via the information transmission path 4.
  • the personal information management agent links transmittable external range information to each piece of information obtained by the characteristic analysis/analysis processing unit 62. Therefore, transmittable external range information is set for all information stored in the collected information storage section 74. Then, for each information transmission request from the external system 16, the personal information management agent determines whether transmission to the outside is possible. By performing the personal information management service within the optical device 10 in this manner, highly reliable personal information protection is possible.
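The gating behavior described above can be illustrated with a short sketch. The following Python fragment is a minimal illustration only, not the patent's implementation; the names (Record, PersonalInfoAgent, allowed_scope) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    """One piece of analyzed information with its permitted disclosure range."""
    content: str
    allowed_scope: set  # e.g. {"family_doctor", "insurer"}; set when stored

@dataclass
class PersonalInfoAgent:
    """Resident agent that gates every transmission request from the external system 16."""
    store: list = field(default_factory=list)  # stands in for the collected information storage 74

    def put(self, content: str, allowed_scope: set) -> None:
        # Transmittable-range information is linked to each record at storage time.
        self.store.append(Record(content, allowed_scope))

    def request(self, requester: str):
        # For each transmission request, release only records whose
        # allowed scope covers the requester.
        return [r.content for r in self.store if requester in r.allowed_scope]

agent = PersonalInfoAgent()
agent.put("pulse waveform: possible arrhythmia", {"family_doctor"})
agent.put("blood glucose trend", {"family_doctor", "insurer"})
print(agent.request("insurer"))  # -> ['blood glucose trend']
```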
  • this embodiment may be used as a tool to create artificial intelligence (that is, to train artificial intelligence).
  • as the artificial intelligence, for example, a "multi-input, multi-output parallel processing method with a learning function" used in deep learning technology or quantum computer technology may be used.
  • Examples of complex analysis/processing for which multiple-input, multiple-output parallel processing is suitable include image analysis and image understanding, language processing and language understanding, and advanced judgments adapted to complex situations.
  • Both the human serving as the measurement object 22 and the artificial intelligence are given the same task at the same time. The answer given by the human can then be regarded as the correct answer, and learning feedback can be applied to the artificial intelligence so that its output approaches that correct answer.
  • the artificial intelligence to be trained is installed in advance on the external system 16, and the correct answer given by the human can be communicated to the artificial intelligence from the optical device 10 (or the application field adaptation section 60) via the information transmission path 4.
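As a sketch of the learning feedback loop just described, where the human's answer is treated as the correct label and fed back to the model hosted on the external system 16, the following minimal Python example uses a perceptron-style update; the task, features, and update rule are illustrative assumptions, not the patent's method.

```python
import random

# Toy model on the "external system": weights of a linear classifier.
weights = [0.0, 0.0]

def model_answer(x):
    # The AI's answer to the shared task (binary decision).
    return 1 if weights[0] * x[0] + weights[1] * x[1] > 0 else 0

def feedback(x, human_answer, lr=0.1):
    # The human's answer is regarded as the correct label; nudge the
    # model's output toward it (perceptron-style learning feedback).
    error = human_answer - model_answer(x)
    weights[0] += lr * error * x[0]
    weights[1] += lr * error * x[1]

# Both the human and the AI receive the same tasks; the human's answers
# would arrive via the information transmission path 4 and drive the updates.
random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    human = 1 if x[0] + x[1] > 0 else 0  # stand-in for the human's answer
    feedback(x, human)
print(weights)
```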
  • service provision is not limited to the examples described above; any service may be provided in which the optical device 10 is connected to a cyberspace built on the external system 16 via the information transmission path 4.
  • FIG. 2 shows a list of (desirable) optical properties 102 required for each optical application field 100.
  • the required (desired) optical characteristic contents 102 surrounded by a rectangular frame can be met.
  • the optical application field 100 to which this embodiment is applied spans a wide range, as shown in FIG. 2. However, the present embodiment is not limited to these, and any application field 100 related to light in some way (including display using light) is applicable to this embodiment.
  • [Wave train of light containing light of different wavelengths] Depending on the wavelengths it contains, light is generally classified into multi-wavelength light (panchromatic light) and single-wavelength light (monochromatic light).
  • laser light is considered to be a single wavelength light.
  • strictly speaking, however, all light includes light of different wavelengths.
  • the wavelength width of emitted light is relatively narrow.
  • the wavelength width of light emitted from a semiconductor laser is relatively wide, often with a wavelength half-width of about 2 nm.
  • FIG. 3 shows the characteristics when wavelength light included within the range of wavelength width ⁇ is gathered.
  • FIG. 3(c) shows the progress of light with a center wavelength ⁇ 0
  • FIGS. 3(a) and 3(e) show the progress of light with wavelengths ⁇ 0 ⁇ /2 and ⁇ 0 + ⁇ /2.
  • FIGS. 3(b) and 3(d) represent the progress of light with wavelengths ⁇ 0 ⁇ /4 and ⁇ 0 + ⁇ /4.
  • FIG. 3(f) shows the result of combining all the wavelength lights.
  • the bundle of light formed by the collection of wavelength lights within the range of wavelength width Δλ is called a wave train.
  • Since the phases of all the wavelength components align at the center, the largest peak is formed at the center position in FIG. 3(f), where they are combined. As the position moves left or right from the center in FIG. 3, the phases of the different wavelength components shift relative to one another; at the left and right ends of FIG. 3, the phases of the different wavelength lights are completely misaligned. As this phase shift grows between FIGS. 3(a) to 3(e), the amplitude of the combined wave train decreases toward the periphery.
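The formation of this wave train can be reproduced numerically: summing plane waves whose wavelengths span Δλ yields a large central peak whose amplitude decays as the component phases drift apart. A minimal Python sketch (the values of λ0 and Δλ are arbitrary illustrative choices):

```python
import numpy as np

lam0, dlam = 1.3e-6, 10e-9               # center wavelength λ0 and width Δλ (illustrative)
z = np.linspace(-300e-6, 300e-6, 4001)   # position along the propagation axis [m]

# Superpose many wavelength components within λ0 ± Δλ/2 (FIGS. 3(a)-(e)).
lams = np.linspace(lam0 - dlam / 2, lam0 + dlam / 2, 201)
field = sum(np.cos(2 * np.pi * z / lam) for lam in lams) / len(lams)

# The combined amplitude peaks at z = 0, where all components are in phase,
# and decays once the components dephase (the envelope of FIG. 3(f)).
print(abs(field[len(z) // 2]))   # center amplitude (= 1)
print(np.abs(field[:200]).max()) # amplitude near the edge (much smaller)
```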
  • the above-mentioned wavelength width ⁇ means the wavelength width of the wavelength light included in the light emitted from the light source section 2.
  • the spectral intensity characteristics (light intensity characteristics for each wavelength) of the light emitted from the light source section 2 often have non-uniform spectral distribution characteristics within the above wavelength range.
  • the wavelength half-width (the wavelength range whose intensity is half of the maximum intensity in the wavelength direction) or the e⁻² width (the wavelength range whose intensity is e⁻² of the maximum intensity in the wavelength direction) may be called the wavelength width Δλ.
  • the present invention is not limited to this, and when the light source section 2 emits multi-wavelength light (panchromatic light), it may be considered that the wavelength resolution within the measurement section 8 corresponds to the wavelength width ⁇ .
  • one preamplifier 1150 (detection cell) simultaneously detects light of different wavelengths.
  • the wavelength range of wavelength light detected by one preamplifier 1150 corresponds to the wavelength width ⁇ .
  • the spectral sensitivity characteristics (wavelength dependence of signal detection sensitivity characteristics) detected by this one preamplifier 1150 (detection cell) may be non-uniform.
  • the wavelength half-width (the wavelength range whose detection sensitivity is half of the maximum detection sensitivity in the wavelength direction) or the e⁻² width (the wavelength range whose detection sensitivity is e⁻² of the maximum detection sensitivity in the wavelength direction) may be called the wavelength width Δλ.
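Both width definitions can be computed directly from a sampled spectrum (or spectral sensitivity curve). A small Python helper, where the Gaussian test spectrum is only an assumed example:

```python
import numpy as np

def spectral_width(wavelengths, intensity, level):
    """Width of the wavelength range whose intensity exceeds `level` * max."""
    mask = intensity >= level * intensity.max()
    inside = wavelengths[mask]
    return inside.max() - inside.min()

# Assumed example: Gaussian spectral distribution around 1.3 um.
wl = np.linspace(1.25e-6, 1.35e-6, 2001)
spec = np.exp(-((wl - 1.3e-6) / 5e-9) ** 2)

fwhm = spectral_width(wl, spec, 0.5)          # wavelength half-width
e2 = spectral_width(wl, spec, np.exp(-2.0))   # e^-2 width
print(f"FWHM = {fwhm * 1e9:.1f} nm, e^-2 width = {e2 * 1e9:.1f} nm")
```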
  • This wave train characteristic is expressed by the following formula. The individual waves in FIGS. 3(a) to 3(e) can be represented by plane waves whose frequencies ν range from ν0 − Δν/2 to ν0 + Δν/2. Integrating these plane waves over that frequency range gives the wave train characteristic
  E(t) ∝ ∫ from ν0−Δν/2 to ν0+Δν/2 of exp(i2πνt) dν = Δν · sinc(πΔν·t) · exp(i2πν0·t) … (Equation 1)
  • the sinc function obtained here corresponds to the envelope shown in FIG. 3(f). Further, as can be seen from Equation 1, the wavelength in FIG. 3(f) matches the wavelength in FIG. 3(c).
  • the physical distance ΔL0 from the center to the end of the wave train shown in FIG. 3(f) is called the coherence length. This coherence length is given by
  ΔL0 = λ0² / (2Δλ) … (Equation 2)
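Equation 2 can be evaluated numerically. The sketch below, which assumes the 7.5 nm wavelength width used in the experiments described later, also shows that ΔL0 grows in proportion to the square of the center wavelength λ0:

```python
def coherence_length(lam0, dlam):
    """Coherence length ΔL0 = λ0**2 / (2*Δλ) (Equation 2)."""
    return lam0 ** 2 / (2 * dlam)

dlam = 7.5e-9   # wavelength width Δλ (spectrometer resolution used later)
for lam0_um in (0.9, 1.1, 1.3, 1.5, 1.7):
    dl0 = coherence_length(lam0_um * 1e-6, dlam)
    print(f"λ0 = {lam0_um} µm -> ΔL0 ≈ {dl0 * 1e6:.0f} µm")
```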
  • FIG. 4(a) shows the optical system used in the experiment. This optical system is roughly composed of a light source section 2, a sample setting section 36, and a measuring section 8, and light is transmitted between each section via an optical fiber.
  • a tungsten halogen lamp HL was used as the light source in the light source section 2.
  • a concave mirror CM is placed on the opposite side of the optical path to increase the efficiency of using the light emitted from the halogen lamp HL. That is, this concave mirror CM reflects the light emitted toward the rear of the halogen lamp HL (toward the left side in the figure) and returns it to the inside of the halogen lamp HL. The light that has passed through the interior of the halogen lamp HL then travels toward the front of the halogen lamp HL (to the right in the figure).
  • a lens L1 with a focal length of 25.4 mm converts the light emitted from the halogen lamp HL into parallel light. Thereafter, a lens L2 with a focal length of 25.4 mm focuses this parallel light onto the entrance surface of the bundle fiber BF.
  • 320 optical fibers each having a core diameter of 230 ⁇ m and an NA of 0.22 are bundled.
  • An optical characteristic changing element 210 is placed in the middle of the parallel optical path between these two lenses L1 and L2.
  • the filament that emits light within the halogen lamp HL has a size of 2 mm x 4 mm. Therefore, the light emitted from the outermost side of the filament generates off-axis aberration (coma aberration) within the imaging optical system composed of the two lenses L1 and L2. In order to eliminate the influence of this coma aberration, an aperture A3 with a diameter of 3 mm was placed immediately after the halogen lamp HL.
  • a lens L3 with a focal length of 50 mm converts the light emitted from the bundle fiber BF into parallel light.
  • the parallel light beam then enters the sample TS.
  • an aperture A10 with a diameter of 10 mm was placed just in front of the sample to improve the accuracy and reproducibility of the obtained spectral characteristic data.
  • spectral characteristic data is obtained using the transmitted light of the sample.
  • a lens L4 with a focal length of 250 mm focuses the transmitted light of this sample onto the incident surface of the single-core fiber SF (core diameter 600 ⁇ m).
  • as the spectrometer SM, a near-infrared spectrometer (C11482GA, manufactured by Hamamatsu Photonics K.K.) with a wavelength resolution of 7.5 nm was used.
  • A structural example of the optical characteristic changing element 210 will be described later using FIG. 7B(b).
  • FIG. 5 shows the interference state between the straight light S 0 and the twice reflected light S 1 .
  • the position of the envelope characteristic S0 of the wave train traveling straight through the transparent glass plate was fixed at a reference position.
  • the function S1 representing the envelope characteristic of the twice-reflected wave train is described as a relative position change when the center wavelength λ0 is changed from 0.9 μm to 1.7 μm.
  • the horizontal axis of FIG. 5 represents position in units of the coherence length ΔL0 given by Equation 2. Since the mechanical average thickness d0 between the front and back surfaces of the transparent glass plate is a fixed value, the mechanical distance between the center position of the wave train S0 traveling straight through the transparent glass plate and the center position of the twice-reflected wave train S1 is kept constant.
  • this constant mechanical distance is converted into units of the coherence length ΔL0.
  • the coherence length ⁇ L 0 changes in proportion to the square of the center wavelength ⁇ 0 . Therefore, the relative position between the two wave trains in FIG. 5 appears to change depending on the value of the center wavelength ⁇ 0 .
  • the area of the overlapping region (shaded area in FIG. 5) ⁇ S0S1> between both wave trains corresponds to the size of the optical interference fringes generated between the two wave trains.
  • when the center wavelength λ0 is 1.7 μm or 1.5 μm, the two wave trains overlap and interference fringes are generated.
  • when the center wavelength λ0 is 1.1 μm or less, the overlap between the two wave trains becomes "0" and no interference fringes are generated (sketched numerically below).
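This wavelength dependence of the overlap can be sketched numerically: two wave train envelopes separated by a fixed mechanical distance are compared while the coherence length ΔL0 grows as λ0². The separation value, the smooth Gaussian stand-in for the sinc envelope, and the overlap metric below are illustrative assumptions, not the patent's exact parameters:

```python
import numpy as np

dlam = 7.5e-9    # wavelength width Δλ (spectrometer resolution)
sep = 277e-6     # assumed fixed mechanical separation of the two wave train centers
z = np.linspace(-2e-3, 2e-3, 40001)

for lam0_um in (0.9, 1.1, 1.3, 1.5, 1.7):
    dl0 = (lam0_um * 1e-6) ** 2 / (2 * dlam)       # coherence length ΔL0 (Equation 2)
    # Smooth Gaussian stand-in for the envelope; one train spans about 2*ΔL0.
    s0 = np.exp(-(z / (2 * dl0)) ** 2)             # straight-through train S0
    s1 = np.exp(-((z - sep) / (2 * dl0)) ** 2)     # twice-reflected train S1
    overlap = np.minimum(s0, s1).sum() / s0.sum()  # relative overlap <S0S1>
    print(f"λ0 = {lam0_um} µm: relative overlap ≈ {overlap:.3f}")
```

The printed overlap grows monotonically with λ0, reproducing the qualitative trend of FIG. 5: negligible overlap at short center wavelengths and substantial overlap toward 1.7 μm.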
  • Experimental results obtained using the optical system of FIG. 4 are shown in FIG. 6A.
  • the wavelength resolution of the spectrometer SM was 7.5 nm, and when the thickness d0 of the transparent glass plate was taken to be 138.40 μm, the measured data and the theoretical calculation results based on existing theory almost matched.
  • the region where the measured data and the theoretical calculation results almost match is limited to the wavelength side longer than 1.4 ⁇ m.
  • a discrepancy was observed between the theoretical calculation results based on existing theory and the measured data, as will be explained in the next section.
  • FIG. 6B(f) shows the wave train characteristics outside the end β (around the β region) calculated using the existing theory.
  • according to the existing theory shown in FIG. 6B(f), the amplitude of the wave train decreases and the wave train disappears as the distance from the central part α of the wave train increases.
  • under that theory, waves would propagate discontinuously in space, like pulsed light; continuously emitted light, such as that from a halogen lamp, cannot be explained by the existing theory.
  • FIG. 6B(g) shows a new physical model proposed to solve the problems of the existing theory.
  • if a paradigm shift is introduced in which the phase angle advancing direction reverses, the measurement data in FIG. 6A can be explained well.
  • a physical model in which a reversal of the phase angle advancing direction occurs at the end ⁇ of the wave train will be described below.
  • The characteristics of the envelope of the wave train near the end (near the β position in FIG. 6B) are obtained from Equations 1 and 3.
  • The upper equation on the right-hand side of Equation 8 represents the vicinity of the end of the "preceding (first occurring) wave train." The lower equation of Equation 8 represents the vicinity of the starting end of the "trailing (later occurring) wave train." A particularly notable point is that a "reversal of the phase angle advancing direction" occurs between the upper and lower equations of Equation 8. When this reversal occurs near the end of the preceding wave train (near the β position in FIG. 6B), phase synchronization between the component wavelength lights starts immediately thereafter. As a result, a trailing wave train is generated.
  • the above Equation 7 is a conditional theoretical expression for the occurrence of the above paradigm shift.
  • the "reversal of the phase angle advancing direction" occurs at a "place where the phase is unspecified" within the "preceding wave train."
  • this state between successive wave trains is referred to as "phase discontinuity" or "phase asynchrony."
  • the opposite state, in which the phase connects smoothly, is referred to as "phase continuity" or "phase synchronization."
  • the phase of the composite light obtained by overlapping the preceding wave train and the trailing wave train is always uniquely determined.
  • [D] the amount of phase shift between the preceding and trailing wave trains changes constantly. Therefore, when observed over a predetermined period of time (when viewed macroscopically in the time direction), it is impossible to observe interference between the preceding and trailing wave trains. This state is called "incoherence between the preceding wave train and the trailing wave train" when time is viewed macroscopically.
  • in this case, the intensity of the combined light of the preceding wave train and the trailing wave train becomes equal to the sum of the average intensity of the preceding wave train and the average intensity of the trailing wave train. In this embodiment, this situation is called "intensity addition."
  • FIG. 7A shows the basic arrangement of an optical system using the optical characteristic conversion element 210 in this embodiment. That is, the optical characteristic conversion element 210 divides the initial light 200 into a plurality of lights 202 to 206.
  • a first optical path 222 in the optical property conversion element 210 forms a first light 202 having a first optical property
  • a second optical path 224 forms a second light 204 having a second optical property.
  • a light synthesis location 220 then combines the first light 202 and the second light 204 to form a predetermined light 230.
  • at least a portion between the first optical path 222 and the second optical path 224 is arranged at different spatial locations.
  • the first optical characteristic of the first light 202 and the second optical characteristic of the second light 204 are different from each other.
  • This "difference in optical properties” may also refer to the “phase discontinuity (phase asynchrony)" between the two described in the previous section.
  • incoherence between at least a portion of the first light 202 and at least a portion of the second light 204 may constitute this difference in optical characteristics.
  • without being limited to this, a third light 206 having a third optical characteristic may be formed in a third optical path 226.
  • at least a portion of this third optical path 226 may be arranged at a different spatial location than the first optical path 222 and the second optical path 224.
  • wavefront division is performed on the initial light 200, and each of the lights 202 to 206 may be extracted individually.
  • This wavefront division refers to placing the regions 212 to 216 at different locations on the optical cross-section of the incident initial light 200 (a surface obtained by cutting the light flux constituted by the initial light 200 with a plane perpendicular to its traveling direction) or on the wavefront of the initial light 200, and extracting each of the lights 202 to 206 individually.
  • the optical property conversion element 210 used in this embodiment includes a first region 212 and a second region 214 that are different from each other.
  • the optical path length for each region 212 and 214 may also be changed.
  • “phase discontinuity (asynchrony)” (mutually different optical characteristics) occurs between the first light 202 and the second light 204.
  • the optical characteristic conversion element 210 has a spatial structure that facilitates combining the first light 202 and the second light 204 to form the predetermined light 230 at the light synthesis location 220.
  • the incident initial light 200 is wavefront-divided into each light 202 and 204.
  • a spatial structure may be adopted in which the first region 212 is arranged in a predetermined region within a cross section of the light beam obtained by cutting the light beam by a plane perpendicular to the traveling direction of the incident initial light 200.
  • a spatial structure is adopted in which a second region 214 is arranged in another region within the cross section of the light beam.
  • the method is not limited to this, and as another method, the initial light 200 may be subjected to amplitude division or intensity division.
  • a third region 216 may be further provided within the optical characteristic conversion element 210, and the third light 206 that has passed through the third region 216 may also be combined with the other lights 202 and 204 at the light synthesis location 220.
  • FIG. 7B(b) shows an example of the structure of this optical property conversion element 210.
  • the optical arrangement in FIG. 7B(a) matches that in FIG. 4(a) already described.
  • the optical characteristic conversion element 210 is placed in the parallel light path between the two lenses L1 and L2.
  • a pair of semicircular glass plates with thicknesses of 2 mm and 3 mm are bonded after being rotated 90 degrees relative to each other. Next, each pair is rotated 45 degrees and bonded together, thereby completing an optical characteristic changing element divided into eight parts in the angular direction.
  • the glass thicknesses of the eight divided regions differ from each other by 1 mm or more.
  • in region A, the thickness of the optical characteristic changing element is 0 mm. The light passing through region A therefore traverses a portion of the element where no glass exists.
  • in the subsequent regions, the glass thickness changes sequentially to 2 mm, 4 mm, 7 mm, 10 mm, 8 mm, 6 mm, and 3 mm.
  • each light beam passing through each area is called an "element". That is, different elements have different optical distances (optical path lengths) after passing through the optical property changing element 210.
  • this coherence length ⁇ L 0 is uniquely determined by the center wavelength ⁇ 0 and the wavelength width ⁇ .
  • This center wavelength ⁇ 0 is determined by the wavelength range of the light used (or the maximum wavelength of the light used) or the wavelength range of the detection light used in the measuring section 8 (or the maximum wavelength of the detection light).
  • the wavelength width ⁇ is determined by the wavelength width of the light used or the detection performance (for example, wavelength resolution) of the measuring section 8.
  • BK7 was used as the glass material of this optical characteristic changing element 210, and an antireflection coating was formed on the interfaces (front and back surfaces) through which light enters and exits.
  • when the refractive index of BK7 is denoted by n and the glass thickness of each region in FIG. 7B(b) is denoted by d, the optical path length added by each region relative to air can be calculated as d(n−1); the glass thickness between the regions in FIG. 7B(b) differs by 1 mm or more.
  • the difference in glass thickness between the above regions is set so that the resulting optical path length difference is larger than the coherence length ΔL0 (or than twice the coherence length, 2ΔL0), as checked in the sketch below.
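This design rule can be checked numerically. Assuming n ≈ 1.5 for BK7 in the near-infrared (an approximation), the smallest glass-thickness step of 1 mm gives an optical path difference d(n−1) ≈ 0.5 mm, larger than 2ΔL0 for the wavelengths and the 7.5 nm width used here:

```python
from itertools import combinations

n_bk7 = 1.5                            # assumed approximate BK7 index in the near-infrared
thick_mm = [0, 2, 4, 7, 10, 8, 6, 3]   # glass thicknesses of the eight regions (FIG. 7B(b))

lam0, dlam = 1.7e-6, 7.5e-9            # worst case: longest wavelength used
dl0 = lam0 ** 2 / (2 * dlam)           # coherence length ΔL0 (Equation 2)

# Smallest optical path length difference d(n-1) between any two regions.
min_opd = min(abs(a - b) for a, b in combinations(thick_mm, 2)) * 1e-3 * (n_bk7 - 1)
print(f"minimum optical path difference = {min_opd * 1e6:.0f} µm")
print(f"2*ΔL0 at λ0 = 1.7 µm = {2 * dl0 * 1e6:.0f} µm")
print("criterion satisfied:", min_opd > 2 * dl0)
```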
  • FIG. 7C shows the operating principle when the characteristics of the continuously generated wave trains described in the previous section are applied to the optical arrangement of FIG. 7A. As already explained there, the phase relation 402 between the successively generated initial wave trains 400 is considered to be unsynchronized.
  • the initial light 200, which enters in a form in which the initial wave trains 400 shown in FIG. 7C(a) are continuously generated, is wavefront-divided when it passes through the optical characteristic conversion element 210, which operates on and controls the phase synchronization characteristics.
  • FIG. 7C(b) shows the spatial propagation state (wave train state 406) of the first light 202 that has passed through the first region 212 in the optical characteristic conversion element 210 of FIG. 7A.
  • the amplitude in FIG. 7C(b) is smaller than the amplitude in FIG. 7C(a) because the first light 202 is extracted as a result of wavefront division of the initial light 200.
  • FIG. 7C(c) shows the spatial propagation state (wave train state 408) of the second light 204 extracted after passing through the second region 214.
  • although the amplitude in FIG. 7C(c) is almost the same as that in FIG. 7C(b), there is a difference in optical path length between the two. Therefore, in FIGS. 7C(b) and 7C(c), the center positions of the wave trains 406 and 408 are shifted.
  • phase asynchrony (phase discontinuity) 402 occurs between different wave trains 400, and the size of one wave train is given by 2ΔL0 (see FIG. 3(f)). Therefore, if the optical path length difference between the first region 212 and the second region 214 in the optical characteristic conversion element 210 of FIG. 7A is set to 2ΔL0 or more, phase asynchrony 402 occurs at the same position between FIGS. 7C(b) and 7C(c).
  • FIG. 7C(d) shows a situation where both wave trains 406, 408 are synthesized or combined 410 to form a predetermined light 230 at a light synthesis location 220.
  • when the phase-asynchronous 402 wave trains 406 and 408 are combined, averaging of the light intensity (an ensemble average effect of intensities) 420 occurs. Accordingly, an optical noise averaging effect (smoothing or reduction effect) arises between the optical noise generated within the first light 202 and the optical noise generated within the second light 204 (see the sketch below).
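The intensity averaging effect can be illustrated with a toy simulation: N mutually incoherent elements each carry an independent interference noise pattern, and adding their intensities reduces the relative noise roughly as 1/√N. The noise model below is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 2000          # spectral sample points

def relative_noise(n_elements):
    # Each element carries unit mean intensity plus an independent
    # interference-noise pattern; intensities add (no cross interference).
    patterns = 1.0 + 0.2 * rng.standard_normal((n_elements, n_points))
    total = patterns.mean(axis=0)
    return total.std() / total.mean()

for n in (1, 4, 16, 48):
    print(f"{n:2d} phase-asynchronous elements -> relative noise {relative_noise(n):.4f}")
```

With 48 elements, matching the 12 x 4 division described for FIG. 7D below, the relative noise in this toy model falls by about a factor of 7.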
  • FIG. 7D shows an application example of the embodiment of the structure of the optical property conversion element 210.
  • a 1 mm thick semicircular glass plate is bonded after a 30 degree rotation, and a 6 mm thick semicircular glass plate is further bonded. When viewed from the light traveling direction 348, the element is thereby divided into 12 parts at equal intervals in the angular direction.
  • a division method that divides the wavefront cross section of light in the angular direction with respect to the optical axis along the light traveling direction 348 is referred to as "angle division." Specifically, this means the region division (wavefront division) indicated by the broken lines in FIG. 7D(c).
  • the embodiment shown in FIG. 7D is divided into 12 parts at equal intervals in the angular direction (12 angular divisions). As a result, a difference in glass thickness of 1 mm or more occurs between each of the angularly divided regions.
  • FIG. 7D further has a structure in which cylindrical glasses of different diameters are stacked and bonded.
• The division method of dividing the wavefront cross section of light in the radial direction with respect to the optical axis along the light traveling direction 348 is referred to as "radial division." Specifically, this refers to the region division (wavefront division) indicated by the solid lines in FIG. 7D(c), where region division is performed for each circumference with a different radius.
• In FIG. 7D, the wavefront is divided into four parts in the radial direction (four radial divisions).
• The total number of divided areas is 48 (12 × 4).
• However, the number of divisions is not limited to this and may be set arbitrarily.
• The diameters of the radius division boundary lines are set so that the area of each radius division region is equal.
  • the diameter of each cylindrical glass may be set at arbitrary intervals without being limited thereto.
• The division method may be changed depending on the intensity characteristics of the light passing through (or reflected inside) the optical characteristic conversion element 210. For example, consider a case where the optical characteristic conversion element 210 is used with light having a non-uniform intensity distribution, such as a distribution in which the central intensity is high and the peripheral intensity is low. In this case, the diameters of the radius division boundary lines may be set so that the intensity of each element passing through each division region is approximately equal.
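• As an illustration of such an intensity-matched division (a sketch assuming a Gaussian beam profile, which is not specified in this description), the boundary radii that give each annular zone equal transmitted power can be obtained by inverting the encircled-power function:

```python
import math

# Minimal sketch: radial division boundaries giving equal power per zone
# for an assumed Gaussian beam of 1/e^2 radius w.
w = 1.0          # beam radius [arbitrary units], assumed
n_zones = 4      # number of radial divisions, as in FIG. 7D

# Encircled power of a Gaussian beam: P(r) = 1 - exp(-2 r^2 / w^2).
# Solving P(r_k) = k / n_zones gives the k-th boundary radius.
boundaries = [
    w * math.sqrt(-0.5 * math.log(1 - k / n_zones))
    for k in range(1, n_zones)   # outermost zone extends to the beam edge
]
print(["%.3f" % r for r in boundaries])
```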
  • FIG. 7E shows another embodiment of the structure of the optical property conversion element 210.
  • the division method is changed according to the intensity distribution characteristics of the light using the optical characteristic conversion element 210.
• In FIG. 7D, the angle is divided at equal intervals. However, if the intensity distribution of the light is non-uniform with respect to the angle around the optical axis, the amount of light varies between the elements passing through the divided regions. An example in which the intensity distribution of the light is non-uniform with respect to angle is shown in FIG. 7E(a).
• The cross section 510 of the laser light emitted from the light emitting location 502 of the semiconductor laser device 500 generally has an elliptical shape.
• Therefore, the division angle interval between the boundary lines in the optical characteristic conversion element 210 shown in FIG. 7E(b) is narrowed in the long axis direction of the laser beam cross section 510.
• In the short axis direction, the division angle interval between the boundary lines is widened.
  • FIG. 8A shows experimental results regarding the optical noise reduction effect when using the optical characteristic conversion element 210.
• The optical system shown in FIG. 7B(a) was used in the experiment, and a diffuser plate with an average surface roughness Ra of 2.08 µm was arranged as the sample TS.
  • Optical noise is generated from the minute irregularities on the surface of the diffuser plate.
• The spectral intensity characteristics (measurement wavelength dependence) of the relative intensity ([straight-through light intensity with the diffuser plate in place] ÷ [straight-through light intensity before placing the diffuser plate]) obtained from the spectrometer SM contain the above-mentioned optical noise.
• The vertical axis in FIG. 8 represents the standard deviation of the variation in spectral intensity caused by the optical noise generated in the diffuser plate (the intensity difference from the average spectral intensity in the wavelength direction), normalized by that average value ([amount of variation] ÷ [average value]).
  • the horizontal axis in FIG. 8 represents the number of optical path divisions (the number of area divisions) within the optical characteristic conversion element 210.
• As the number of divisions increases, the amount of optical noise (standard deviation value) clearly decreases.
• Each element in the optical characteristic conversion element 210 passes through the diffuser plate and generates optical noise.
• The optical path to reach the spectrometer SM varies slightly from element to element. Therefore, the optical noise characteristics appearing within each element differ slightly from each other.
• When the intensities of all elements having different optical noise characteristics are added, the light intensity is averaged 420 (FIG. 7C(d)) across the different optical noise characteristics. As a result, the optical noise characteristics are smoothed (the amount of optical noise is reduced by averaging).
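• The averaging argument can be checked with a toy simulation (illustrative only; it does not reproduce the actual experiment of FIG. 8): summing N intensity patterns carrying independent noise reduces the relative fluctuation roughly as 1/√N.

```python
import random
import statistics

# Minimal sketch: averaging N independent optical-noise realizations
# shrinks the relative intensity fluctuation roughly as 1/sqrt(N).
random.seed(0)
wavepoints = 2000   # number of sampled wavelengths, assumed

def relative_noise(n_elements: int) -> float:
    """Std/mean of the summed intensity over all wavelength samples."""
    summed = [
        sum(1.0 + random.gauss(0.0, 0.2) for _ in range(n_elements))
        for _ in range(wavepoints)
    ]
    return statistics.pstdev(summed) / statistics.fmean(summed)

for n in (1, 4, 16, 48):
    print(f"N = {n:3d}  relative noise = {relative_noise(n):.4f}")
```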
• FIG. 8(b) shows the effect of reducing the amount of optical noise when a diffuser plate with an average surface roughness Ra of 1.51 µm is inserted into the light source section 2 (between the optical characteristic conversion element 210 and the condenser lens L2) in FIG. 7B(a). It was confirmed that optical noise was further reduced by the synergistic effect of the optical characteristic conversion element 210 and the diffuser plate, which corresponds to the wavefront phase characteristic conversion member.
  • an incident surface side perpendicular line 96 that is perpendicular to the incident surface 92 of the predetermined optical member is defined.
• The first light 202 is incident at an angle θ (θ ≠ 0) inclined with respect to the normal line 96 on the incident surface side.
• The second light 204 is incident at an angle different from that of the first light 202. Therefore, before entering the predetermined optical member 90, the first light 202 and the second light 204 travel in different directions.
• The second light that differs from the first light 202 may also be defined as light 208 that travels through a different optical path from the first light 202, rather than as light 204 that travels in a direction different from the first light 202.
• In that case, the traveling direction of the second light 208 may be parallel to that of the first light 202 (that is, the second light 208 may be incident at the same angle as the first light 202 with respect to the normal 96 to the entrance surface of the predetermined optical member). Alternatively, the second light 208 may travel in a direction different from the first light 202 while also passing through a different optical path.
• After passing through the predetermined optical member 90, the first light 202 exits through the exit surface 94 of the predetermined optical member. At this point, an exit surface side perpendicular 98 that is perpendicular to the exit surface 94 of the predetermined optical member is defined.
• The traveling direction of the first light 202 after passing through the exit surface 94 of the predetermined optical member may have a predetermined inclination angle with respect to the exit surface side perpendicular 98, or may be parallel to it.
• However, the traveling direction of the second light 204 after passing through the exit surface 94 of the predetermined optical member must be inclined (non-parallel) with respect to the traveling direction of the first light 202.
• Alternatively, the optical path of the second light 208 after passing through the exit surface 94 of the predetermined optical member must be different from the optical path of the first light 202 after passing through.
• In this case, the traveling direction of the second light 208 after passing through the exit surface 94 of the predetermined optical member may be parallel to the traveling direction of the first light 202 after passing through.
• The important points here are the following three:
• 1. The first light 202 is incident on the entrance surface 92 of the predetermined optical member at an angle inclined with respect to the entrance surface side perpendicular 96 of the predetermined optical member 90.
• 2. The second light 204 travels in a direction non-parallel to the first light 202, or the second light 208 travels on an optical path different from that of the first light 202.
• 3. The first light 202 and the second light 204 or 208, after passing through the predetermined optical member 90, are combined (or mixed) at the light synthesis location 220 to form the predetermined light 230.
  • the basic concept explained in FIG. 9A is summarized below.
  • An entrance surface side perpendicular 96 perpendicular to the entrance surface 92 of this predetermined optical member 90 and an exit surface side perpendicular 98 perpendicular to the exit surface 94 of this predetermined optical member 90 are defined,
• the traveling direction of the first light 202 is inclined with respect to at least one of the entrance surface side perpendicular 96 and the exit surface side perpendicular 98,
• the traveling direction of the second light 204 is tilted with respect to the traveling direction of the first light 202, or the optical path of the second light 208 within the predetermined optical member 90 is made different from the optical path of the first light 202, thereby changing the optical characteristics between the first light 202 and the second light 204 or 208.
  • an example of an embodiment of the predetermined optical member 90 includes the optical characteristic conversion element 210 in FIG. 7A.
• The main function of the optical characteristic conversion element 210 is to create an optical path length difference between the first light 202 and the second light 204 (including the third light 206). If this optical path length difference is set to the coherence length ΔL0 (or twice it) or more, the phase continuity between the wave trains of the two can be interrupted (the phases can be desynchronized 402).
• The predetermined optical member 90 described in this embodiment has a function that encompasses that of the optical characteristic conversion element 210.
• This predetermined optical member 90 also provides mutually different optical characteristics between the first light 202 and the second light 204 (including the third light 206).
  • phase desynchronization (phase discontinuity) 402 is cited as an example of the "mutually different optical characteristics”.
  • a mode change (within the waveguide element 110), which will be described later with reference to FIG. 9B, may be generated.
• In the above, phase desynchronization (phase discontinuity) and mode change were cited as the different optical characteristics provided between the first light 202 and the second light 204.
• However, the present invention is not limited thereto, and the predetermined optical member 90 may provide any difference in optical characteristics between the first light 202 and the second light 204.
  • the service providing method 80 corresponds to an overall concept.
  • a service providing system 14 is defined as a means for realizing this service providing method 80.
  • the service providing system 14 also includes an optical device 10.
  • the predetermined light utilization method 82 is positioned as part of the operation of this optical device 10.
• The optical measurement unit 84 constitutes a part of the optical device 10. However, the optical measurement unit 84 is not limited to use in the predetermined light utilization method 82.
  • the light source section 2 is included within this optical measurement section 84.
  • This light source section 2 is comprised of a light emitting section 470, a predetermined optical member 90, and a light synthesis location 220.
• Using the predetermined optical member 90 and the light synthesis location 220, the first light 202 and the second light 204 or 208 emitted from the light emitting section 470 are manipulated to form the predetermined light 230.
  • FIG. 9B shows a specific example of an embodiment in which the predetermined optical member 90 described in FIG. 9A and the light synthesis location 220 are combined.
  • the core region 112 within the waveguide element 110 functions as both the light synthesis location 220 and the predetermined optical member 90.
  • a specific form of this waveguide element 110 may be an optical fiber, an optical waveguide, or a light guide.
  • the entrance surface of the waveguide element 110 corresponds to the entrance surface 92 of the predetermined optical member in FIG. 9A.
  • the exit surface of the waveguide element 110 corresponds to the exit surface 94 of the predetermined optical member in FIG. 9A.
• There are two types of shapes for the entrance surface of the optical fiber 110: a structure cut perpendicular to the optical axis and a structure cut at a predetermined angle.
  • the entrance surface side perpendicular line 96 that is perpendicular to the entrance surface 92 of the predetermined optical member becomes parallel to the optical axis direction within the optical fiber 110.
  • the exit surface side perpendicular 98 that is perpendicular to the exit surface 94 of the predetermined optical member is also parallel to the optical axis direction within the optical fiber 110 .
• The first light 202 enters the core region 112 in the waveguide element (optical fiber/optical waveguide/light guide) 110 with its incident angle θ with respect to the normal line 96 on the incident surface side set to a predetermined value or more.
  • the first light 202 is reflected at the interface between the core region 112 and the cladding region 114. Since the length of the actual waveguide element (optical fiber/optical waveguide/light guide) 110 is sufficiently long, reflection at this interface is repeated many times. As a result, the first light 202 forms a higher-order mode other than the fundamental mode (for example, a TE2 mode described later in FIG. 16C(b)) within the core region.
  • the second light 204 is made to enter from a direction substantially parallel to the perpendicular line 96 on the side of the incident surface, and the second light 204 is made to enter the central portion of the core region 112. In this case, the second light 204 travels straight through approximately the center of the core region 112 . The second light 204 is reflected much fewer times at the interface between the core region 112 and the cladding region 114 than the first light 202. As a result, the second light 204 forms a fundamental mode (for example, the TE1 mode described later in FIG. 16C(a)) within the core region.
• If the optical arrangement is devised so that the first light 202 is incident on the entrance surface 92 at an angle inclined with respect to the entrance surface side perpendicular 96 of the predetermined optical member 90, while the second light 204 travels in a direction non-parallel to the first light 202 before entering the entrance surface 92 of the predetermined optical member, then a "difference in modes within the core region 112" arises between the first light 202 and the second light 204. This corresponds to mutually different (mutually changed) optical characteristics between the two.
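• For orientation, whether such higher-order modes can exist at all follows from the standard V-number of a step-index guide, and the largest usable incident angle follows from the numerical aperture. The sketch below uses assumed fiber parameters (not taken from this description):

```python
import math

# Minimal sketch with assumed values: mode capacity (V-number) and
# acceptance half-angle (from the NA) of a step-index guide.
wavelength = 1.3e-6   # [m], assumed
core_radius = 25e-6   # [m], assumed multimode core
n_core, n_clad = 1.46, 1.45   # assumed step-index pair

na = math.sqrt(n_core**2 - n_clad**2)            # numerical aperture
v = 2 * math.pi * core_radius * na / wavelength  # normalized frequency
n_modes = v**2 / 2                               # rough step-index mode count
theta_acc = math.degrees(math.asin(na))          # acceptance half-angle in air

print(f"NA = {na:.3f}, V = {v:.1f}, ~{n_modes:.0f} modes, "
      f"acceptance half-angle = {theta_acc:.1f} deg")
```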
• Speckle noise is one form of optical noise.
• In this case, the angle formed by the first light 202 emitted from the waveguide element (optical fiber/optical waveguide/light guide) and the normal line 98 on the exit surface side is also large.
• The emission pattern (far field pattern) of the first light 202 from the waveguide element (optical fiber/optical waveguide/light guide) is a "doughnut-shaped pattern": the light intensity decreases near the center of the optical axis and increases in regions away from the center.
• In contrast, the light intensity distribution of the second light 204 emitted from the waveguide element increases as the emission direction approaches the exit surface side perpendicular 98 (the light intensity emitted parallel to it is maximum).
• FIG. 9B thus shows a specific embodiment in which, before the second light 204 enters the entrance surface 92 of the predetermined optical member 90 in FIG. 9A, the traveling direction of the second light 204 is tilted with respect to the traveling direction of the first light 202 to change the optical characteristics between the two.
• FIG. 9C shows a specific embodiment in which the second light 208 passes through an optical path different from that of the first light 202 to change the optical characteristics of the two.
  • An example of an embodiment of the predetermined optical member 90 shown in FIG. 9C is a transparent parallel flat plate having a light reflecting surface 118 in part. An antireflection film is formed on the surface of this transparent parallel flat plate other than the light reflecting surface 118 (the incident surface 92 of the predetermined optical member).
  • the collimating lens 318 converts the diverging light emitted from the light emitting section 470 into parallel light.
• When a semiconductor laser element 500 is used as the light emitting section 470, the laser beam cross section 510 in the parallel light state takes an elliptical shape, as shown in FIG. 7E.
  • the minor axis direction 88 within this elliptical shape is parallel to the plane of the paper. Therefore, the long axis direction within this elliptical shape is perpendicular to the plane of the paper.
  • a predetermined optical member 90 made of a transparent parallel flat plate is arranged at an angle with respect to the traveling direction of the laser beam in the parallel light state.
• The predetermined optical member 90 is tilted along a plane that includes the direction of electric field vibration within the laser beam (a plane that includes the plane of the paper). Therefore, the inclination direction of the entrance surface 92 and the exit surface 94 of the predetermined optical member has a P-wave (parallel-polarization) relationship with respect to the electric field vibration direction within the laser beam. It is generally known that the light transmittance for P-waves is high at the entrance surface 92 and the exit surface 94.
• This has the effect of increasing the transmission efficiency of the light passing through the entrance surface 92 or the exit surface 94.
• The traveling direction of the parallel light after passing through the collimating lens 318 is inclined by an angle θ (θ ≠ 0) with respect to the perpendicular 96 on the incident surface side of the predetermined optical member.
  • the output surface 94 of the predetermined optical member 90 has a specific light reflectance.
  • the light that has passed through the exit surface 94 of this predetermined optical member is treated as the first light 202.
  • the light reflected by the output surface 94 is reflected again by the light reflection surface 118, and a part of this reflected light passes through the output surface 94 of the predetermined optical member.
  • the light passing through this exit surface 94 is treated as second light 208.
  • this second light 208 passes through a different optical path from that of the first light 202 described above.
• The optical path length of the second light 208 within the predetermined optical member 90 differs from the optical path length of the first light 202. If the optical path length difference between the two is set larger than the coherence length ΔL0 (or twice it) shown in Equation 2, the phase continuity between the first light 202 and the second light 208 is interrupted (the two become phase asynchronous 402). In other words, the optical characteristics between the first light 202 and the second light 208 change from the viewpoint of phase continuity (phase synchrony).
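• The required plate thickness can be estimated geometrically (a sketch with assumed values; Equation 2 itself is not restated here). For a tilted parallel plate, the optical path difference between the directly transmitted light and the internally double-reflected light is the standard etalon result 2·n·t·cos(θt):

```python
import math

# Minimal sketch (assumed values): OPD between the directly transmitted
# light (202) and the internally double-reflected light (208) in a tilted
# parallel plate: OPD = 2 * n * t * cos(theta_t).
n_plate = 1.5               # refractive index, assumed
theta_i = math.radians(30)  # tilt / incidence angle, assumed
dL0 = 0.85e-3               # coherence length [m], assumed example

theta_t = math.asin(math.sin(theta_i) / n_plate)      # Snell's law
t_min = 2 * dL0 / (2 * n_plate * math.cos(theta_t))   # require OPD >= 2*dL0

print(f"internal angle = {math.degrees(theta_t):.1f} deg, "
      f"minimum plate thickness = {t_min * 1e3:.2f} mm")
```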
  • the laser beam cross section 510 of each of the first light 202 and the second light 208 has an elliptical shape.
  • the first light 202, second light 208, and third light 206 immediately after passing through the output surface 94 are aligned in the short axis direction 88.
• As a result, an effect is produced in which the elliptical shape of the laser beam cross section 510 is corrected. That is, when the predetermined optical member 90 is tilted within a plane including the minor axis direction 88 of the laser beam cross section 510 emitted from the semiconductor laser element 500 (light emitting section 470), the elliptical shape of the laser beam cross section 510 is corrected.
• The first light 202 is incident from a direction at an inclination angle θ (θ ≠ 0) with respect to the perpendicular (entrance surface side perpendicular 96) to the entrance surface 92 of the predetermined optical member 90.
  • the predetermined optical member 90 changes the optical characteristics between the first light 202 and the second light 204 or 208 after passing through the predetermined optical member 90 .
  • the direction of incidence of the second light 204 is different from the direction of incidence of the first light 202.
  • the difference in optical characteristics between the first light 202 and the second light 204 corresponds to a "difference in light propagation mode within the core region 112."
  • the optical path of the second light 208 is different from the optical path of the first light 202.
  • the difference in optical characteristics between the first light 202 and the second light 208 corresponds to "phase asynchronization 402 (discontinuation of phase continuity) between the two".
• The optical characteristics that change between the first light 202 and the second light 204 or 208 after passing through the predetermined optical member 90 are not limited to those described above; any difference in optical characteristics or any change (discontinuity) in optical characteristics is acceptable.
  • Section 3.3 Other Examples of Embodiments of Optical Configuration
  • Other specific individual embodiments of the predetermined optical member 90 shown in FIG. 9A will be described in Section 3.3.
  • A) The incident surface side perpendicular 96 or the exit surface side perpendicular 98 of the predetermined optical member 90 is tilted with respect to the incident direction of the first light 202
  • B) The entrance surface 92 or the exit surface 94 of the predetermined optical member 90 has a discontinuous surface (microscopic step shape).
  • FIG. 9D(a) shows an example of a conventional optical system that corrects the elliptical shape of the laser beam cross section 510 from the semiconductor laser element 500.
  • the collimating lens or cylindrical lens 120 in FIG. 9D(a) converts the diverging light (divergent laser light) emitted from the light emitting section 470 (semiconductor laser element 500) into parallel light.
  • the equiphase front 128 of the light is flat.
  • the laser beam cross section 510 in this parallel light state often has an elliptical shape (see FIG. 7E(a)).
  • the short axis direction 88 of the elliptical shape coincides with the paper surface direction.
  • the wedge prism 130 stretches the parallel light in the minor axis direction, corrects the elliptical shape, and converts it into a substantially circular shape.
  • the traveling direction of the parallel light immediately after passing through the collimating lens or cylindrical lens 120 has an inclination with respect to the perpendicular line 96 on the incident surface side of the wedge prism 130. Therefore, the optical arrangement of FIG. 9D(a) satisfies the feature of (A) above. However, what is important is that the equiphase plane 128 in the parallel light after passing through the wedge prism 130 forms a uniform flat surface everywhere.
• To reduce optical noise, the equiphase surface 128 of the parallel light must be divided into multiple regions, and the optical path length difference between regions must be set to the coherence length ΔL0 (or twice it) or more. However, in the state shown in FIG. 9D(a), where the equiphase plane 128 of the parallel light after passing through the wedge prism 130 forms a uniform flat surface, the optical noise reduction effect described in Section 2.3 does not appear.
• In FIG. 9D(b), the exit surface 94 of the predetermined optical member 90 has minute steps, making it a discontinuous surface. Therefore, the equiphase front 128 of the light after passing through the exit surface 94 is broken up, and the optical noise reduction effect described in Section 2.3 can be achieved.
• In the above, the exit surface 94 of the predetermined optical member 90 was made a discontinuous surface. However, the present invention is not limited thereto; the entrance surface 92 of the predetermined optical member 90 may instead be the discontinuous surface. Making at least a portion of the entrance surface 92 or the exit surface 94 of the predetermined optical member 90 into a discontinuous surface with fine steps corresponds to feature (B) above. Thereby, the optical noise reduction effect described in Section 2.3 can be achieved.
• The relationship between the various specific individual embodiments described in Section 3.3, including FIG. 9D(b), and the explanation in Section 3.1 using FIG. 9A will now be described.
  • the light passing through the center of the collimating lens or cylindrical lens 120 is made to correspond to the first light 202.
  • the light that has passed through the peripheral portion of the collimating lens or the cylindrical lens 120 is made to correspond to the second light 204.
  • the first light 202 and the second light 204 have different optical paths because they pass through different locations within the collimating lens or cylindrical lens 120.
• The traveling direction of the first light 202 is inclined by an angle θ (θ ≠ 0) with respect to the entrance surface side perpendicular 96 that is orthogonal to the entrance surface 92 of the predetermined optical member 90.
  • the parallel light forms a flat equiphase surface 128.
  • the equiphase front 128 of the second light 204 reaches the entrance surface 92 of the predetermined optical member first.
  • the equiphase front 128 of the first light 202 reaches the entrance surface 92 of the predetermined optical member with a delay.
• By tilting the traveling direction of the first light 202 by the angle θ (θ ≠ 0) with respect to the entrance surface side perpendicular 96 of the predetermined optical member 90, the optical path length of each of the different optical paths up to the predetermined optical member 90 changes.
• This optical path length difference is set to be larger than the coherence length ΔL0 (or twice it). The condition for satisfying this is expressed in terms of the beam size D of the light incident on this predetermined optical member 90 (Equations 9 and 10).
  • the value of Lmax may be set to 10 m, preferably 1 m.
  • the beam size D of the light incident on this predetermined optical member 90 may be set by the effective luminous flux diameter. That is, the maximum diameter of light that can pass through the collimating lens or the cylindrical lens 120 is considered to be the effective luminous flux diameter.
• However, the invention is not limited thereto; the width (half-width or half-value diameter) of the region where the intensity is half the maximum central intensity in the intensity distribution of the light incident on the predetermined optical member 90 may instead be regarded as the beam size D of the light.
• Alternatively, the width (e⁻² width or e⁻² diameter) of the region where the intensity is e⁻² of the maximum central intensity in the intensity distribution of the light incident on the predetermined optical member 90 may be regarded as the beam size D of the light.
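• Since Equations 9 and 10 are referenced but not restated above, the sketch below only illustrates the underlying geometry under a simple assumption: across a beam of size D striking a surface tilted by θ, the arrival-path difference between the beam edges is about D·tan(θ), so the tilt needed to reach 2ΔL0 can be estimated as:

```python
import math

# Minimal sketch (geometric assumption, illustrative values): tilt angle
# for which the edge-to-edge path difference D*tan(theta) reaches 2*dL0.
D = 5e-3        # beam size [m], assumed
dL0 = 0.85e-3   # coherence length [m], assumed example

theta_min = math.atan(2 * dL0 / D)   # require D * tan(theta) >= 2 * dL0
print(f"minimum tilt angle ~ {math.degrees(theta_min):.1f} deg")
```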
• The collimating lens or cylindrical lens 120 converts the emitted light from the light emitting section 470 into parallel light, and the effective beam diameter of the collimating lens or cylindrical lens 120 is regarded as the beam size D of the light. Using this relationship, Equations 9 and 10 above can be rewritten accordingly.
• When a semiconductor laser element 500 is used as the light emitting section 470, the cross section 510 of the laser light emitted therefrom has an elliptical shape (see FIG. 7E(a)).
• The minor axis direction 88 of this ellipse may coincide with the electric field vibration direction of the laser beam. Therefore, when the predetermined optical member 90 is tilted within a plane including this minor axis direction 88 (the plane of the paper in FIG. 9D(b)), the inclination direction of the entrance surface 92 of the predetermined optical member 90 becomes the P-wave incidence direction of the laser beam. The interface (entrance surface 92) has the optical characteristic that its light transmittance is high when P-waves are incident.
• Therefore, tilting the predetermined optical member 90 within a plane including the minor axis direction 88 has the effect of increasing the utilization efficiency of the light transmitted through the predetermined optical member 90. At the same time, the elliptical shape of the laser beam cross section 510 can be corrected (made closer to a circular shape).
• When the output surface of the wedge prism 130 is flat, the equiphase front 128 of the output light is flat everywhere within the laser beam cross section 510, so optical noise cannot be reduced, as explained with reference to FIG. 9D(a).
• Therefore, in FIG. 9D(b), the output surface of the predetermined optical member 90 is made a discontinuous surface (fine steps are formed), and the equiphase plane 128 of the output light is wavefront-divided. That is, the laser beam cross section 510 is wavefront-divided for each plane within the steps.
  • the output surface 94 of the predetermined optical member 90 has a fine step structure.
  • the present invention is not limited thereto, and a fine step structure may be provided on the entrance surface 92 side of the predetermined optical member 90.
  • the predetermined optical member 90 in which the exit surface 94 or the entrance surface 92 has a fine step may be referred to as a multi-segment Fresnel prism 140.
  • the number of planes for each step in this laser beam cross section 510 corresponds to the number of wavefront divisions. Therefore, as the value of the distance P between adjacent steps decreases, the number of wavefront divisions increases.
  • a mechanical tool cutting technique can be used to form fine steps within the exit surface 94 or the entrance surface 92 of the predetermined optical member 90. Therefore, it is possible to relatively easily form a step with a small distance P between adjacent steps, and the number of wavefront divisions can be significantly increased.
  • FIG. 8 or FIG. 17B shows that the optical noise reduction effect improves as the number of wavefront divisions increases. In other words, providing a fine step structure on the entrance surface or exit surface of a predetermined optical member produces the effect of significantly reducing optical noise using a relatively easy method.
  • the light intensity distribution within the beam size D of the light incident on the predetermined optical member 90 is rarely uniform. In many cases, the light intensity is highest near the center, and decreases as it approaches the periphery. Therefore, if the distance P between adjacent steps is made uniform everywhere, the light intensity of the divided light extracted near the center will increase. Therefore, the value of the distance P between adjacent steps may be changed for each location in accordance with the intensity distribution of light incident on the predetermined optical member 90. Specifically, the value of the distance P between adjacent steps may be made small near the center, and the value of the distance P between adjacent steps may be gradually increased as it approaches the periphery. With such a setting, the light intensity of each divided light (element) extracted for each level difference is made uniform, and the optical noise reduction effect is improved.
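• The variable-pitch idea can be illustrated numerically (a sketch assuming a one-dimensional Gaussian profile, which is not specified in this description): placing the step boundaries at equal-power quantiles automatically makes the pitch P small near the center and larger toward the periphery.

```python
import math

# Minimal sketch: step boundaries giving equal power per strip for an
# assumed 1-D Gaussian profile; pitch grows from center to edge.
def erfinv(y: float) -> float:
    """Inverse error function via Newton iteration (sufficient here)."""
    x = 0.0
    for _ in range(60):
        x -= (math.erf(x) - y) / (2 / math.sqrt(math.pi) * math.exp(-x * x))
    return x

w = 1.0          # 1/e^2 half-width, assumed
n_strips = 8     # number of steps across the beam, assumed
sigma = w / 2.0  # Gaussian sigma equivalent to a 1/e^2 half-width w

# The profile's CDF is (1 + erf(x / (sigma*sqrt(2)))) / 2; inverting it at
# equal-probability points gives the boundary positions.
bounds = [sigma * math.sqrt(2) * erfinv(2 * k / n_strips - 1)
          for k in range(1, n_strips)]
pitches = [b2 - b1 for b1, b2 in zip(bounds, bounds[1:])]
print(f"pitch near center = {min(pitches):.3f}, near edge = {max(pitches):.3f}")
```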
• As in Equation 13, when the optical path length difference is ΔL0/2, complete phase asynchronization (phase discontinuity) 402 does not occur between the lights (elements) that have passed through adjacent steps (adjacent regions). However, once the optical path length difference reaches ΔL0/2 or more, the interference between the two is significantly reduced, and an optical noise reduction effect appears.
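• For a feel of the scale (assumed values; the step-to-path-difference relations below are simple geometry, not the patent's Equations 13 and 14 themselves): a transmissive glass step of height h contributes an optical path difference of (n−1)·h, while a reflective step contributes 2·h, so the smallest step reaching ΔL0/2 is:

```python
# Minimal sketch (assumed values): smallest step height giving an optical
# path difference of dL0/2 between light passing adjacent steps.
dL0 = 0.85e-3   # coherence length [m], assumed example
n_glass = 1.5   # refractive index, assumed

h_transmissive = (dL0 / 2) / (n_glass - 1)  # OPD = (n - 1) * h, glass step
h_reflective = (dL0 / 2) / 2                # OPD = 2 * h, mirror step

print(f"transmissive step >= {h_transmissive * 1e3:.2f} mm, "
      f"reflective step >= {h_reflective * 1e3:.2f} mm")
```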
• The value of Pmax described in the above formula is determined based on the mounting dimensions of the light source section 2, the optical measurement section 84, and the optical device 10. The value of Pmax may therefore be set to 10 m or 1 m, preferably 10 cm.
• When the predetermined optical member 90 (its entrance surface 92 or exit surface 94) is tilted with respect to the traveling direction of the first light 202 so as to satisfy any of Equations 9 to 14, an optical path length difference occurs and optical noise can be effectively reduced. Therefore, if the specific individual embodiments described in Section 3.3 or the basic optical arrangement described in Section 3.1 is used, the optical system within the light source section 2 can easily be miniaturized. Furthermore, since the number of optical components can be significantly reduced, the cost of the light source section 2 or of the optical device 10 as a whole can also be reduced.
• In this case, the exit surface side perpendicular 98 or the entrance surface side perpendicular 96 is defined as "the perpendicular to each plane formed between finely divided adjacent steps."
  • the planes of each step are inclined to each other. Therefore, in this case, the angle of the exit surface side perpendicular 98 or the entrance surface side perpendicular 96 differs for each step.
• The exit surface side perpendicular 98 for the first light 202 means the perpendicular to the fine plane through which the first light 202 passes.
• In this way, the angle between the first light 202 emitted from the predetermined optical member 90 and the exit surface side perpendicular 98 can be defined.
  • the perpendicular to each fine plane region within the step boundary may be defined as the entrance surface side perpendicular 96.
  • the areas between the steps are flat.
  • the space between the steps may be a curved surface, as in a Fresnel lens.
  • the curved surface may have discontinuity (a change in curvature depending on the location or a change depending on the location of the center point of the spherical surface) like a fly's eye lens described later.
• FIG. 9E explains the difference in wavefront (equiphase front) characteristics between light passing through a conventional spherical or aspherical lens and light passing through a Fresnel lens or fly's eye lens 142.
  • FIG. 9E(a) shows an example in which the imaging lens 144 is made of a spherical lens or an aspherical lens.
• In this case, the continuity of the equiphase plane 128 is maintained from the light emitting section 470 to the imaging position (α position). Since the phase is the same everywhere within the equiphase plane 128, the phase asynchrony 402 phenomenon does not occur.
  • FIG. 9E(b) shows an example in which the imaging optical system is configured with a Fresnel lens or a fly's eye lens 142.
• A fly's eye lens has a structure in which a plurality of spherical or aspherical lenses are arranged on a two-dimensional plane and bonded together. Therefore, discontinuity of the curved surface (curvature) occurs between adjacent spherical or aspherical lenses. Even when a Fresnel lens or fly's eye lens 142 is used, the light emitted from the light emitting section 470 is imaged at the α position.
  • the Fresnel lens or the fly's eye lens 142 has fine steps or discontinuous curved surfaces on the light entrance surface or the light exit surface. Therefore, division of the equal phase plane 128 occurs within the light emitted from the Fresnel lens or the fly's eye lens 142.
• Phase asynchronization (phase discontinuity) 402 thus occurs between the divided equiphase planes 128.
• A Fresnel lens or fly's eye lens 142 arranged at an angle is used as the predetermined optical member 90.
• The entrance surface side perpendicular 96 of the predetermined optical member 90 is inclined by an angle θ with respect to the traveling direction of the first light 202 just before it reaches the entrance surface 92 of the predetermined optical member 90.
  • the optical path length from the light emitting section 470 to the entrance surface 92 of the predetermined optical member 90 is changed for each optical path.
• The optical arrangement is made such that the difference in optical path length between the lights (first light 202, second light 204, and third light 206) passing through different optical paths exceeds the coherence length ΔL0 (or twice it).
• The light incident on the predetermined optical member 90 need not be converted into parallel light; as shown in FIG. 9F(b), diverging light may be made incident on the predetermined optical member 90. In that case, the collimating lens or cylindrical lens 120 becomes unnecessary and the number of parts is reduced, which has the effect of making the entire optical system smaller and cheaper.
  • the waveguide element (optical fiber/optical waveguide/light guide) 110 in FIG. 9F(a) corresponds to the light synthesis location 220 in FIG. 9A.
• Alternatively, as shown in FIG. 9F(b), the measurement object 22 may be used as the light synthesis location.
• Near-infrared light in the wavelength range of 0.8 µm to 2.5 µm has a deep penetration depth into living bodies. Since the inside of a living body has a complex structure (a complex, minute refractive index distribution), near-infrared light entering the living body undergoes light scattering. In this light scattering process, light synthesis takes place between the different elements that are in a mutually phase-asynchronous relationship 402.
  • FIG. 9G shows an example of an embodiment in which a light reflective optical member is used as the predetermined optical member 90 that is arranged at an angle with respect to the traveling direction of the first light 202 immediately before incidence.
  • a multi-segment light-reflecting element or a Fresnel type reflector 148 is used.
• The light reflection surface has a "fine step structure," a "finely divided curved surface structure," "discontinuity of the radius of curvature and inconsistency of the center of the curved surface," or the like; that is, the predetermined optical member 90 may include any structure that constitutes a finely non-uniform characteristic surface.
  • the distance P between adjacent steps can be defined.
• Alternatively, the distance between the boundaries where the uniform characteristic surface (including curved surfaces) changes may be defined as the distance between adjacent regions P. In either case, if the optical arrangement satisfies either of the conditions of Equation 13 or Equation 14, an optical noise reduction effect is produced for the light reflected from the predetermined optical member 90.
  • the value of P may be changed between different locations on the predetermined optical member 90 depending on the light intensity distribution of the light incident on the predetermined optical member 90.
  • the light intensity may take a maximum value near the center, and the light intensity may decrease at the periphery.
  • the distance between adjacent steps (distance between adjacent regions) P may be narrowed near the center of the incident light, and the distance P between adjacent steps (distance between adjacent regions) may be widened at the periphery of the incident light.
  • the light intensity between the light (elements) reflected in each region is brought closer to uniformity, and the optical noise reduction effect is improved.
• An Fθ lens or collimating lens 324 is used as an optical element that changes the divergence of the diverging light emitted from the light emitting section 470.
• The Fθ lens has the characteristic of condensing parallel light incident at an angle θ onto different positions on the focal plane. It is so named because it focuses light at a position shifted by F·θ, where F is the focal length of the Fθ lens.
• A collimating lens basically has the same characteristic, but comatic aberration increases as the value of F·θ (image height) increases. Therefore, in an optical system where the value of F·θ (image height) is small, a collimating lens can be used instead of an Fθ lens.
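• The distinction can be made concrete with a small comparison (assumed focal length; the h = F·tan(θ) law for an ordinary lens is the standard imaging relation):

```python
import math

# Minimal sketch (assumed values): focal-plane spot position for an
# f-theta lens (h = F * theta) versus an ordinary lens (h = F * tan(theta)).
F = 100.0  # focal length [mm], assumed
for deg in (5, 15, 30):
    th = math.radians(deg)
    print(f"theta = {deg:2d} deg: f-theta h = {F * th:6.2f} mm, "
          f"ordinary h = {F * math.tan(th):6.2f} mm")
```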
  • the fine plane within the boundary line of the step is defined as the entrance surface 92 of the predetermined optical member.
  • a perpendicular line perpendicular to the plane corresponds to the entrance surface side perpendicular line 96.
  • all the normals 96 on the incident surface side are in a parallel relationship within the predetermined optical member 90 (the incident surfaces 92 of all the finely divided predetermined optical members are in a parallel relationship with each other).
• An inclination angle θ is formed between the traveling direction of the first light 202 immediately before entering the predetermined optical member 90 and the entrance surface side perpendicular 96.
• When θ = 0, the light reflected by the entrance surface 92 of the predetermined optical member returns to the light emitting point in the light emitting section 470. Therefore, in this embodiment, by setting θ ≠ 0, the light reflected by the entrance surface 92 of the predetermined optical member is focused at a position different from the light emitting section 470.
  • the entrance surface (within the core region 112) of the waveguide element (optical fiber/optical waveguide/light guide) 110 is arranged at this light condensing position.
  • the light passes through a waveguide element (optical fiber/optical waveguide/light guide) 110, and different elements are combined. Therefore, this waveguide element (optical fiber/optical waveguide/light guide) 110 corresponds to the light synthesis location 220 in FIG. 9A.
  • a light incident surface 92 (light reflecting surface) of the predetermined optical member 90 has a fine step structure formed therein.
  • a plane (or curved surface) obtained by macroscopically averaging this fine step structure is defined as a macroscopic entrance surface 122 of the predetermined optical member.
  • the macroscopic entrance surface 122 of the predetermined optical member may be defined by an upper or lower envelope surface having a fine step structure.
• A perpendicular to the macroscopic entrance surface 122 of this predetermined optical member is defined as the macroscopic entrance surface perpendicular 126. The angle between the traveling direction of the first light 202 immediately before entering the predetermined optical member 90 and this macroscopic entrance surface perpendicular 126 can then be defined.
• The optical path length difference that occurs between the first light 202 and the second light 204 in this optical system arises within the optical path leading to the waveguide element (optical fiber/optical waveguide/light guide) 110.
• When a reflective predetermined optical member 90 (multi-segment light reflecting element (Fresnel type reflecting plate) 148) is used, an optical path length difference is created over both the outgoing path and the returning path before and after reflection, i.e., twice the one-way path. Therefore, when the reflective predetermined optical member 90 is used, an optical path length difference twice as large as the mechanical arrangement dimension can be obtained.
• Accordingly, using the reflective predetermined optical member 90 has the effect that the optical system can be made smaller.
  • the value of Lmax included in Equations 15 and 16 may be set to 10 m, preferably 1 m.
• The light cross-sectional size D immediately before entering the predetermined optical member 90 may be defined by the effective beam diameter of the light emitted from the light emitting section 470 that can pass through the Fθ lens or collimating lens 324.
• However, the present invention is not limited thereto; the width (half-width or half-value diameter) of the region where the intensity falls to half of the maximum in the intensity distribution of the light emitted from the light emitting section 470 may be regarded as the light cross-sectional size D.
• Alternatively, the width (e⁻² width or e⁻² diameter) of the region where the intensity of the light emitted from the light emitting section 470 decreases to e⁻² of the maximum intensity may be regarded as the light cross-sectional size D.
• Using this relationship, Equations 15 and 16 above can be rewritten, and Equation 17 or Equation 18 may be used. When this condition is satisfied, an optical noise reduction effect is produced in the light after passing through the multi-divided light reflecting element 148 (predetermined optical member 90).
  • the optical interference between the reflected lights (elements) of each adjacent step (adjacent area) is reduced, and a predetermined light 230 synthesized within the waveguide element (optical fiber/optical waveguide/light guide) 110 is generated.
  • the conditions for reducing optical noise will be explained.
• When the optical path length difference between the reflected lights (elements) of each adjacent step (adjacent region) becomes ΔL0/2 or more, the optical interference between the two is reduced. The value of the distance P between adjacent steps (distance between adjacent regions) is therefore set so as to satisfy this condition.
  • the value of Pmax described in the above formula may be set to 10 m or 1 m, preferably 10 cm, for the reasons mentioned above.
• When a semiconductor laser element 500 is used as the light emitting section 470, the emitted laser beam cross section 510 often takes an elliptical shape.
  • the reflective predetermined optical member 90 may be tilted in a direction along a plane (the paper plane of FIG. 9G) that includes the long axis direction 78 of this elliptical shape.
• An Fθ lens or a collimating lens 324 is used to make parallel light incident on the predetermined optical member 90.
• However, the present invention is not limited thereto, and an optical system that does not use the Fθ lens or the collimating lens 324 may be used.
• For example, a Fresnel concave mirror (multi-segmented concave mirror) or a concave Fresnel-shaped elliptic curved mirror (multi-segmented elliptic curved mirror) may be used. In that case, the diverging light emitted from the light emitting section 470 enters the direct-reflection type predetermined optical member 90 directly. As long as the diverging light incident on the direct-reflection type optical member 90 can be focused at one point (such as the entrance of the waveguide element 110) after reflection, the reflecting surface shape of the reflective optical member 90 is not limited to the above and may be set arbitrarily.
  • the direction in which the microscopic entrance surface 92 of the predetermined optical member is tilted is the same as the direction in which the macroscopic entrance surface 122 of the predetermined optical member is tilted. Accordingly, there is an effect of increasing the light utilization efficiency of the multi-division light reflecting element 148 (predetermined optical member 90).
• The condition for making the two inclination directions coincide can be expressed mathematically as requiring the above angle to be greater than or equal to zero.
  • FIG. 9H is an explanatory diagram of the effect regarding the above conditions.
• A condition deviating from the above is, expressed mathematically, that the above angle is less than zero.
• In that case, the stepped side surfaces are exposed when viewed from the direction in which the incident light travels.
  • the reflected light from the exposed side surface of the step becomes a stray light component 146, and the utilization efficiency of the light returning to the waveguide element (optical fiber/optical waveguide/light guide) 110 decreases.
  • FIG. 9I shows an example application embodiment of FIG. 9G. Therefore, basically, the content explained using FIG. 9G remains valid.
• When the semiconductor laser element 500 is used as the light emitting section 470, speckle noise is likely to occur.
• When the irradiation direction toward the measurement object 22 is slightly shifted for each element in a phase-asynchronous relationship (elements that do not interfere with each other), speckle noise is significantly reduced. Details are explained in Section 5.1 using FIG. 13.
• Let θ1 be the angle between the entrance surface side perpendicular 96-1, which is perpendicular to the entrance surface 92-1 of the predetermined optical member on which the first light 202 is incident, and the traveling direction of the first light 202 immediately before incidence.
• Let θ2 be the angle between the entrance surface side perpendicular 96-2, which is perpendicular to the entrance surface 92-2 of the predetermined optical member on which the second light 204 is incident, and the traveling direction of the second light 204 immediately before incidence.
• Let θ3 be the angle between the entrance surface side perpendicular 96-3, which is perpendicular to the entrance surface 92-3 of the predetermined optical member on which the third light 206 is incident, and the traveling direction of the third light 206 immediately before incidence.
• The traveling directions of the first light 202, the second light 204, and the third light 206 immediately before incidence are parallel to one another. Therefore, when the entrance surfaces 92-1, 92-2, and 92-3 are tilted, the relationship becomes, for example, θ1 < θ2 < θ3 or θ1 > θ2 > θ3.
• A transmissive or reflective diffuser plate 460 is placed on the focal plane of the Fθ lens or collimating lens 324, and each of the entrance surfaces 92-1, 92-2, and 92-3 of the predetermined optical member is tilted in accordance with the above conditions. The first light 202 is then reflected at the entrance surface 92-1 of the predetermined optical member and focused at the α1 position on the diffuser plate 460. Similarly, the second light 204 is reflected by the entrance surface 92-2 of the predetermined optical member and focused at the α2 position on the diffuser plate 460, and the third light 206 is reflected by the entrance surface 92-3 of the predetermined optical member and then focused at the α3 position on the diffuser plate 460.
• The optical system in FIG. 9I uses Koehler illumination to irradiate the measurement object 22. Each light (element), passing through the collimating lens or imaging lens 450 via the different condensing points α1, α2, and α3, therefore travels in a slightly different direction, and the combined light irradiates the measurement object 22. Further, by the action of the diffuser plate 460 disposed on the condensing surface, the respective lights (elements) are mixed with one another (further synthesis is promoted).
• A reflecting surface 254 that reflects part of the light is installed immediately in front of the diffuser plate 460 placed on the condensing surface, and the light reflected there is focused onto the photodetecting element 250. The amount of light detected by this photodetecting element 250 is fed back to control the amount of light emitted by the light emitting section 470.
  • a light reflecting type diffuser plate 460 may be used as the diffuser plate 460 disposed on the light condensing surface, and this light reflecting type diffuser plate 460 may be tilted. The amount of inclination may be adjusted to cause the light reflected by the light reflecting diffuser plate 460 to travel toward the front of the page.
  • a collimating lens or an imaging lens 450 is placed in the middle of the optical path traveling toward the front of the page. This optical arrangement has the effect of making it possible to significantly reduce the thickness of the light source section 2.
• Piezoelectric elements 526 and 528 may be attached to the tilted light-reflecting diffuser plate 460, as described later with reference to FIG. 27A, and the inclination angle of the light-reflecting diffuser plate 460 may be changed over time. Alternatively, instead of the light-reflecting diffuser plate 460, a light reflecting plate 520 connected to the piezoelectric elements 526 and 528 may be installed at the same location. Speckle noise may be further reduced by time-summing or time-averaging the data (or images) collected by the measurement unit 8 for each different tilt angle (detailed speckle noise reduction is described later in Section 8.2).
• As described with reference to FIG. 9G, the Fθ lens or collimating lens 324 may be omitted. In that case, if the entire entrance surfaces 92-1 to 92-3 of the reflective predetermined optical member 90 are given a macroscopically concave curved surface, the reflected light will be condensed.
  • a hybrid light source unit that combines a plurality of light emitting elements is used.
  • the usage of the light source section 2 is expanded, and a variety of services can be provided to users.
  • Safety is extremely important when providing a variety of services to users.
  • Semiconductor laser light and LED light can provide high light intensity, but high light emission intensity in the visible light range has the risk of damaging human eyes.
• FIG. 10A shows the absorption wavelengths of the biological system components 988 for near-infrared light in the wavelength range of 0.9 µm to 1.8 µm.
• The wavelength range from 1.35 µm to 1.80 µm is called the first overtone region and exhibits relatively large light absorption. Within this wavelength range, proteins, carbohydrates, and lipids absorb relatively large amounts of light, in that order from the short wavelength side. The wavelength range from 0.90 µm to 1.25 µm is called the second overtone region, where the amount of light absorption is relatively small.
  • the absorption wavelengths of each biological system component 988 within this wavelength range are arranged in the order of carbohydrates, proteins, and lipids from the short wavelength side.
• In the wavelength range above 1.35 µm, light absorption by water is extremely large.
• Light absorption by water is small in the wavelength range of 1.35 µm or less, which includes the second overtone region.
  • FIG. 10B shows the cross-sectional structure of the human eye.
  • Light entering from the outside world reaches the retina 150 via the crystalline lens 158 and the vitreous body 154.
  • This retinal portion 150 is most likely to be damaged by light irradiation.
• The crystalline lens 158 and the vitreous body 154 contain a large amount of water. Therefore, light in the wavelength range of 1.35 µm or more, where light absorption by water is extremely large, is absorbed within the crystalline lens 158 and the vitreous body 154 and does not reach the retina 150.
• The risk of damage to the eyes can be significantly reduced by setting the wavelength of the semiconductor laser light to 1.35 µm or more.
• near-infrared light with a wavelength exceeding 1.8 μm (particularly 2.4 μm) is too strongly absorbed by moisture. Therefore, when near-infrared light with a wavelength exceeding 1.8 μm (particularly 2.4 μm) is used, there is a risk that the light will be absorbed by water droplets or wetness adhering to the surface of the measurement object 22, resulting in a significant decrease in measurement accuracy.
• therefore, when the measuring section 8 uses reflected light from a measurement object 22 that includes a human being, and a laser emitting element (such as the semiconductor laser element 500) is arranged in the hybrid light source section, the center emission wavelength λ₀ of the laser emitting element is set to 1.35 μm or more and 2.4 μm or less (preferably 1.8 μm or less). This has the effect of ensuring high measurement accuracy while reducing the risk of damage to the eyes.
• the optical characteristics shown in FIG. 10A provide important information for measuring the inside of a living body. For example, consider an example in which the spectral characteristics of light transmitted through a living body, such as a fingertip about 1 cm thick, were investigated. Light with a wavelength of 1.35 μm or less (light scattered within the living body) passes through a living body approximately 1 cm thick, and the characteristics of the transmitted light can be detected as a signal. In comparison, light with a wavelength exceeding 1.35 μm is absorbed by the water in the living body, and almost no transmitted-light signal can be detected. This behavior closely matches the wavelength dependence of optical absorption by water shown in FIG. 10A. Therefore, in this embodiment, when measuring characteristics inside a living body, the wavelength of the light emitted from the hybrid light source section is set to 1.35 μm or less.
  • the measurement wavelength suitable for the second overtone region is 0.9 ⁇ m or more. Therefore, in this embodiment, when measuring the characteristics inside a living body, the wavelength of the light emitted from the hybrid light source section is set to 0.9 ⁇ m or more and 1.35 ⁇ m or less.
• laser light has high emission intensity but a narrow emission wavelength width, so it alone is not suitable for measuring spectral characteristics. Therefore, in this embodiment, light of a specific laser wavelength may instead be used inside a living body, for example, for measurements involving temporal changes such as pulsation and respiration.
  • the laser light wavelength that does not impede this spectral characteristic measurement may be set to 1.25 ⁇ m or more.
• the emission wavelength range of the laser light emitting element (such as the semiconductor laser element 500) installed in the hybrid light source section may therefore be set to 1.25 μm or more and 1.35 μm or less.
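• As a non-limiting illustration of the wavelength windows summarized above, the following sketch (in Python, with hypothetical helper names; the numeric limits are the ones stated in this section) checks a candidate laser wavelength against the eye-safety window (1.35 μm to 2.4 μm), the in-body measurement window (0.9 μm to 1.35 μm), and the laser band that does not impede spectral measurement (1.25 μm to 1.35 μm):

    # Wavelength windows taken from this section (units: micrometers).
    EYE_SAFE = (1.35, 2.4)     # absorbed by the crystalline lens/vitreous body
    IN_BODY = (0.90, 1.35)     # second overtone region; transmits ~1 cm of tissue
    LASER_BAND = (1.25, 1.35)  # laser band that does not impede spectroscopy

    def in_window(wavelength_um, window):
        low, high = window
        return low <= wavelength_um <= high

    for candidate in (1.06, 1.30, 1.55):
        print(candidate,
              "eye-safe:", in_window(candidate, EYE_SAFE),
              "in-body:", in_window(candidate, IN_BODY),
              "laser band:", in_window(candidate, LASER_BAND))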
• FIG. 11A shows an example of the internal structure of the hybrid light emitting section 470 in this embodiment.
  • measurement of spectral characteristics inside a living body requires a measurement wavelength range of 0.9 ⁇ m to 1.25 ⁇ m using the second overtone region. Therefore, it is desirable to emit light in a wide wavelength range of 0.9 ⁇ m to 1.3 ⁇ m, which is the measurement wavelength range plus a margin.
• this embodiment uses phosphors 162 and 164 that emit near-infrared light. A phosphor 162 whose center fluorescence wavelength is on the short wavelength side and a phosphor 164 whose center fluorescence wavelength is on the long wavelength side are stacked. As the sum of the emission characteristics of the individual near-infrared emitting phosphors 162 and 164, near-infrared fluorescence 178 covering the wavelength range of 0.9 μm to 1.3 μm is obtained.
  • This laminated structure is then fixed within a transparent sealing area 166 made of transparent resin. Furthermore, the outside of this transparent sealing area 166 is surrounded by a waterproof coating layer 168 containing a material with low moisture permeability such as polyethylene, transparent silicone, transparent Teflon (registered trademark), or the like. The action of this waterproof coating layer 168 prevents moisture contained in the outside world from entering into these phosphors 162 and 164.
• within this hybrid light emitting section 470, two light emitting sources 160 and 170 are arranged, and a common electrode 174 is shared for power feeding.
  • Emitted light from an LED light source as a first light source 160 excites near-infrared light emitting phosphors 162 and 164. Therefore, the emission wavelength of this excitation LED light source must be shorter than the fluorescence wavelength emitted in the range of 0.9 ⁇ m to 1.3 ⁇ m.
  • An LED light source having an appropriate emission wavelength within the range of 600 nm to 900 nm is selected in accordance with the characteristics of the near-infrared light emitting phosphors 162 and 164.
  • a semiconductor laser light source (semiconductor laser element 500) may be used as the second light emitting source 170.
  • the second light emitting source 170 may have a longer emission wavelength than the fluorescence wavelength (emission wavelength) of the near-infrared light emitting phosphors 162 and 164.
• the emitted light 176 from the second light emitting source 170 passes through the near-infrared light emitting phosphors 162 and 164 without affecting them, and can be effectively used outside the hybrid light emitting section 470.
• two photodetectors may be placed in the hybrid light emitting section 470 to monitor the respective amounts of light emitted from the first and second light sources 160 and 170. By applying feedback to the individual light emission amounts using these photodetectors, the light emission amounts from the first and second light sources 160 and 170 can be stabilized.
• the phosphor material contains atoms or ions belonging to rare earth elements or transition elements.
• when the phosphor material contained atoms or ions belonging to transition elements, the maximum emission wavelength appeared within the wavelength range of 1030 nm to 1350 nm.
• when the phosphor material contained atoms or ions belonging to rare earth elements, the maximum emission wavelength appeared in a wavelength band around 1 μm.
• when trivalent neodymium Nd³⁺ was used, maximum emission wavelengths appeared in the wavelength bands of 0.9 μm, 1.06 μm, and 1.3 μm, respectively.
• when trivalent samarium Sm³⁺ was used, fluorescence emission was observed over a wide wavelength band of 0.85 μm to 1.2 μm.
• trivalent erbium Er³⁺ or trivalent praseodymium Pr³⁺ may also be used as atoms or ions belonging to rare earth elements.
  • the present embodiment is not limited thereto, and atoms or ions belonging to any rare earth element or transition element may be used for the phosphor material.
  • FIG. 11B shows the detailed structure inside the near-infrared emitting phosphors 162 and 164 in this embodiment.
• within the binder region 180, fluorescent substances 182 to 186 having different particle sizes (a predetermined particle size distribution) are dispersed.
• as the material forming the binder region 180, a glass material such as silica glass SiO₂, bismuth oxide Bi₂O₃, or antimony oxide Sb₂O₃ may be used.
• the material forming the binder region 180 is not limited thereto, and an epoxy resin, an acrylic resin, or a silicone resin may be used.
  • the fluorescent material 182 contains any of the atoms or ions A belonging to the rare earth elements or transition elements described above.
• the fluorescent substance 184 contains an atom or an ion B belonging to a rare earth element or a transition element different from A.
  • the fluorescent substance 186 contains an atom or an ion C belonging to a rare earth element or a transition element different from the above A and B.
• the different atoms or ions A, B, and C have different central emission wavelengths when emitting fluorescence. Therefore, when the fluorescent substances 182 to 186 containing the different atoms or ions A, B, and C are mixed together, the near-infrared emitting phosphors 162 and 164 as a whole emit fluorescence over a wide wavelength range.
  • the center wavelength of the fluorescence emission for each type of ion explained above corresponds to the fluorescence emission that occurs inside a relatively large lump.
• when the particles become small, the energy levels of the electron orbitals change slightly, causing a shift in the central emission wavelength.
• when the fluorescent substances 182 to 186 are formed into fine particles with a particle size of about 3 μm to 10 μm, interaction occurs between lattice vibrations (interatomic vibrations) within the fine particles and the electron orbital levels. Therefore, when the particle size of the fluorescent substances 182 to 186 changes, the center wavelength of fluorescence emission changes.
• the fluorescent substances 182 to 186 contained in the near-infrared emitting phosphors 162 and 164 have a wide particle size distribution.
• this greatly expands the wavelength range over which fluorescence is emitted.
  • the average particle size of the fluorescent substances 182 to 186 is set within the range of 0.5 ⁇ m to 100 ⁇ m for ease of manufacture.
• the particle size distribution range of the fluorescent substances 182 to 186 in this embodiment is defined by the range of the ratio between the maximum and minimum particle sizes of the fluorescent substances 182 to 186 contained in the same near-infrared emitting phosphors 162 and 164.
• for the fluorescent substance 182 containing atoms (or ions) A belonging to rare earth elements or transition elements, the minimum particle size is defined as D_A min and the maximum particle size as D_A max, and manufacture is controlled so that the ratio falls within the range N_A ≤ D_A max / D_A min ≤ M_A.
• similarly, for the fluorescent substance 184 containing atoms (or ions) B, manufacture is controlled so that N_B ≤ D_B max / D_B min ≤ M_B.
• for the fluorescent substance 186 containing atoms (or ions) C, manufacture is controlled so that N_C ≤ D_C max / D_C min ≤ M_C.
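• As a minimal sketch of this manufacture control in code (hypothetical function name and illustrative limit values; only the ratio test N ≤ Dmax/Dmin ≤ M comes from this section):

    # Particle size distribution check: the ratio of maximum to minimum
    # particle size must fall within [N, M] for each fluorescent substance.
    def ratio_in_range(d_min_um, d_max_um, n_lower, m_upper):
        ratio = d_max_um / d_min_um
        return n_lower <= ratio <= m_upper

    # Example: a substance sieved between 3 um and 10 um, tested against
    # illustrative limits N = 2 and M = 5 (this section does not give
    # concrete values for N and M).
    print(ratio_in_range(3.0, 10.0, 2.0, 5.0))  # True: 10/3 = 3.33...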
• FIG. 11C shows a method for manufacturing the near-infrared emitting phosphors in this embodiment. At the start of production (ST01), lumps of fluorescent material, each containing atoms (or ions) A/B/C belonging to a rare earth element or a transition element, are created (ST02).
• powders of their oxides Yb₂O₃, Nd₂O₃, and Sm₂O₃ are used.
• powders of silica glass SiO₂, bismuth oxide Bi₂O₃, and antimony oxide Sb₂O₃ are prepared as the inorganic (oxide) materials to be mixed with them. When these are mixed individually and held at a temperature of 1,250 °C for about 10 minutes, they melt to form lumps of fluorescent material.
  • this lump is crushed and powdered.
  • This powder is finely ground until the average particle size of each fluorescent substance contained in the powder is about 3 ⁇ m to 10 ⁇ m.
  • the fluorescence emission intensity distribution characteristics of the near-infrared emitting phosphors 162 and 164 in the wavelength direction are greatly influenced by the particle size distribution of the phosphors. Therefore, the powder obtained by pulverization is subjected to particle size selection using a sieve with a uniform mesh size.
  • the pulverized powder is passed through a sieve with a large mesh size to remove fluorescent substances with non-standard particle sizes.
  • the mesh size of the sieve is gradually reduced to sequentially select fluorescent substances whose particle size falls within a predetermined range. In this way, fluorescent substance powders 182 to 186 having a predetermined particle size distribution are extracted (ST04).
• in step 05 (ST05), the fluorescent substance powders 182 to 186 having their respective particle size distributions and a liquid or powder binder 180 are blended.
  • this liquid or powdered binder 180 is solidified (hardened or coagulated) (ST06), and the production of the near-infrared emitting phosphor is completed (ST07).
• when a glass material is used as the binder 180, the powdered glass material and the fluorescent substance powders 182 to 186 are mixed and then heated to a high temperature to harden the mixture.
• when an organic material such as an epoxy resin or a silicone resin is used as the binder 180, it is left for a specific period of time until it is cured by the action of a curing agent.
• when an acrylic photocurable resin is used as the binder 180, it may be cured by ultraviolet irradiation.
  • FIGS. 12A and 12B show a structure in which the optical arrangement in the light source section 2 and the measurement section 8 that performs spectroscopic measurement are integrated and miniaturized.
  • the hybrid light emitting section 470 described in Section 4.3 is used as the light emitting section 470.
  • Both of the embodiments shown in FIGS. 12A and 12B use a spectroscopic element 320 using a reflective blazed grating to measure spectral characteristics.
  • the light of each measurement wavelength separated by the spectroscopic element 320 is focused on the line sensor 300, which is composed of a one-dimensionally arranged photodetection cell array.
  • FIG. 12A differs from FIG. 9F(a) in that the light after passing through the predetermined optical member 90 is focused on the surface of the measurement target 22.
• a folding mirror plate 314 is installed immediately before the condensing position of the light after passing through the predetermined optical member 90, and the optical path is folded back toward the back side of the page. After that (not shown), the light is focused on the surface of the measurement object 22 installed on the back side of the page.
• the emitted light from this hybrid light emitting section 470 enters the living body and is repeatedly reflected diffusely within the living body. A part of the light diffusely reflected within the living body exits the surface of the measurement object 22 (living body surface). A portion of the light emitted from the surface of the measurement object 22 (living body surface) passes through the pinhole 310. The light passing through the pinhole 310 is reflected by the half mirror plate 312 and becomes parallel light through a collimating lens or Fθ lens 322.
• since the spectroscopic element 320 is slightly tilted, the light reflected by the spectroscopic element 320 passes above the half mirror plate 312 and reaches the line sensor 300.
• the structure of the example embodiment shown in FIG. 12A need not be used as-is; the optical arrangement of FIG. 9F(b), in which the collimating lens or cylindrical lens 120 is removed, may be used instead.
  • FIG. 12B uses the same optical arrangement as FIG. 9G, and also bends the optical path with two folding mirror plates 314 and 316 similarly to FIG. 12A, making it possible to reduce the thickness.
• the Fθ lens or collimating lens 324 used in FIG. 12B may be omitted, and the light reflecting surface of the reflective predetermined optical member 90 may be made into a concave curved surface instead. This reduces the number of optical parts, which has the effect of making the optical system more compact and less expensive.
  • FIG. 13(a) shows the basic principle of generating speckle noise, which is a type of optical noise.
  • Two light reflecting regions 1046 are arranged at a distance P apart from each other.
• FIG. 13A shows the reflection intensity of reflected light 1048 that is vertically incident on the light reflection regions 1046 and reflected in the θ₀ direction.
• the reflected intensity at that time is proportional to cos²(πPθ₀/λ). What is important here is that the reflection intensity changes periodically with the reflection direction θ₀ of the reflected light 1048. This periodic change in reflection intensity is related to speckle noise.
• consider, as in FIG. 13, a case where a plurality of light reflection regions 1046 are regularly arranged with a period P.
• the reflection direction θ₀ that enters the user's eyes changes for each reflection location within the plurality of light reflection regions 1046. Therefore, there are areas where the reflection amplitudes from adjacent light reflection regions 1046 reinforce each other and appear bright, and areas where the reflection amplitudes cancel each other and appear dark. This kind of appearance is called a speckle noise pattern.
• FIG. 13(b) shows the reflection intensity of the reflected light 1048 reflected in the θ₀ direction when the angle of incidence of the incident light 1042 on the two light reflection regions 1046 changes to θᵢ.
• the reflected intensity at that time changes as cos²{πP(θ₀ − θᵢ)/λ}.
  • light synthesis between different wave trains corresponds to intensity addition (synthesis of light intensity values) because different wave trains do not interfere with each other.
  • the first light 202 including a part of at least one wave train is vertically incident on the two light reflection regions 1046.
• second light 204 including at least a part of another wave train that does not optically interfere with the first wave train is made incident at an incident angle θᵢ as shown in FIG. 13(b).
• the light intensity of the combined light (intensity-added light) reflected in the θ₀ direction is given by cos²(πPθ₀/λ) + cos²{πP(θ₀ − θᵢ)/λ}.
• if the incident angle θᵢ is chosen so that the light intensity of the second term takes its minimum value where the light intensity of the first term is maximum, the bright and dark regions of the two speckle patterns complement each other, and the speckle noise (optical noise) is reduced.
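• As an illustrative worked check of this condition (the explicit numbers below are not given in this section): writing x = πPθ₀/λ and δ = πPθᵢ/λ, the summed intensity is cos²x + cos²(x − δ). Choosing δ = π/2, that is θᵢ = λ/(2P), gives cos²x + sin²x = 1, so the summed intensity becomes constant in θ₀ and the periodic bright/dark variation produced by these two reflection regions cancels completely.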
• so far, the intensity addition of only two beams 202 and 204 that are mutually incoherent (or of low coherence) has been explained.
• the present invention is not limited thereto, and three or more types (or four or more types) of light 202, 204, and 206 that are mutually non-interfering may be irradiated simultaneously onto the measurement object 22 at different irradiation angles. Increasing the number of irradiated lights that are mutually incoherent (or of low coherence) increases the number of speckle noise (optical noise) patterns being averaged, which enhances the speckle noise (optical noise) reduction effect.
• FIG. 14A shows an example embodiment illustrating an optical arrangement that utilizes the principles described above to reduce speckle noise (optical noise).
  • a phase characteristic conversion element 1050 such as a diffuser plate is used as a method of overlappingly irradiating the light irradiation target 1030 with the respective lights 202, 204, and 206 that have passed through different regions 212, 214, and 216 while changing the irradiation angle. Since the surface of the phase characteristic conversion element 1050 has fine irregularities, light passing therethrough is diffused.
• the irradiation angle at an arbitrary position on the light irradiation target 1030 takes the values θ₁, θ₂, and θ₃ for the first light 202, the second light 204, and the third light 206, respectively.
  • the first light 202, the second light 204, and the third light 206 are irradiated in an overlapping manner at this position.
• the pattern of speckle noise (optical noise) appearing on the light irradiation target 1030 differs among the first light 202, the second light 204, and the third light 206. Since the first light 202, the second light 204, and the third light 206 have a mutually non-interfering (or low-interference) relationship, the different speckle noise patterns are mixed together on the light irradiation target 1030. As a result, the speckle noise pattern is averaged (smoothed), and the overall amount of speckle noise (optical noise) is reduced.
• the first light 202, the second light 204, and the third light 206 are irradiated onto the surface of the light irradiation target 1030 in an overlapping manner. Therefore, the surface of this light irradiation target 1030 corresponds to the light synthesis site 220. Furthermore, the surface of this light irradiation target 1030 can be regarded as also serving as the entrance surface 92 of the predetermined optical member. The perpendicular to the surface of the light irradiation target 1030 then corresponds to the incident surface side normal 96.
• as shown in FIG. 14A, the traveling directions of the first light 202, the second light 204, and the third light 206 toward the light irradiation target 1030 form angles θ₁, θ₂, and θ₃ with the incident surface side normal 96.
  • FIG. 14B shows an application example of this embodiment.
  • lights 202, 204, and 206 that do not interfere with each other (or have low interference) are focused at spatially different positions.
• the Köhler illumination system 1026 is adopted as the illumination system for the light irradiation target 1030
• the lights 202, 204, and 206 focused at different positions mix (overlap) with each other and are irradiated onto any arbitrary position within the light irradiation target 1030. Further, the irradiation angles at this time differ from each other.
  • the speckle noise pattern is averaged (smoothed) and the overall speckle noise (optical noise) is reduced.
• as a method of focusing the mutually non-interfering (or low-interference) lights 202, 204, and 206 at spatially different positions, FIG. 14B uses a fly-eye lens 1028 in which lenses with multiple optical axes are arranged side by side.
• in FIG. 14B, this fly-eye lens 1028 is placed immediately after the optical characteristic conversion element 210.
• alternatively, the fly-eye lens 1028 may be placed immediately in front of the optical characteristic conversion element 210, or formed integrally with the optical characteristic conversion element 210.
• the third, second, and first lights 206, 204, and 202 that have individually passed through the third, second, and first regions 216, 214, and 212 are focused at the γ, β, and α positions, respectively.
  • the lights 206, 204, and 202 after passing through each condensing position are mixed together and irradiate the light irradiation target 1030 with different irradiation angles.
• in the above, a fly-eye lens 1028 is used as the method for focusing the light beams 206, 204, and 202 passing through the different regions 216, 214, and 212 at the different positions γ, β, and α.
• however, the method is not limited to this, and the light may be focused at the different positions γ, β, and α by any other method.
• a liquid crystal lens array may be used in place of the fly-eye lens 1028.
  • FIG. 14C(a) shows an application example using bundle fiber 1040.
  • the light source section 2 includes a light emitting section 470 and an optical characteristic converting section 480.
• the mutually non-interfering (or low-coherence) first light 202 and second light 204 exiting the light source section 2 are irradiated onto the measurement object 22 by the Köhler illumination system 1026.
• the focal length of the collimating lens 318 installed in this Köhler illumination system 1026 controls the difference in irradiation angle between the first light 202 and the second light 204 irradiated onto the measurement object 22. That is, the shorter the focal length of the collimating lens 318, the larger the difference in irradiation angle.
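• As a rough illustrative calculation (the numbers are chosen here for illustration; only the inverse relationship between focal length and angle difference comes from this section): if the two lights are focused at positions 1 mm apart in the focal plane of a collimating lens with a focal length of 50 mm (the focal length used in the experiment of FIG. 14D), the difference in irradiation angle is approximately arctan(1/50) ≈ 1.1°, and halving the focal length to 25 mm roughly doubles this difference.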
• the first region 212 and the second region 214 have different thicknesses. If the optical path length difference between them exceeds the coherence length ΔL₀ (or twice that value), the coherence between the first light 202 and the second light 204 decreases.
  • the condensing lens 314 condenses the first and second lights 202 and 204 onto the incident surface of the bundle fiber 1040.
• the first light 202 and the second light 204 each enter different core regions within the bundle fiber 1040. Due to the combination of the difference in the core regions through which the light passes and the collimating lens 318, the traveling directions of the light beams 202 and 204 emitted from the bundle fiber 1040 differ from each other.
  • FIG. 14C(b) shows an optical system in which a phase characteristic conversion element 1050 is arranged just before the incident surface of the bundle fiber 1040.
  • the first and second lights 202 and 204 that have passed through the phase characteristic conversion element 1050 have their phase characteristics converted and enter the bundle fiber 1040.
• as the phase characteristic conversion element 1050, a diffuser plate having a fine surface structure, such as ground glass, may be used.
  • the present invention is not limited to this, and a grating, a hologram element, a Fresnel zone plate, etc. may be used.
  • the phase characteristic conversion element 1050 is arranged near the convergence plane of the first light 202 and the second light 204.
• without the phase characteristic conversion element 1050, the first light 202 and the second light 204 would mainly pass through different core regions within the bundle fiber 1040.
• however, the first light 202 and the second light 204 mix with each other when passing through the phase characteristic conversion element 1050. As a result, the first light 202 and the second light 204 pass through the same core regions within the bundle fiber 1040.
• FIG. 14D shows the results of an experiment actually performed to confirm this effect.
  • the horizontal axis in FIG. 14D represents the position on the measurement target. Further, the vertical axis in FIG. 14D represents the measured light amount. When the amount of speckle noise is large, the amount of variation in the measured light amount appears large.
  • FIG. 14D(a) shows the measurement results of the speckle pattern when the measurement target 22 is irradiated with conventional light that has passed through the core region within the single-core optical fiber or the center within the light guide 330/332/340.
  • the light intensity fluctuates greatly, and large speckle noise appears.
  • FIG. 14D(b) shows the measurement results of the speckle pattern when the optical system of FIG. 14C(b) is employed.
• the optical characteristic conversion element 210 used was made of quartz glass and divided into 48 regions whose thicknesses differ in steps of 1 mm.
• as the phase characteristic conversion element 1050, a diffuser plate having an Ra value of 0.5 μm was used.
• the length of the bundle fiber 1040 was 1.5 m, and 320 optical fibers, each having a core diameter of 230 μm and an NA of 0.22, were bundled within a diameter of 5 mm.
  • the focal lengths of the condensing lens 314 and the collimating lens 318 were both set to 50 mm.
  • optical interference noise (speckle noise) in FIG. 14D(b) is significantly reduced.
• Section 5.2 Characteristics of various optical fibers and characteristics of light passing through the core region
• in Section 5.2, we explain the characteristics of various optical fibers and the characteristics of light passing through the core region 112 of an optical fiber.
• in Section 5.3, a speckle noise reduction method using these characteristics is explained.
  • FIG. 15A(a) shows the characteristics of a single mode fiber.
  • a cladding region 114 made of a material with a relatively low refractive index is arranged so as to surround the core region 112 with a relatively high refractive index.
• let n₁ be the refractive index of the core region and n₂ the refractive index of the cladding region.
• the optical amplitude distribution of light propagating within the core region 112 of this single mode optical fiber is shown on the right side of FIG. 15A(a). That is, the light amplitude takes a maximum value near the center of the core region 112 and decreases toward the periphery of the core region 112.
  • the amplitude distribution of light (mode of amplitude distribution) or electric field distribution 152 within this core region 112 is referred to as a "fundamental mode.”
• under other conditions, the amplitude distribution (amplitude distribution mode) of light in the core region 112 can differ from the amplitude distribution shown on the right side of FIG. 15A(a).
  • An optical fiber in which an amplitude distribution (another mode) other than the fundamental mode can be formed within the core region 112 is called a multimode fiber.
  • modes (amplitude distribution) other than the above-mentioned fundamental mode are referred to as "higher-order modes.”
  • FIG. 15A(b) shows a method of focusing light within the core region 112.
  • a focusing lens 330 focuses the light into the core region 112.
  • the focused spot has a width d.
• FIG. 15B shows the characteristics of two types of optical fibers.
  • the right side of FIG. 15B shows the refractive index distribution 138 within the core region 112 and in the cladding region 114.
  • the vertical axis indicates the position 124 within the optical fiber cross section.
  • the optical fiber shown in FIG. 15B(a) is a step-index (SI) type optical fiber in which the refractive index within the core region 112 is uniform throughout.
• the optical fiber shown in FIG. 15B(b) is a graded-index (GI) type optical fiber.
  • the central portion of the core region 112 of this GI type optical fiber has a high refractive index, and the peripheral portion has a low refractive index.
• in this embodiment, either an SI type or a GI type optical fiber may be used.
• the limit θmax of the convergence angle of light that can pass through the core region 112 is determined. That is, when light is focused by the condensing lens 330 in FIG. 15A(b), light whose aperture half-angle θ exceeds θmax cannot be totally reflected at the interface between the core region 112 and the cladding region 114. Therefore, this light escapes from the core region 112 through the cladding region 114 and out of the optical fiber, as indicated by the dashed arrow in FIG. 15B.
• the sine function value sin(θmax) at this time is defined as the NA value of the optical fiber; it satisfies NA = sin(θmax) = √(n₁² − n₂²).
• expressing the conditions in terms of Equation 28 has the effect of providing high versatility and facilitating optical design.
• Equation 28 gives the conditions under which higher-order modes can occur within the core region 112. Therefore, the condition under which only the fundamental mode occurs in the core region 112 is obtained by reversing the inequality sign in Equation 28. Also, as a condition on the incident angle θ for light to propagate within the core region 112, sin θ ≤ NA must hold (Equation 29).
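• As a minimal numerical sketch of these fiber parameters in Python (the patent's numbered equations are not reproduced in this section, so the standard step-index relations NA = √(n₁² − n₂²) and the multimode condition expressed via the normalized frequency V = πD·NA/λ > 2.405 are assumed here):

    import math

    def fiber_na(n_core, n_clad):
        # Standard numerical aperture of a step-index optical fiber.
        return math.sqrt(n_core**2 - n_clad**2)

    def v_number(core_diameter_um, na, wavelength_um):
        # Normalized frequency: V > 2.405 admits higher-order modes
        # (multimode operation); V < 2.405 admits only the fundamental mode.
        return math.pi * core_diameter_um * na / wavelength_um

    # Illustrative core/cladding indices giving an NA close to 0.22:
    print(round(fiber_na(1.457, 1.44), 3))

    # Example with the multimode fiber used in the experiment of FIG. 17A:
    # core diameter D = 600 um, NA = 0.22, wavelength 520 nm.
    v = v_number(600.0, 0.22, 0.520)
    print(v, "-> multimode" if v > 2.405 else "-> single mode")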
• in this embodiment, the inside of the core region 112 of the optical fiber corresponds to the predetermined optical member 90. Further, the inside of the core region 112 also serves as the light synthesis site 220.
• the entrance surface of the optical fiber corresponds to the entrance surface 92 of the predetermined optical member. Further, when the optical fiber has a structure in which the entrance surface is cut perpendicular to the fiber axis, the optical axis direction of the optical fiber (core region 112) is parallel to the incident surface side normal 96. The first light 202 and the second light 204 are then identified among the incident lights having different incident angles θ to the core region 112.
  • FIG. 15C(a) shows a state in which the second light 204 enters the core region 112.
  • the second light 204 enters the core region 112 from a direction substantially parallel to the normal line 96 on the incident surface side.
• the incident angle θ (the angle between the incident direction of the second light 204 and the incident surface side normal 96) satisfies the condition of Equation 30. Therefore, the electric field distribution 152 of the second light 204 within the core region 112 forms the fundamental mode (TE1: transverse electric).
  • FIG. 15C(b) shows a state in which the first light 202 enters the core region 112.
• the first light 202 and the second light 204 travel in different directions just before the entrance surface 92 of the predetermined optical member (here, an optical fiber) 90. Therefore, for the incident angle θ of the first light 202 (the angle between the incident direction of the first light 202 and the incident surface side normal 96), the relationship θ ≠ 0 holds.
  • the first light 202 passes through the center of the core region 112 on the entrance surface 92 of the predetermined optical member (the entrance surface of the core region 112).
  • Equation 28 (or Equation 23) holds true regarding the diameter D of the core region 112.
• the incident angle θ of the first light 202 is set so as to satisfy Equation 26 (or Equation 25).
• this incident angle θ also needs to satisfy the condition of Equation 29. Therefore, the range of the incident angle θ of the first light 202 is given by combining these conditions.
• the incident angle θ of the first light 202 is set to be larger than that of the second light 204, so that an electric field distribution mode (higher-order mode) different from that of the second light is formed within the core region 112.
  • the electric field value of the TE2 mode within this higher-order mode becomes 0 at the center of the cross-sectional position 132 in the core region 112.
• the polarity of the electric field reverses as the cross-sectional position 132 within the core region 112 shifts across the center.
  • the difference between the fundamental mode (TE1 mode) and TE2 mode within the core region 112 appears in the difference in the intensity distribution characteristics of the light emitted from the optical fiber.
• when the second light 204 propagates in the fundamental mode (TE1 mode) within the core region 112, the light cross-sectional intensity distribution (far field pattern) at a location away from the fiber exit is bright at the center and dark at the periphery.
• in contrast, light propagating in the TE2 mode within the core region 112, like the first light 202, exhibits a "doughnut-shaped" intensity distribution in which the center is relatively dark and the region slightly shifted from the center is bright. Therefore, by observing the intensity distribution of the light emitted from the optical fiber, the mode of the light propagating within the core region 112 can be inferred.
• the value of α in Equation 31 is determined from the experimental results shown in FIG. 17A.
  • the appropriate value of ⁇ is considered to be 3/4 (preferably 1/2). Furthermore, when ⁇ is set to 1/4, the probability of taking the TE2 mode increases.
• FIG. 15C(c) shows the state when the incident angle θ of the third light 206 (the angle between the traveling direction of the third light 206 immediately before entering the entrance surface 92 of the predetermined optical member and the incident surface side normal 96) is set even larger than that of the first light 202.
  • the condition for the incident angle ⁇ at this time is
• the condition on the inner diameter D of the core region 112 that allows the TE3 mode (higher-order mode) is given by Equation 28.
• the incident angle θ varies with the position at which the light passes through the condensing lens. Therefore, light propagates within the core region 112 in a state in which lights forming different modes are mixed.
  • FIG. 16A shows a combination of mode-forming light that is bilaterally symmetrical and mode-forming light that is asymmetrical with respect to the center position within the core region 112.
  • FIG. 16A(a) shows an electric field distribution 152 of light forming a bilaterally symmetrical fundamental mode (TE1 mode).
  • FIG. 16A(b) shows an electric field distribution 152 of left-right asymmetric TE2 mode forming light.
  • FIG. 16A(c) shows an electric field distribution 152 that combines both.
  • the center of gravity position when taking the intensity distribution of the combined light is shifted between the left diagram (L) and the right diagram (R). That is, on the left side (L) of FIG. 16A(c), the center of gravity position 116A is shifted to the left side from the center position within the core region 112. Further, on the right side (R) of FIG. 16A(c), the center of gravity position 116B is shifted to the right side from the center position within the core region 112.
• this center-of-gravity position shift is not limited to light forming the TE2 mode; a similar shift in the center of gravity occurs for other left-right asymmetric mode-forming lights as well.
• the explanation so far has mainly focused on SI type optical fibers. However, it is not limited to them; the above description also applies to GI type optical fibers.
  • FIG. 16B shows an embodiment in which speckle noise is reduced by utilizing this center of gravity position shift.
  • a mask pattern MP is placed in the optical path of the parallel light to extract only the upper fan-shaped area A within the laser beam cross section 510.
  • the condenser lens 330 condenses this extracted light into the core region 112 of the waveguide element (optical fiber/optical waveguide/light guide) 110.
  • a gravity center position 116A of the intensity distribution occurs at a position shifted from the center position of the core region 112.
  • a collimating lens 318 converts the light emitted from the waveguide element (optical fiber/optical waveguide/light guide) 110 into parallel light.
  • a center of gravity position 116B of the intensity distribution occurs within the exit plane of the waveguide element (optical fiber/optical waveguide/light guide) 110.
  • This center of gravity position 116B appears at a position opposite to the center of gravity position 116A based on the center position of the core region 112.
• the traveling directions of the parallel light after passing through the collimating lens 318 are slightly shifted from each other for the lights A and B.
• a relationship of phase asynchronization 402, in which the light A extracted from the upper fan-shaped region A and the light B extracted from the lower fan-shaped region B do not interfere, is established.
  • speckle noise is reduced as described with reference to FIG. 13.
• FIG. 16C shows an example embodiment that uses an optical characteristic conversion element 210 to reduce speckle noise.
  • the laser beam cross section 510 is divided into eight regions (angular division) in the angular direction around the optical axis.
• the lights (elements) that have passed through the individual regions have optical path length differences greater than or equal to the coherence length ΔL₀ (or twice its value).
• as a result, eight center-of-gravity positions 116 with mutually different intensity distributions are formed (by left-right asymmetric electric field modes such as TE2).
  • a focusing lens 330 focuses light onto the entrance surface 92 within the core region 112.
• if the spot size incident on the incident surface 92 in the core region 112 is increased by shifting the light collection position, the speckle noise reduction effect weakens.
• when the spot size on the entrance surface 92 in the core region 112 is increased, the frequency of total reflection at the interface between the core region 112 and the cladding region 114 increases. Since a phase shift occurs at each total reflection at this interface, the effect of reducing speckle noise is weakened.
  • the ratio of the spot size (diameter) to the inner diameter D of the core region 112 must be 1 or less. Moreover, this ratio is desirably 3/4 or less, or 1/2 or less.
  • the spot size here is defined as the effective beam diameter of the optical system.
  • the effective beam diameter may be defined as the diameter when the maximum diameter of the laser beam cross section 510 that can pass through the condenser lens 330 is projected onto the incident surface 92 in the core region 112.
  • the light intensity on the light collecting surface does not have a rectangular characteristic, but often takes a light intensity distribution that is maximum at the center and decreases at the periphery.
• the diameter of the range of the intensity distribution on the incident surface 92 in the core region 112 within which the intensity is at least half the maximum (half-width), or the diameter of the range within which the intensity is at least e⁻² of the maximum (e⁻² width), may be regarded as the spot size.
• if the center of the spot (laser beam cross section 510) on the incident surface 92 in the core region 112 deviates significantly from the center of the core region 112, the phase shift due to total reflection at the interface between the core region 112 and the cladding region 114 increases. Therefore, the allowable amount of deviation between the center of the spot (laser beam cross section 510) on the incident surface 92 in the core region 112 and the center of the core region 112 for obtaining the speckle noise reduction effect will be explained.
  • this amount of deviation needs to be D/2 or less. Further, this amount of deviation is preferably D/4 (or D/8) or less.
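• The spot size and centering conditions above can be combined into a small check (a sketch with hypothetical names; the limits 1, 3/4, and 1/2 for the spot size ratio and D/2, D/4, and D/8 for the center deviation are the values stated in this section):

    # Launch-condition check for coupling light into the core region 112.
    def coupling_ok(spot_diameter_um, center_offset_um, core_diameter_um,
                    ratio_limit=0.5, offset_fraction=0.25):
        # ratio_limit: 1.0 is required; 0.75 or 0.5 is desirable.
        # offset_fraction: 0.5 (D/2) is required; 0.25 (D/4) or 0.125 (D/8)
        # is preferable.
        spot_ok = spot_diameter_um / core_diameter_um <= ratio_limit
        center_ok = center_offset_um <= offset_fraction * core_diameter_um
        return spot_ok and center_ok

    # Example with the experimental fiber (core diameter D = 600 um):
    print(coupling_ok(spot_diameter_um=250.0, center_offset_um=100.0,
                      core_diameter_um=600.0))  # True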
• the TE3 mode light propagating within the core region 112 has electric field distribution characteristics that are symmetrical with respect to the center position of the core region 112. Therefore, the TE3 mode light does not contribute to an increase in the amount of deviation of the center of gravity of the intensity distribution. Therefore, in order to effectively reduce speckle noise, it is desirable that the condition sin θ ≤ NA be satisfied for all incident angles θ of light incident on the core region 112.
• the amount of deviation of the center of gravity in the intensity distribution of the combined light changes depending on the total amplitude ratio between the second light 204 forming the fundamental mode (TE1 mode) and the first light 202 forming the TE2 mode. That is, when the relative total amplitude of the first light 202 forming the TE2 mode is increased, the amount of center-of-gravity shift in the intensity distribution of the combined light increases. Conversely, if the first light 202 forming the TE2 mode is absent, no shift of the center of gravity within the intensity distribution of the combined light occurs. Therefore, in order to effectively reduce speckle noise, the maximum incident angle θ of all light incident on the core region 112 needs to satisfy Equation 31.
• a difference in the incident angle (difference in traveling direction) between the first light 202 and the second light 204 is used to generate a difference in mode within the core region 112. Therefore, this embodiment assumes the use of a multimode fiber (either SI type or GI type may be used), and Equation 28 must be satisfied regarding the diameter D of the core region.
  • FIG. 17A shows the experimental results that confirmed the speckle noise reduction effect in this embodiment.
  • the 520 nm wavelength light emitted from the semiconductor laser element 500 was passed through the optical system shown in FIG. 16C.
  • the light after passing through the collimating lens 318 was reflected in a 90 degree direction on the surface of the diffuser plate with an average surface roughness Ra of 2.82 ⁇ m. This reflected light was then observed with a CCD camera.
  • the multimode optical fiber used in the experiment was an SI type with a core diameter D of 600 ⁇ m, an NA value of 0.22, and a total length of 1.5 m.
• the degree of speckle noise is quantified by the speckle contrast Cs, which is shown on the vertical axis of FIG. 17A.
  • the right vertical axis of FIG. 17A shows the ratio of the Cs value when using the optical characteristic conversion element 210, which is angularly divided into eight regions, to the Cs value when using conventional light.
• the horizontal axis of FIG. 17A shows the ratio of the NA value to the sine function value sin θ of the maximum incident angle θ for all light incident on the core region 112, calculated using the effective beam diameter. As the value on this horizontal axis increases, the speckle noise reduction effect increases.
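• As a minimal sketch of how such a value can be computed from a captured image (the definition Cs = standard deviation of the measured intensity divided by its mean is the conventional one and is assumed here, since this section does not spell it out):

    import numpy as np

    def speckle_contrast(intensity_image):
        # Conventional speckle contrast: standard deviation of the measured
        # intensity divided by its mean over the evaluated region.
        pixels = np.asarray(intensity_image, dtype=float).ravel()
        return pixels.std() / pixels.mean()

    # Example with synthetic data: uniform illumination plus random noise.
    rng = np.random.default_rng(0)
    frame = 100.0 + 20.0 * rng.standard_normal((64, 64))
    print(speckle_contrast(frame))  # roughly 0.2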
  • FIG. 17B shows the change in the Cs value when the angular division (horizontal axis) of the optical property conversion element 210 is changed.
  • the experimental conditions are the same as in FIG. 17A.
• an angular division of 1 corresponds to a conventional optical system without the optical characteristic conversion element 210. As the number of angular divisions increases, the amount of speckle noise decreases.
• the sine function value of the maximum incident angle θ for all light incident on the core region 112, calculated using the effective beam diameter, is defined as the effective NA value.
• FIG. 17B(a) shows the experimental results when the effective NA value is 1/29, and FIG. 17B(b) shows the results when it is 1/44.
• FIG. 18A shows an example of an embodiment of an optical system in the light source section 2 that supports high-speed control of the light emission amount.
  • a cross section 510 of emitted light from the semiconductor laser device 500 has elliptical characteristics.
• two cylindrical lenses 256 and 258, whose generatrices (cylinder axes) are perpendicular to each other, are used. That is, the long-axis side cylindrical lens 256 converts the emitted light from the semiconductor laser element 500 into parallel light in the long-axis direction.
• the short-axis side cylindrical lens 258 converts the emitted light from the semiconductor laser element 500 into parallel light in the short-axis direction.
• an optical characteristic conversion element 210 that is angularly divided into eight regions is placed. As shown in FIG. 17B, increasing the number of angular divisions in the optical characteristic conversion element 210 increases the speckle noise reduction effect. Therefore, the number of angular divisions may be set to an arbitrary number.
• the optical path converting prism 252 splits the light passing through the optical characteristic conversion element 210 between a photodetecting element 250 and a waveguide element (optical fiber/optical waveguide/light guide) 110.
  • the light input/output surface within the optical path converting prism 252 is an antireflection coated surface 246 . Further, on the total reflection surface 248, the light is totally reflected within the optical path changing prism 252.
  • the condensing lens 330-1 condenses a portion of the light reflected by the reflecting surface 254 on the light receiving surface of the photodetector element 250. Further, the condenser lens 330-2 condenses the remaining light toward the waveguide element (optical fiber/optical waveguide/light guide) 110.
  • FIG. 18B is a diagram illustrating detailed contents of a part of FIG. 1. In FIG. 18B, only the photodetector element 250 and the semiconductor laser element 500 in the light source section 2 shown in FIG. 18A are extracted and illustrated.
  • the inside of the light emission amount control section 30 is composed of a preamplifier circuit 716, a differential calculation circuit 712, and a current drive circuit 718.
  • the light emission amount signal detected by the photodetector element 250 is amplified by the preamplifier circuit 716.
  • the difference calculation circuit 712 calculates the difference value between the light emission amount signal amplified by the preamplifier circuit 716 and the signal given from the time-varying light emission amount generation circuit 728, and outputs this difference value to the current drive circuit 718.
  • the current drive circuit 718 controls the amount of light emitted from the semiconductor laser device 500 by driving the semiconductor laser device 500 with a current value corresponding to this output signal.
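• A minimal sketch of one update step of this feedback path (hypothetical names and gain; only the structure photodetector → preamplifier → difference calculation → current drive comes from this section):

    # One update step of the light emission feedback loop of FIG. 18B.
    def drive_current_step(monitor_signal, target_signal, current_ma,
                           gain=0.1):
        # Difference between the amplified monitor signal and the value
        # supplied by the time-varying light emission amount generation
        # circuit 728; the drive current is adjusted to shrink it.
        error = target_signal - monitor_signal
        return current_ma + gain * error

    current_ma = 20.0                   # illustrative starting point in mA
    for monitor in (0.80, 0.90, 0.95):  # readings approaching the target
        current_ma = drive_current_step(monitor, target_signal=1.0,
                                        current_ma=current_ma)
    print(current_ma)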
  • the recording signal generation section 32 includes a time-varying light emission amount generation circuit 728 and a memory circuit 726.
• this recording signal generation section 32 can generate an arbitrarily complex time-varying light emission pattern, enabling light emission based on such a pattern.
  • a memory circuit 726 stores this complex and arbitrary time-varying light emission pattern. Information on this complex and arbitrary time-varying light emission pattern may be recorded in advance in an external storage medium such as a USB memory or a hard disk. In this case, time-varying light emission pattern information is transferred into the memory circuit 726 via the external storage element drive circuit 72 under the control of the control circuit 720 .
  • a connection terminal for synchronization is provided.
• a signal line 730 for synchronization with the outside (including the measuring section 8) is connected to this synchronization terminal, and a reference clock synchronized with the signal transmitted over this signal line 730 is generated within the reference clock generation circuit 732.
  • a communication control unit 740 is installed in order to perform time-series highly accurate cooperative operations with the outside including the measurement unit 8. Note that this communication control unit 740 is included in a part of the information transmission path 4 described in FIG.
  • the light source section 2 described here has a structure that allows information communication via either wired or wireless communication media.
  • the wired communication execution unit 738 controls information communication by wire with the outside.
  • a wireless communication execution unit 736 controls wireless information communication with the outside.
  • the communication control interface processing unit 742 performs information processing (data processing) regarding the information content communicated via either wired or wireless route.
  • information communicated with the outside including the measurement unit 8 may have a complicated data structure as described later with reference to FIG. 19C.
  • the communication information decoding unit 748 decodes such a complicated data structure.
• the light emission pattern of the semiconductor laser element 500 may be confidential, and encrypted information may be transferred.
  • the authentication processing control unit 746 performs processing related to the transfer of encrypted information, such as authentication processing with a communication partner and encryption key exchange.
  • the electronic circuit shown in FIG. 18B was explained as an example applied to the optical system described in FIG. 18A.
  • the present invention is not limited thereto, and may be applied to, for example, the optical system shown in FIG. 9I. In this case, all operations can be performed simply by replacing the semiconductor laser element 500 in FIG. 18B with the light emitting section 470.
  • the electronic circuit shown in FIG. 18B may be applied to any light source section 2.
  • the amount of light emitted can be controlled with high precision by distributing a part of the emitted light from the light source section 2 to the photodetecting element 250 and monitoring the amount of light emitted.
• Section 6.3 Format example of light emission waveform setting: The method for setting the complex and arbitrary time-varying light emission pattern generated in the recording signal generation section 32 is basically specified as a "series of light emission amounts at predetermined time intervals" in digital form. The amount of light emitted at each elapsed time is expressed as binary data. Therefore, as this time-varying light emission pattern, a CSV file format or a relational database format representing the binary data series for each elapsed time may be adopted.
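• A minimal sketch of such a CSV series (illustrative values; only the structure of a light emission amount per fixed time step, expressed as binary-representable integers, comes from this section):

    import csv

    # Illustrative 8-bit emission amounts (0 to 255) at a fixed time step.
    step_interval_us = 10          # data step time interval, illustrative
    emission_series = [0, 64, 128, 255, 128, 64, 0]

    with open("emission_pattern.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_time_us", "emission_amount_8bit"])
        for step, amount in enumerate(emission_series):
            writer.writerow([step * step_interval_us, amount])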
  • FIG. 19A shows an example embodiment of a data format that defines a light emission waveform.
  • a plurality of different light emission patterns can be defined simultaneously.
• a time-varying light emitting pattern ID (identification information) 750 is set at the beginning so that each time-varying light emitting pattern can be identified. Placing the time-varying light emitting pattern ID (identification information) 750 at or near the beginning has the effect of facilitating light emission pattern searches.
• the reference clock frequency 752 defines the frequency of the reference clock generated by the reference clock generation circuit 732.
  • the data step time interval 754 indicates the time interval when setting the amount of light emission for each elapsed time.
  • the time-varying light emitting pattern duration 756 represents the duration of the light emitting pattern defined by the time-varying light emitting pattern ID (identification information) 750.
  • the total number of steps 758 of the time-varying light emitting pattern represents the number of steps of the time-series changing light emitting pattern defined by the time-varying light emitting pattern ID (identification information) 750.
  • the value obtained by multiplying the total number of steps 758 of this time-varying light emitting pattern by the data step time interval 754 corresponds to the time-varying light emitting pattern duration 756.
  • the dynamic range (bit gradation number) 760 of the time-varying light emission amount represents the number of representation bits of binary data that defines one light emission amount. By increasing the number of bits, it is possible to set even minute changes in the amount of light emitted.
  • the full range output light amount value 762 indicates the light amount value output from the light source section 2 when the binary data is set to the maximum value.
  • Identification of the time-varying light emitting pattern specified from the outside is basically specified using information of the time-varying light emitting pattern ID (identification information) 750.
  • a time-varying light emission pattern can be specified using an analog signal level input from the outside.
  • a signal line for external synchronization can be set to an analog level, and the signal level set by this signal line can be used to switch the time-varying light emission pattern.
  • the information used at this time corresponds to an input signal level maximum value 764 that specifies the light emission pattern ID and an input signal level minimum value 766 that specifies the light emission pattern ID.
• in this way, the time-varying light emission pattern corresponding to the externally set signal level may be emitted.
• when the light emission level is set to "0", the light source section 2 is basically in a state of "no light emission".
  • the present invention is not limited to this, and even when the light emission level is set to "0", a slight amount of light may be emitted.
  • the small amount of light emitted at this time can be set as the idling light amount value 770. If the idling light emitting amount value 770 is other than "0", the value of the idling light emitting amount presence/absence flag 768 becomes “1".
• as the binary data series indicating the amount of light emitted at each elapsed time, binary values for each time-varying light emission pattern ID (identification information) 750 are arranged after the above data in order of elapsed time.
• the value of the total data size 772 at this time is given by the product of the value set in the total number of steps 758 of the time-varying light emitting pattern and the value set in the dynamic range (number of bit gradations) 760 of the time-varying light emission amount.
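• A minimal sketch of this layout as a data structure (field names are hypothetical and mirror the reference numerals of FIG. 19A; concrete byte widths are not specified in this section):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TimeVaryingEmissionPattern:
        pattern_id: int                # 750: placed first to ease searching
        reference_clock_hz: float      # 752
        step_interval_s: float         # 754
        duration_s: float              # 756 (= 758 x 754)
        total_steps: int               # 758
        bit_gradation: int             # 760: bits per emission value
        full_range_output: float       # 762: light amount at max binary value
        input_level_max: float         # 764: analog level selecting this ID
        input_level_min: float         # 766
        idling_flag: int               # 768: 1 if the idling amount is nonzero
        idling_amount: float           # 770
        total_data_size_bits: int      # 772 (= 758 x 760)
        emission_series: List[int] = field(default_factory=list)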
  • FIG. 19B shows an example of communication control between the inside of the light source section 2 and the outside thereof.
• the communication partner outside the light source section 2 is called a host.
  • this host may be associated with the internal system control unit 50.
  • a light emission pattern is transmitted 788 from the host 50 side to the light source section 2 in advance, and then the light emission timing is synchronized between the host 50 and the light source section 2. After synchronizing the light emission timing, the light source section 2 starts emitting light in the specified light emission pattern.
  • Communication begins with a mutual authentication period 780. First, a host ID is transmitted from the host 50 side to the light source unit 2. When the light source section 2 receives the host ID, it returns the light source section ID to the host side. Next, the host 50 transmits the host-side encryption key to the light source section 2, and then the light source section 2 transmits the light-source-side encryption key to the host 50 side. Providing such a mutual authentication period 780 has the effect that the light source unit 2 can communicate information with any device around the world via the Internet.
  • A control signal is encrypted with a special key (for example, a composite key of the host-side encryption key and the light-source-side encryption key) generated from the encryption keys exchanged between the host 50 and the light source unit 2.
  • In the light emission pattern transmission period 788, which follows the control signal transmission period 790, information on the light emission pattern is transmitted 784.
  • The light emission pattern information is encrypted with a special key (for example, a composite key of the host-side encryption key and the light-source-side encryption key) generated from the encryption keys exchanged between the host 50 and the light source unit 2, and is transmitted in the format shown in FIG. 19A (see the sketch below).
  • a light emission period 794 begins after the control signal transmission period 790, and light emission in a specified pattern is started.
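  • A minimal sketch of this authentication and encrypted-transfer sequence follows, using only the Python standard library. The message order (IDs, then keys, then the encrypted pattern) follows the description above; the composite-key derivation, the toy stream cipher, and the integrity tag are illustrative assumptions, since the text does not name the algorithms.

```python
import hashlib
import hmac
import secrets

def composite_key(host_key: bytes, source_key: bytes) -> bytes:
    # The text only says a "composite key" is formed from the two exchanged
    # keys; hashing their concatenation is one plausible construction.
    return hashlib.sha256(host_key + source_key).digest()

def stream_encrypt(key: bytes, payload: bytes) -> bytes:
    # Toy keystream cipher for illustration only (not secure); the patent
    # does not specify the encryption algorithm.
    stream, block = b"", key
    while len(stream) < len(payload):
        block = hashlib.sha256(block).digest()
        stream += block
    return bytes(a ^ b for a, b in zip(payload, stream))

# --- Mutual authentication period 780 (message order per the text) ---
host_id, source_id = b"HOST-0001", b"LIGHTSRC-0001"   # hypothetical IDs
host_key = secrets.token_bytes(16)    # 3. host -> light source: host-side key
source_key = secrets.token_bytes(16)  # 4. light source -> host: source-side key
key = composite_key(host_key, source_key)

# --- Light emission pattern transmission period 788 ---
pattern_payload = b"...header and data series in the FIG. 19A format..."
ciphertext = stream_encrypt(key, pattern_payload)
tag = hmac.new(key, ciphertext, hashlib.sha256).digest()  # integrity (assumed)
```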
  • FIG. 19C shows an example data structure within a control signal 790 sent during a light emission timing synchronization period 798.
  • the control signal 790 in FIG. 19C(a) has a data structure shown in FIG. 19C(b).
  • the first placed preamble 640 is used for synchronization.
  • the structure of the source/receiver confirmation information 620 placed next (transmitted after the preamble 640) is shown in FIG. 19C(c).
  • an ID (identification information) 622 on the host (sending source) side is transmitted, followed by an ID (identification information) 628 on the light source (receiving destination) side.
  • an IP address may be used in addition to the ID (identification information).
  • the control signal identification information 642 stores type information of the control signal 790 transmitted this time. Using this type information, it is possible to identify whether it is the light emission pattern transmission period 788 or the light emission timing synchronization period 798.
  • In the light emission pattern identification information 750 field, any of the identification information registered as time-varying light emitting pattern IDs (identification information) in FIG. 19A can be written.
  • the light source section 2 decodes the information stored in this light emission pattern identification information 750 and recognizes the light emission pattern that emits light immediately after this.
  • The light emission start timing designation information 648 specifies the timing at which the light source section 2 starts emitting light. Specifically, the start timing of the synchronization preamble 640 or the transmission start (or reception start) timing of this light emission start timing designation information 648 is set as a reference timing, and the delay time from this reference timing to the start of light emission is specified. For example, the complex time-series processing in which the light source section 2 and the measurement section 8 cooperate, described later in Chapter 8, requires highly accurate timing synchronization between the light source section 2 and the measurement section 8. Using this light emission start timing designation information 648 has the effect that highly accurate synchronization processing between the light source section 2 and the measuring section 8 can be performed (see the parsing sketch below).
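  • The sketch below parses such a control signal 790 and converts the light emission start timing designation information 648 into an absolute start time. The byte layout (CTRL_FMT) and the microsecond unit of the delay are hypothetical; only the field order follows FIG. 19C.

```python
import struct

# Hypothetical byte layout of the control signal 790 of FIG. 19C; the text
# names the fields (preamble 640, host ID 622, light source ID 628, control
# signal identification 642, pattern ID 750, start timing 648) but not
# their widths or units.
CTRL_FMT = "<8s16s16sBIH"

def parse_control_signal(frame: bytes, preamble_start_s: float):
    size = struct.calcsize(CTRL_FMT)
    (preamble, host_id, source_id,
     ctrl_type, pattern_id, delay_us) = struct.unpack(CTRL_FMT, frame[:size])
    # Emission start timing 648: a delay measured from a reference timing,
    # here taken as the start of the synchronization preamble 640.
    emission_start_s = preamble_start_s + delay_us * 1e-6
    return ctrl_type, pattern_id, emission_start_s
```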
  • The above description assumed that the single optical device 10 incorporates the light source section 2 explained in Chapter 6 of this book.
  • the present invention is not limited thereto, and the light source unit 2 described in Chapter 6 of this book may exist independently and be connected to the host 50 via the Internet.
  • When used as part of an arbitrary system on the Internet, rather than as a stand-alone type, the light source unit 2 has the effect of greatly expanding its applications.
  • FIG. 20A shows a high-precision measurement method in this embodiment.
  • The main parts inside the optical device 10 described in FIG. 1 are extracted and drawn. That is, optical measurement 1002 is performed on the measurement object 22 within the measurement unit 8.
  • the signal processing unit 42 then analyzes the results of the optical measurement 1002 and extracts necessary information 1004.
  • processing 1000 may be performed using the following procedure.
  • This processing procedure 1000 sequentially performs: 1. acquisition of first information using detection light from the measurement target object 22; 2. processing for optical disturbance noise reduction or electrical disturbance noise reduction 1012 in the detection signal using the first information; and 3. acquisition of second information using the noise-reduced signal after the above processing.
  • The information 1004 extracted here with high accuracy is then transferred 1006 via the information transmission path 4.
  • The transfer format 1014 used during this information transfer 1006 may be any of the following: A] diversion of existing image and video compression methods or their expanded formats; B] a multiplex transfer method of packs or packets distributed for each type of data; or C] individual transfer of files whose relationships are linked (managed) within hypertext.
  • Various information transferred 1006 in this transfer format 1014 is saved 1010 in the collected information storage area 74. Alternatively, it may be displayed 1008 on the display unit 18 or the information providing unit 72.
  • FIG. 20B shows a list of examples of information used in this embodiment.
  • the information can be classified into categories 1020 such as unnecessary optical effects, the shape and position of the object to be measured, detection of a moving object to be measured, the composition ratio of the constituent parts, and activities that change over time.
  • The extracted information summary 1022 includes optical effects inside the measurement target, optical effects on the surface of the measurement target, optical effects in the middle of the light propagation path, shape contour information and feature information, moving body areas, analysis of constituent materials in solids, content of substances in liquids, and biological activities.
  • FIG. 20C shows a list of disturbance noise generation causes 1036 and countermeasures 1038 for each measurement target region 1032 within the measurement target object 22.
  • the causes of electrical disturbance noise 1036 are shot noise, thermal noise, electromagnetic induction noise, etc., regardless of the measurement target area 1032.
  • the carrier component may be extracted E1 by band-limiting the detection signal.
  • the present embodiment is not limited to this, and lock-in amplification (Lock-in Amplifier) E2 may be performed.
  • This lock-in amplification E2 requires synchronizing the frequency and phase of the reference signal with the detection signal. Therefore, in this embodiment, various information included in the category 1020 of activities with time changes in FIG. 20B may be used as the first extracted information 1004 to perform this frequency and phase synchronization (see the sketch below).
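  • The sketch below shows digital lock-in amplification E2 in this sense: the detection signal is mixed with a frequency- and phase-synchronized reference and then passed through a narrow low-pass filter (here a simple mean). Taking f_ref and phase as known constants is a simplification; in this embodiment they would be derived from the first extracted information 1004.

```python
import numpy as np

def lock_in(signal: np.ndarray, fs: float, f_ref: float, phase: float = 0.0):
    # Mix the detection signal with quadrature references synchronized in
    # frequency and phase, then average: the mean acts as an ultra-narrow
    # band low-pass filter that keeps only the component at f_ref.
    t = np.arange(len(signal)) / fs
    i = 2 * np.mean(signal * np.cos(2 * np.pi * f_ref * t + phase))
    q = 2 * np.mean(signal * np.sin(2 * np.pi * f_ref * t + phase))
    return np.hypot(i, q), np.arctan2(q, i)  # amplitude and phase at f_ref
```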
  • the error correction function E3 of the digitized signal may be used.
  • A technique such as PRML (Partial Response Maximum Likelihood) may be used to automatically correct the signal sequence to the most likely signal sequence.
  • the cause 1036 of optical disturbance noise differs slightly depending on the measurement target area 1032 within the measurement target object 22.
  • a common cause 1036 of optical disturbance noise in both cases is the influence of optical interference noise. This method of reducing optical interference noise corresponds to the technical content already explained in Chapters 2, 3, and 5.
  • Causes of optical disturbance noise 1036 also include the contamination of other optical effects.
  • In this case, the signal processing unit 42 performs arithmetic processing (signal processing or signal analysis) L3 between measurement signals to remove the mixed-in influence of other optical effects.
  • First, the signal processing section 42 detects other optical effects from the measurement signal acquired from the measurement section 8 or the signal reception section 40.
  • The process of extracting 1004 information based on this result corresponds to the acquisition of the first extracted information 1004.
  • Next, performing the process of optical disturbance noise reduction or electrical disturbance noise reduction 1012 in the detection signal using the first information corresponds to the process of removing the component of the first extraction information 1004 from the measurement signal.
  • The acquisition of the second information then corresponds to a second information extraction performed after the influence of other optical effects has been removed.
  • Among the causes 1036 of optical disturbance noise that depend on the measurement target area 1032 within the measurement target 22, there are causes 1036 that do not occur when measuring the comprehensive characteristics of the entire measurement target 22, but appear for the first time when measuring only local characteristics within the measurement target 22.
  • the optical disturbance noise generation cause 1030 is the influence of disturbance light entering from outside the local region to be measured.
  • an aperture is provided at an imaging position or a confocal position for the local area to be measured.
  • By setting such a limit, unnecessary disturbance light can be blocked L4. Accordingly, when performing three-dimensional measurement inside the measurement target object 22, for example, it is possible to prevent detection light from a depth position other than the local area to be measured from being erroneously measured as disturbance light.
  • FIG. 21A shows an example of this embodiment.
  • first extraction information 1218 is extracted from the measurement signal obtained from the measurement unit 8 (or signal reception unit 40).
  • A time-series spectral characteristic signal, a time-series image signal, or a data cube signal obtained from the measuring section 8 in the optical device 10 is transferred to the signal receiving section 40.
  • a predetermined time series signal 1208 is partially extracted 1202 from this input signal.
  • a predetermined time-series signal 1208 partially extracted 1202 within the signal receiving unit 40 is transferred to the signal processing unit 42.
  • the signal processing unit 42 performs reference signal extraction 1210 using the predetermined time series signal 1208. Then, the DC component is further removed 1212 from this reference signal, and the form containing only the AC component is used as first extracted information 1218.
  • the time-series spectral characteristic signal, time-series image signal, or data cube signal transferred from the signal receiving unit 40 to the signal processing unit 42 is multiplied 1230 by the first extraction information 1218 described above. If the signal transferred to the signal processing unit 42 is a time-series spectral characteristic signal, multiplication is performed for each measurement wavelength. Further, if the signal transferred to the signal processing unit 42 is a time-series image signal, multiplication is performed for each pixel. Furthermore, when the data cube signal is transferred, multiplication is performed for each measurement wavelength within each pixel.
  • time-series DC components are extracted 1236 for each wavelength or each pixel by the action of an ultra-narrow band low-pass filter, and second extraction information 1018 is generated in the predetermined signal extraction section 680.
  • band limitation may be performed to extract only the carrier component corresponding to the first extraction information 1218 E1.
  • However, extracting only the DC component by lock-in amplification E2 provides a higher DC-component extraction effect and improves the accuracy of the second extraction information 1018 (see the sketch below).
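  • A compact sketch of the FIG. 21A flow follows. The (time, pixel, wavelength) layout of the data cube is an assumed convention, and the mean over time stands in for the ultra-narrow band low-pass filter 1236.

```python
import numpy as np

def second_extraction(data_cube: np.ndarray, first_info: np.ndarray) -> np.ndarray:
    # data_cube: shape (time, pixels, wavelengths) -- an assumed layout.
    # first_info: first extraction information 1218, one value per time step.
    first_info = first_info - first_info.mean()        # DC removal 1212
    product = data_cube * first_info[:, None, None]    # multiplication 1230
    return product.mean(axis=0)                        # DC extraction 1236 -> 1018
```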
  • FIG. 21B shows another example of this embodiment capable of reducing electrical disturbance noise.
  • In FIG. 21A, the first extracted information 1218 was extracted 1004 from the measurement signal from the measurement unit 8.
  • In FIG. 21B, in contrast, the first extraction information 1218 is extracted 1004 from a predetermined time-series signal 1208 obtained from the light emission amount control unit 30.
  • As the predetermined time series signal 1208 obtained from the light emission amount control section 30, for example, an output signal from the light emission amount output circuit 702 in FIG. 18B may be used.
  • When measuring in an environment where ambient light is likely to enter, the measurement accuracy is significantly reduced due to the influence of the ambient light.
  • In such a case, the amount of predetermined light 230 emitted from the light emitting unit 2 is modulated, and only the signal component corresponding to the modulated light is extracted 1004 as second extraction information 1018 as shown in FIG. 21B; the measurement accuracy is thereby significantly improved.
  • FIG. 21C shows a method of reducing electrical disturbance noise by irradiating the measurement object 22 with pulsed light as an example of an applied embodiment of FIG. 21B.
  • the light emission amount modulation signal 1228 transmitted from the signal processing section 42 to the light emission amount control section 30 may take the form of a rectangular pulse waveform.
  • the reference pulse is generated 1220 in the time-varying component extraction processing section 700 within the data processing block 630.
  • In the pulse counter 1222, a pulse is generated once every predetermined number of times the reference pulse 1220 is generated.
  • the pulses output by this pulse counter 1222 are used as the first extraction information 1218.
  • This first extraction information 1218 is used as the light emission amount modulation signal 1228 in the light emission amount control unit 30, and the amount of light irradiated onto the measurement object 22 changes in a rectangular pulse shape in accordance with this light emission amount modulation signal 1228.
  • This first extraction information 1218 (output pulses of the pulse counter 1222) is also simultaneously transferred to the multiplication circuit 1230 for each wavelength or each pixel. In this way, in the example application embodiment shown in FIG. 21C, the same first extracted information 1218 is used for multiple purposes simultaneously.
  • The time-series spectral characteristic signal, time-series pixel signal, or data cube signal obtained from the measurement unit 8 is detected in synchronization 1224 with the reference pulse 1220 generated in the time-varying component extraction processing unit 700.
  • The detected signal is then transferred to the multiplication circuit 1230 for each wavelength or each pixel within the time-varying component extraction processing section 700.
  • the multiplication circuit 1230 for each wavelength or each pixel can be configured with a very simple circuit.
  • The multiplication circuit 1230 for each wavelength or pixel is composed of only an inverter (polarity inversion) circuit 1226 and a switch 1232. In accordance with the first extraction information 1218 given from the pulse counter 1222, the polarity of the signal transmitted to the time-series DC component extraction circuit (ultra-narrow band low-pass filter) 1236 for each wavelength or pixel is switched in synchronization with the first extraction information 1218 (a sketch of this polarity-switching demodulation follows below).
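  • The sketch below mimics this inverter 1226 + switch 1232 arrangement in software: rather than a true multiplication, the sample polarity is flipped in synchronization with the first extraction information 1218 and the result is averaged (the DC extraction 1236).

```python
import numpy as np

def polarity_demodulate(samples: np.ndarray, first_info: np.ndarray) -> float:
    # Flip the sample polarity (inverter 1226 + switch 1232) whenever the
    # first extraction information 1218 is low, then take the mean as the
    # time-series DC component extraction 1236.
    ref = np.where(first_info > 0, 1.0, -1.0)
    return float(np.mean(samples * ref))
```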
  • The applied embodiment example shown in FIG. 21C may be used for length measurement or three-dimensional image (video) measurement.
  • Light propagates through air at a speed of approximately 3 × 10^8 m/s, so within a pulse width of 1 ns, light travels approximately 30 cm.
  • The distance to the measurement object 22 can therefore be measured (length measurement) by measuring the time it takes for the light reflected from the surface of a distant measurement object 22 to return. For example, if a pulse with a pulse width of 1 ns and a duty ratio of 50% is used as the reference pulse 1220 and the change in reflected light intensity is measured according to the pulse count value 1222, length can be measured with a spatial distance resolution of 30 cm (see the arithmetic sketch below).
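  • The round-trip arithmetic used here can be summarized as in the sketch below; the 3 × 10^8 m/s speed and the 1 ns pulse width are the values quoted above, and the factor 1/2 accounts for the out-and-back path (the same relation c(t2−t1)/2 appears later in FIG. 28B).

```python
C = 3.0e8            # speed of light in air [m/s], as quoted above

def distance(t_round_trip_s: float) -> float:
    # The echo travels out and back, so the object distance is c * t / 2.
    return C * t_round_trip_s / 2

PULSE_WIDTH = 1e-9                    # 1 ns pulse from the example
print(C * PULSE_WIDTH)                # ~0.3 m travelled per pulse width
print(distance(200e-9))               # a 200 ns echo corresponds to 30 m
```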
  • Furthermore, three-dimensional image measurement becomes possible as follows.
  • The above reference pulse 1220 is fixed, and the pulsed light emission amount (light emission amount modulation signal 1228) from the light emission amount control section 30 is controlled at intermittent timing according to the pulse count value 1222.
  • the output signal 1200 for each pixel from the image sensor 300 is transmitted to the time-varying component extraction processing section 700 in synchronization with the reference pulse 1220 described above.
  • The length measurement method itself using laser pulses is also applied in LiDAR (Light Detection and Ranging) used for self-driving cars.
  • In such systems, the measurement accuracy is significantly reduced by speckle noise caused by the coherence of laser light.
  • By combining the spatial interference noise reduction method explained in Chapter 12, highly accurate length measurement and three-dimensional image (video) measurement become possible.
  • FIG. 22 is a characteristic explanatory diagram for the case where a charge accumulation type signal receiving section is used as the measuring section 8.
  • Most spectral characteristic signals, image signals, and data cube signals cannot be obtained continuously in time series; instead, a measurement period 1258 and a data transfer period 1254 are time-divided (FIG. 23A(a) and FIG. 23B(a)). That is, during the measurement period 1258, measurement data is accumulated in the charge amount storage section 1170. Then, during the data transfer period 1254, the accumulated data is transferred to the signal processing section 42 via the data transfer section 1180.
  • FIG. 22 shows an example of the principle of generating a spectral characteristic signal using an organic semiconductor.
  • Each of the organic semiconductor layers 1102, 1104, and 1106 has a different absorption wavelength for the detection light 1100.
  • the first organic semiconductor layer 1102 closest to the incident side of the detection light 1100 absorbs only the detection light 1100 in a predetermined wavelength range.
  • Only the detection light 1100 of other wavelengths that has escaped absorption by the first organic semiconductor layer 1102 passes through the first organic semiconductor layer 1102.
  • the second organic semiconductor layer 1104 absorbs the detection light 1100 in other wavelength ranges among the other wavelengths of light that have escaped absorption in the first organic semiconductor layer 1102.
  • the organic semiconductor layers 1102, 1104, and 1106 are each sandwiched between a pair of transparent conductive films, and the transparent conductive films are further partitioned by transparent insulating layers 1124 and 1126. Further, pixel regions 1152 and 1154 are defined by the arrangement of the transparent conductive films. That is, the left side in the left diagram of FIG. 22 forms the first pixel area 1152, and the right side forms the second pixel area 1154.
  • the detection light 1100 in a predetermined wavelength range is absorbed within the organic semiconductor layers 1102, 1104, 1106, charges are generated within the organic semiconductor layers 1102, 1104, 1106, and are used as detection signals.
  • When the detection light 1100 enters the left side of the first organic semiconductor layer 1102 and is absorbed within the first organic semiconductor layer 1102, charges are generated within the first organic semiconductor layer 1102. Since the lower transparent conductive film 1112 adjacent to the first organic semiconductor layer 1102 is connected to the ground line, the charges generated within the first organic semiconductor layer 1102 enter the preamplifier 1150-6 via the transparent conductive film 1142.
  • the charge that has entered the preamplifier 1150-6 is stored in the capacitor 1160-6 for a predetermined period (during the measurement period 1258).
  • a feature of the charge accumulation type signal receiving section 40 is that charge is continuously accumulated in the capacitor 1160-6 within a predetermined period (during the measurement period 1258).
  • the amount of charge stored in this capacitor 1160-6 is transferred to the amount of charge storage section 1170-2 at the end of a predetermined period, and then the amount of charge is discharged. Thereafter, charge is again stored in the capacitor 1160-6 during the next predetermined period (during the measurement period 1258).
  • The detection light is separated for each measurement wavelength using the spectroscopic element (blazed grating) 320 in FIG. 12A or FIG. 12B.
  • a line sensor or a two-dimensional array sensor is used as the image sensor 300.
  • the measurement signal is output in a time-divided manner into the measurement period 1258 and the data transfer period 1254.
  • For such a measurement signal, in which the measurement period 1258 and the data transfer period 1254 are time-divided, the detection signal band limiting method E1, the lock-in amplification method E2, and the error correction method E3 of the digitized signal are used in a suitable form.
  • In a charge accumulation type signal receiving section, the measurement period 1258 becomes relatively long, and measurement accuracy using the band limit E1 and lock-in amplification E2 tends to deteriorate.
  • That is, when the time length of the measurement period 1258 becomes longer, the extraction accuracy of the first extraction information 1218 tends to decrease.
  • FIG. 23A shows a method of extracting first extraction information 1218 with high precision 1004 for a relatively long measurement period 1258.
  • the lock-in amplifier circuit E2 described in FIG. 21A is used.
  • FIG. 23A shows a processing method up to information extraction 1004 in which changes in near-infrared light absorption amount in response to changes in blood flow 1252 are used as first extraction information 1218.
  • As shown in FIG. 23A(a), a measurement signal in which a measurement period 1258 and a data transfer period 1254 are time-divided enters the signal processing unit 42.
  • FIG. 23A(b) shows an example of a time-divided measurement signal format sent from the signal receiving section 40.
  • the vertical axis represents the blood flow rate 1252 obtained as a change in the absorption amount of near-infrared light.
  • The horizontal axis in FIG. 23A indicates elapsed time 1250.
  • No measurement signal is obtained from the charge storage type signal receiving section (measuring section 8) during the data transfer period 1254. Therefore, as shown in FIG. 23A(b), step-like measurement signals are obtained intermittently, one for each measurement period 1258.
  • the signal processing unit 42 converts the intermittently step-like measurement signal into a continuous one using a sample hold method.
  • As a result, as shown in FIG. 23A(c), the measurement signal changes discontinuously in a stepwise manner.
  • This stepwise, discontinuously changing measurement signal is smoothed as shown in FIG. 23A(d) using an optimized multiplex parallel bandpass filter. Furthermore, as shown in FIG. 23A(e), the DC component in the waveform of FIG. 23A(d) is removed to generate the first extraction information 1218 (see the sketch below).
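  • A sketch of this FIG. 23A(c)-(e) processing follows. The moving-average kernel stands in for the "optimized multiplex parallel bandpass filter", whose exact design is not given here; the boolean mask marks the measurement periods 1258.

```python
import numpy as np

def first_info(signal: np.ndarray, measuring: np.ndarray) -> np.ndarray:
    # measuring: boolean mask, True during measurement periods 1258.
    held = signal.astype(float).copy()
    for i in range(1, len(held)):
        if not measuring[i]:
            held[i] = held[i - 1]              # sample hold, FIG. 23A(c)
    kernel = np.ones(9) / 9.0                  # stand-in smoother, FIG. 23A(d)
    smooth = np.convolve(held, kernel, mode="same")
    return smooth - smooth.mean()              # DC removal, FIG. 23A(e) -> 1218
```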
  • FIG. 23B shows a signal processing (data processing) process up to the information extraction 1004 of the second extraction information 1018 using the signal processing (data processing) method of FIG. 21A.
  • FIG. 23B(a) shows an example of the format of the measurement signal sent from the signal receiving section 40.
  • the measurement period 1258 and the data transfer period 1254 are time-divided and transferred.
  • FIG. 23B(b) represents time-series data 1200 for each measurement wavelength in the spectral characteristic signal, time-series data for each pixel in the image sensor, or time-series data for each measurement wavelength in the spectral characteristic signal of each pixel included in the data cube. Since no measurement takes place during the data transfer period 1254, the data is sent as intermittent rectangular (pulse-like) time-series data.
  • FIG. 23B(c) shows the waveform of the first extracted information 1218 extracted 1004 in FIG. 23A(e).
  • FIG. 23B(d) represents the result of multiplication for each time series between FIG. 23B(b) and FIG. 23B(c).
  • the waveform in FIG. 23B(d) matches the output waveform of the multiplication processing unit 1230 for each wavelength or pixel. Since there is a period in which the waveform in FIG. 23B(c) takes a "negative value,” there is also a period in which the waveform in FIG. 23B(d) takes a "negative value.”
  • FIG. 23B(e) shows the result of the second extracted information 1018 extracted 1004 in FIG. 21A.
  • When the DC component of the discrete signal in FIG. 23B(d) is extracted using the time-series DC component extraction unit (ultra-narrow band low-pass filter) 1236 for each wavelength or pixel in FIG. 21A, the result is as shown in FIG. 23B(e): a constant value that does not depend on the elapsed time 1250 is obtained.
  • FIG. 24A shows the optical arrangement inside the light source section 2 used for measurement.
  • the optical system described in FIG. 4 and the optical system described in FIG. 18A were combined using a dichroic mirror 350.
  • a laser beam with an emission wavelength of 1330 nm was used.
  • An SI type multimode single-core fiber SF with a core diameter of 0.6 mm guided this combined light to the tip 360 of the index finger.
  • Another SI type multimode single-core fiber SF with a core diameter of 0.6 mm guided the light that passed through the index finger tip 360 (light scattered within the index finger tip 360) to the spectrometer in the measurement unit 8.
  • FIG. 24B(a) shows the spectral characteristics of the light irradiated onto the index finger tip 360.
  • the amount of light emitted at the laser light emission wavelength (1330 nm) is overwhelmingly large. Note that the long wavelength side of the emitted light from the halogen lamp HL is blocked by a dichroic mirror 350.
  • FIG. 24B(b) shows the spectral characteristics of the light transmittance of transmitted light that has passed through the index finger tip 360 (light that has repeatedly been scattered inside the index finger tip 360 and then exited from the opposite side of the index finger tip 360).
  • the spectrometer was able to detect a sufficient amount of light even for 1330 nm wavelength light (laser light).
  • FIG. 25A shows the temporal change in relative light transmittance detected by a spectrometer.
  • The value obtained by dividing the light transmittance obtained from actual measurement data by the time-invariant light transmittance measured in advance in FIG. 24B(b) is defined as the relative light transmittance.
  • This relative light transmittance is used on the vertical axis of FIG. 25A (see the sketch below).
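  • Expressed as code, the definition is simply an element-wise division, as sketched below (the arrays are assumed to be per-wavelength time series).

```python
import numpy as np

def relative_transmittance(measured: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    # Relative light transmittance = measured light transmittance divided,
    # wavelength by wavelength, by the time-invariant transmittance measured
    # in advance (FIG. 24B(b)).
    return measured / baseline
```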
  • the horizontal axis in FIG. 25A shows the passage of time in 0.1 second increments. Therefore, every time the time elapsed value advances by 10, one second passes. Since a sufficient amount of light with a wavelength of 1330 nm (laser light) could be detected, a signal from this wavelength light can be used as the first extraction information 1218 (FIG. 20A or FIG. 21A).
  • FIG. 25A(a) shows the relative light transmittance of 1330 nm wavelength light (laser light) over time.
  • glucose (carbohydrate) has an absorption peak within the wavelength range of 0.9 ⁇ m to 1.0 ⁇ m.
  • proteins exhibit absorption peaks within the wavelength range of 0.95 ⁇ m to 1.1 ⁇ m.
  • FIG. 25A(b) shows the relative light transmittance over time at a wavelength of 1026 nm, which is expected to be absorbed by the peptide skeleton.
  • FIG. 25A(c) shows the temporal change in relative light transmittance at a wavelength of 928 nm, which is expected to be absorbed by glucose.
  • Here, the form of the second extracted information 1018 in FIG. 21A is not used; the raw signal waveform obtained from the measurement unit 8 is shown.
  • the signal amplitude obtained from the 928 nm wavelength light is larger than the signal amplitude obtained from the 1026 nm wavelength light.
  • FIG. 25B shows the result of adding measurement data using light of other wavelengths.
  • the light transmittance of the DC component is lowest at a wavelength of 971 nm. Therefore, as shown in FIG. 25B(d), a signal obtained from light with a wavelength of 971 nm is also shown.
  • The signal amplitude obtained from light with a wavelength of 971 nm (FIG. 25B(d)), where the DC component of light transmittance was the smallest, is slightly larger than the signal amplitude obtained from light with a wavelength of 1026 nm (FIG. 25B(b)), where the DC component of light transmittance is relatively large. From the above explanation, it can be seen that the amplitude of change in relative light transmittance synchronized with pulsation changes with the measurement wavelength.
  • Section 8.1 Embodiment of combination of light source section and measurement section with built-in image sensor
  • Section 7.2 (internal structure of the image sensor and data acquisition timing in this embodiment) explained a length measurement method using laser pulses.
  • The amount of light that reaches a distant point is inversely proportional to the square of the distance from the light emitting point. Therefore, when measuring over a large distance, a sufficiently large amount of light emission is required. Objects in the vicinity of this light-emitting point are then exposed to extremely high intensity laser beam irradiation, increasing the risk of eye damage. In order to reduce this risk, it is desirable to use near-infrared laser light for length measurement, as explained in Section 4.1.
  • FIG. 26A shows the structure of a 3D (dimensional) color image sensor 1280 (capable of length measurement).
  • Various optical filters 1272 to 1278 are arranged on the surface of each pixel 1262 to be imaged, and wavelength restrictions are applied to light that can reach each pixel 1262 to be imaged.
  • an optical filter 1272 that transmits only red light and near-infrared light is installed immediately in front of pixels 1262-1 and 1262-2 that detect red light and near-infrared light.
  • Immediately in front of the pixels 1264-1 and 1264-2 that detect green light and near-infrared light, an optical filter 1274 that transmits only green light and near-infrared light is installed.
  • an optical filter 1276 that transmits only blue light and near-infrared light is installed immediately before the pixels 1266-1 and 1266-2 that detect blue light and near-infrared light.
  • an optical filter 1278 that transmits only white light and near-infrared light is installed immediately before the pixels 1268-1 and 1268-2 that detect white light and near-infrared light.
  • near-infrared laser light is used for length measurement using laser light.
  • FIG. 26B shows (the equivalent circuit of) the electronic circuit within this 3D color image sensor 1280.
  • Preamplifiers 1150-1 and 2 are individually connected to pixels 1262-1 and 2 that detect red light and near-infrared light, respectively.
  • preamplifiers 1150-3 and 4 are individually connected to pixels 1264-1 and 2 that detect green light and near-infrared light.
  • charges are accumulated in capacitors 1160-1 to 1160-4 according to detection signals from each preamplifier 1150-1 to 1150-4.
  • the interlocking switches 1300-1 and 1300-2 are independently turned ON/OFF depending on the exposure time and non-exposure time.
  • the ON/OFF timing of these interlocking switches 1300-1 and 1300-2 is controlled by exposure timing setting circuits 1292-1 and 2.
  • When the interlocking switches 1300-1 and 1300-2 are separately cut off, charges are stored individually in the capacitors 1160-1 to 1160-4 corresponding to the preamplifiers 1150-1 to 1150-4.
  • When the interlocking switches 1300-1 and 1300-2 are separately connected, the detection signals from the pixels 1262-1, 1262-2, 1264-1, and 1264-2 in the 3D color image sensor 1280 are discharged toward the ground line.
  • At this time, the charges stored in the capacitors 1160-1 to 1160-4 are also discharged.
  • Envelope detection circuits 1288-1 to 4 are individually connected to each preamplifier 1150-1 to 1150-4. At the end of exposure, the output voltages of the envelope detection circuits 1288-1 to 1288-4 are temporarily stored in the page memories 1296-1 and 1296-2. The output voltage data temporarily stored in the page memories 1296-1 and 1296-2 is periodically transferred to the outside via the readout circuit 1290.
  • the detection signal is temporarily stored in the page memories 1296-1 and 1296-2 at each exposure timing.
  • By providing page memories 1296-1 and 1296-2 capable of storing the detection signals at each exposure timing, an effect is produced in which detection signals during a very short exposure period can be stably detected.
  • FIG. 26C shows the control timing of the exposure timing setting circuit 1292 in FIG. 26B.
  • The exposure period in FIG. 26B (i.e., the period during which the preamplifier 1150 continues to transmit the detection signal from the pixel 1264 in the 3D color image sensor to the envelope detection circuit 1288 while charge accumulates in the capacitor 1160) corresponds to the period during which the connection of the interlocking switch 1300 is cut off (FIG. 26C(b)), that is, only while time elapses from time t1 to t1+τ (FIG. 26C(a)).
  • FIG. 26C(c) shows the timing at which the output of the envelope detection circuit 1288 is taken into the page memory 1296. In this way, the output is captured into the page memory 1296 immediately after the exposure period τ ends.
  • FIG. 26C(d) shows the output signal waveform of the envelope detection circuit 1288 before and after the exposure period.
  • Before the exposure period, the amount of charge accumulated in the capacitors 1160-1 to 1160-4 is "0", so the output signal of the envelope detection circuit 1288 maintains the state of "0".
  • When the exposure period begins, charge begins to accumulate in the capacitors 1160-1 to 1160-4, so the output signal of the envelope detection circuit 1288 begins to increase.
  • After the exposure period τ ends, the charges in the capacitors 1160-1 to 1160-4 are discharged, but the output of the envelope detection circuit 1288 maintains the state immediately before the end of the exposure period τ.
  • FIG. 26C(e) shows data taken into page memory 1296.
  • The data in the page memory 1296 before the exposure period τ has an initial value of "0".
  • the exposure timing setting circuit 1292 issues a data capture instruction to the page memory 1296.
  • the output data of the envelope detection circuit 1288 is captured into the page memory 1296. This captured data is delivered to readout circuit 1290 at appropriate timing.
  • FIG. 27A shows an example embodiment of an optical system combining the measurement unit 8 with the built-in 3D color image sensor 1280 and the light source unit 2 described in the previous Section 8.1.
  • a combination of FIGS. 18A and 18B may be used.
  • By using the external synchronization signal line 730 in FIG. 18B, highly accurate time coordination regarding the exposure period τ of the image sensor 1280 can be achieved.
  • the present invention is not limited to this, and any structure of the light source section 2 including that shown in FIG. 9I may be used.
  • the amount of speckle noise is significantly reduced as the number of angular divisions increases within the optical characteristic changing element 210.
  • the speckle noise pattern changes depending on the irradiation angle to the measurement target object 22. Therefore, by changing the irradiation angle onto the measurement object 22 over time and time-averaging (or time-integrating) the measurement results, the amount of speckle noise can be further reduced.
  • a light reflecting plate 520 is placed in the optical path of the emitted light from the light source section 2, and the inclination angle of the light reflecting plate 520 is slightly moved using piezoelectric elements 526 and 528. Based on this, the irradiation angle to the measurement target object 22 is changed over time.
  • a half mirror 536 is disposed within the measurement unit 8, and a portion of the reflected light from the measurement object 22 (not shown) reaches the 3D color image sensor 1280.
  • An imaging lens moving mechanism 540 moves the imaging lens 144 along the optical axis direction. Due to the function of this imaging lens moving mechanism 540, the 3D color image sensor 1280 can be placed at an imaging position for the measurement target 22 located at an arbitrary position.
  • a two-dimensional array optical shutter 530 is installed at the same imaging position as the 3D color image sensor 1280.
  • This two-dimensionally arrayed optical shutter 530 may be configured with any optical shutter such as a liquid crystal shutter.
  • When this two-dimensional array optical shutter 530 is used, measurement accuracy is improved in the case where near-infrared light is used for length measurement with the 3D color image sensor 1280.
  • a method of using the two-dimensional array optical shutter 530 will be explained.
  • the moisture absorption distribution on the surface of the measurement object 22 can be determined.
  • the influence of the moisture absorption distribution on the surface of the measurement object 22 can be reduced.
  • a two-dimensionally arrayed optical shutter 530 is arranged within the measurement section 8, and the light source section 2 and the measurement section 8 are arranged in close proximity.
  • the present invention is not limited thereto, and the light source section 2 and the measurement section 8 may be arranged at separate positions.
  • When the light source section 2 and the measurement section 8 are arranged at separate positions, the light reflecting plate 520 including the piezoelectric elements 526 and 528 and the two-dimensionally arranged optical shutter 530 may be built into the light source section 2.
  • FIG. 27B shows the control circuit configuration of the 3D color image sensor 1280.
  • a light emission timing control section 1302 within the light emission amount control section 30 controls the light emission amount of near-infrared light within the light source section 2 described above.
  • a non-emission color image (video) storage section 1316 in the signal receiving section 40 stores a color image (video) collected by the 3D color image sensor 1280 in the measurement section 8 when the light source section 2 is not emitting light.
  • the light emission timing control section 1302 in the light emission amount control section 30 performs continuous light emission control of the near-infrared light in the light source section 2 described above.
  • a color image (video) storage section 1318 that includes the light source wavelength light in the signal receiving section 40 stores the color image (video) collected by the 3D color image sensor 1280 at this time.
  • the difference calculation processing unit 1314 in the signal processing unit 42 extracts the difference in color images (videos) depending on whether or not the light source unit 2 emits light.
  • An image (video) storage unit 1312 for only the light-source-wavelength light in the signal processing unit 42 stores the difference result.
  • the saved contents are then transferred to the pattern setting unit 1304 of the two-dimensionally arrayed optical shutter in the light emission amount control unit 30.
  • the pattern setting unit 1304 of the two-dimensionally arrayed optical shutter controls the transmitted light amount distribution characteristics of the two-dimensionally arrayed optical shutter 530.
  • the control circuit in FIG. 27B not only performs this but also performs various timing controls that will be described later from FIG. 28A onwards.
  • An exposure timing setting section 1310 in the signal receiving section 40 performs various timing controls to be described later.
  • a light emission timing control section 1302 in the light emission amount control section 30 controls the light emission timing of the light source section 2 in accordance with a command from the internal system control section 50.
  • The light source section 2 and the measurement section 8 may be arranged at physically separate positions and connected via the Internet. In this case, Internet communication may be performed between the light source section 2 and the measurement section 8 (via the host 50) using the method described in Section 6.4.
  • The 3D color image (video) obtained from the measurement using the combination of the 3D color image sensor 1280 and the light source section 2 is used by the application software 58 in the application field (various optical application fields) compatible section 60 (shown in both FIG. 1 and FIG. 27B).
  • An example of how the application software 58 uses this 3D color image (video) will be described later in Chapter 9. However, the use is not limited to the service provision content described in Chapter 9, and may extend to any service provision content.
  • FIG. 28A shows a method for measuring a three-dimensional color image (video) with high accuracy.
  • The expression "three-dimensional color image (video) measurement" is used here. Therefore, as an alternative to the embodiment described in this Section 8.3 and the following Section 8.4, any charge storage type signal receiving section (including, for example, the content described in Section 7.3) may be combined with the light source section 2.
  • The procedure described in this Section 8.3 can also be used in conjunction with the "correction process for the detected light characteristic distribution obtained from the measurement object 22 when irradiated with light at the emission wavelength of the light source section 2" described in the previous Section 8.2 (performed before or after that correction process).
  • a plurality of light emission patterns 750 (a plurality of light irradiation patterns to the measurement target object 22) in the light source section 2 are used. This difference in the light emission pattern 750 may be notified to the light source section 2 in advance using the light emission pattern identification information 750 (FIG. 19C) in the control signal 790.
  • Length measurement is possible over a very wide range, such as a 100 m range or a 1 km range. However, it is difficult to measure such a wide measurement range at once with high precision. For example, achieving high-precision measurement with a measurement error of 1 mm or less over a wide area such as 1 km in a single measurement takes time. Furthermore, high measurement accuracy is required only near the location where the measurement target object 22 exists; in places where no measurement object 22 exists, high-precision measurement has little meaning.
  • the optimum light emission pattern 750 may be switched depending on the range to be measured and the required accuracy.
  • Three different light emission patterns 750 may be used between the start (ST10) and the end (ST17) of collecting three-dimensional color images (videos).
  • a light emission pattern described later with reference to FIG. 28B(b) is used.
  • a reflected light pattern of the light source wavelength light over the entire measurement distance range is imaged.
  • a light emission pattern described later in FIG. 28C(b) is used.
  • a reflected light pattern of the light source wavelength light is imaged for each measurement distance range.
  • In the subsequent two-dimensional color image (video) collection in step 15, a light emission pattern described later in FIG. 28D(b) is used. Then, in step 16, detailed distance measurement is performed for each measurement distance range using the image (video) obtained from that light emission pattern.
  • FIG. 28B shows a method of imaging a reflected light pattern with light source wavelength light over the entire measurement distance range.
  • the intensity of emitted light from one light emitting point is inversely proportional to the square of the distance from the light emitting point.
  • In reflected-light measurement, the optical path length from the light source section 2 to the measurement section 8 is twice the distance to the measurement object 22. Therefore, the maximum amount of light emitted from the light source section 2 is determined by the measurement range (the maximum distance from the light source section 2 that can be measured). Furthermore, as the measurement object 22 moves farther away, it is necessary to increase the amount of light emitted from the light source section 2.
  • FIG. 28B(b) shows a light emission pattern suitable for imaging a total reflection light pattern over the entire measurement distance range.
  • light is emitted continuously, and the amount of light emitted decreases along a quadratic curve as time passes. Accordingly, uniform light reflection characteristics can be collected over the entire measurement distance range.
  • FIG. 28B(c) shows the exposure timing in the measurement unit 8 (3D color image sensor 1280).
  • The exposure is performed as a rectangular gate that is open from time t1 to time t2.
  • The measurement distance range in this case extends up to a distance of c(t2−t1)/2. From the image acquired at the exposure timing shown in FIG. 28B(c), a total reflection light pattern over the entire measurement distance range can be seen (see the sketch below).
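  • The sketch below expresses this gating and emission profile numerically. The quadratically decreasing profile P(t) ∝ (t_end − t)² is one plausible reading of FIG. 28B(b) that cancels the inverse-square loss for echoes observed at t_end; the text states the quadratic decrease but not the exact formula.

```python
C = 3.0e8  # speed of light in air [m/s]

def full_range(t1_s: float, t2_s: float) -> float:
    # An exposure gate open from t1 to t2 captures echoes from distances
    # up to c * (t2 - t1) / 2, the full measurement distance range.
    return C * (t2_s - t1_s) / 2

def emission_power(p0: float, t_s: float, t_end_s: float) -> float:
    # Quadratically decreasing emission, P(t) = p0 * ((t_end - t)/t_end)^2.
    # An echo observed at t_end from distance d = c*s/2 was emitted s
    # seconds earlier, so this profile offsets the 1/d^2 round-trip loss
    # (a plausible reading of FIG. 28B(b); the exact formula is not given).
    return p0 * ((t_end_s - t_s) / t_end_s) ** 2
```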
  • FIG. 28C shows a method of imaging a reflected light pattern using light source wavelength light for each measurement distance range.
  • the light source section 2 emits intermittent pulsed light as shown in FIG. 28C(b). This intermittent pulsed light emission is performed in a constant cycle at times t1, t2, and t3.
  • An interference prevention period 798 of a predetermined length is provided within this fixed cycle, and the light source section 2 is controlled so as not to emit light during this interference prevention period 798. By controlling emission so that no light is emitted within this interference prevention period 798, erroneous distance measurement based on pulsed light emission at the wrong timing can be prevented, and measurement accuracy is improved.
  • the amount of light emission is changed at each light emission timing t1, t2, and t3.
  • the phenomenon in which the amount of detection signals from the distant measurement target 22 decreases has been described above.
  • FIG. 28C(b) shows a rectangular pulsed light emission state.
  • the present invention is not limited thereto, and light may be emitted using a "modulation signal unique to the light source section 2 (ID information signal of the light source section 2)" during the pulse emission period.
  • Another photodetection element 250 may be arranged within the measurement section 8, and this another photodetection element 250 may simultaneously detect a unique modulation signal (ID information signal of the light source section 2) that is different for each light source section 2.
  • FIG. 28C(c) shows the exposure timing in the measurement unit 8 (3D color image sensor 1280).
  • The delay times Δt1, Δt2, and Δt3 of the exposure timing in the measurement unit 8 (3D color image sensor 1280) with respect to the light emission timing in the light source unit 2 are different for each light emission time t1, t2, and t3.
  • the measurement distance range is changed by changing this delay time.
  • However, the present invention is not limited thereto, and each delay time may be set arbitrarily as long as there is a difference of at least a certain value between the delay times Δt1, Δt2, and Δt3. Therefore, it is also possible to set Δt1 > Δt2 > Δt3.
  • Each image acquired at each timing with the same exposure period τ shows a part of the image acquired in FIG. 28B. Therefore, by comparing each image acquired here with the image acquired in FIG. 28B, it is possible to grasp the approximate distance of each region into which the total reflection light pattern over the entire measurement distance range is finely divided (see the gating sketch below).
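  • The correspondence between a delay time and its measurement distance range can be sketched as below: a gate of width τ opened Δt after the pulse sees only echoes whose round trip falls inside the gate. The delay and τ values in the example are arbitrary illustrations.

```python
C = 3.0e8  # speed of light in air [m/s]

def range_window(delay_s: float, tau_s: float) -> tuple[float, float]:
    # A gate of width tau opened delay_s after the emission pulse sees only
    # echoes whose round trip falls inside the gate, i.e. objects between
    # c*delay/2 and c*(delay + tau)/2.
    return C * delay_s / 2, C * (delay_s + tau_s) / 2

# Example delay times Δt1, Δt2, Δt3 (arbitrary illustrative values):
for dt in (100e-9, 300e-9, 500e-9):
    print(range_window(dt, tau_s=100e-9))
```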
  • FIG. 28D shows a detailed distance measurement method within each measurement distance range.
  • The points that the interference prevention period 798 is set in the light emission state of the light source section 2 (FIG. 28D(b)) and that the delay times Δt1 and Δt2 of the exposure period τ are changed at each light emission time t1 and t2 of the light source section 2 (FIG. 28D(c)) are consistent with FIG. 28C.
  • the part that causes the light source section 2 to emit modulated light intermittently is different from FIG. 28C(b).
  • "uniform periodic light emission” is desirable.
  • the duty ratio be 50%.
  • this light emission state satisfies the conditions of "uniform period” and "50% duty ratio”
  • light may be emitted with any waveform. For example, a sine wave, a rectangular wave, a triangular wave, etc. may be freely set.
  • FIG. 28E shows an explanatory diagram of a distance measurement method using the phase detection method.
  • the structure inside the 3D color image sensor 1280 has been described using FIG. 26A.
  • four pixel sets 1262-1, 1264-1, 1266-1, and 1268-1 are used. Further, each exposure timing is shifted in accordance with the modulated light emission state of the light source section 2 described above.
  • FIGS. 28E(b) to (e) show the exposure timing for each of the four pixels 1262, 1264, 1266, and 1268.
  • The exposure periods τ of the four pixels 1262, 1264, 1266, and 1268 are all made to match. The exposure timing is then shifted pixel by pixel by the exposure period τ. As shown in FIG. 28E(f), the modulated light emission period of the light source section 2 is set to "4τ".
  • From the four detection signal amounts obtained in this way, the amount of delay phase φ of the detection light can be calculated.
  • As a result, the amount of delay of the detection light returning to the measurement unit 8 (the 3D color image sensor 1280 therein) can be determined with high precision (see the sketch below).
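  • One way to realize this calculation is the standard four-bucket demodulation sketched below; this particular arctangent formula is an assumption (the text specifies the four τ-shifted exposures A1 to A4 and the 4τ emission period, but not the arithmetic).

```python
import numpy as np

C = 3.0e8  # speed of light in air [m/s]

def phase_tof_distance(a1: float, a2: float, a3: float, a4: float,
                       tau_s: float) -> float:
    # A1..A4: signal amounts from the four pixels whose exposures are
    # shifted by tau; the emission period is 4*tau (FIG. 28E(f)).
    # Standard four-bucket phase estimate (one common sign convention):
    phi = np.arctan2(a4 - a2, a1 - a3) % (2 * np.pi)  # delay phase φ
    t_round = phi / (2 * np.pi) * (4 * tau_s)         # round-trip delay
    return C * t_round / 2                            # distance to object 22
```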
  • Speckle noise can be significantly reduced using the method described in the following Section 8.4.
  • Section 8.4 Method for reducing the influence of laser speckle noise on measurements.
  • As shown in FIG. 17B, even if the number of angular divisions of the optical characteristic changing element 210 is significantly increased, it is difficult to completely eliminate speckle noise.
  • Chapter 7 explained a high-precision measurement method that combines optical noise reduction technology and circuit technology.
  • FIG. 28F shows a method for further reducing the effects of speckle noise by combining circuit techniques.
  • It is assumed here that optical noise reduction measures, such as those described in Chapter 7, the explanation of FIG. 27A in Section 8.2, and the explanation of FIG. 9I in Section 3.3, are implemented in the light source unit 2. However, the present embodiment is not limited thereto, and only the embodiment described below may be implemented without performing optical noise reduction.
  • FIG. 28F(b) shows the light emitting state in the light source section 2.
  • the pulsed light emission explained in FIG. 28C(b) starts from time t2. However, as explained using FIG. 28C(b), light emission modulation may be performed during this pulse light emission period. From the subsequent time t4, the modulated light emission explained in FIG. 28D(b) is performed.
  • In the detection unit 8, exposure periods τ are set at different timings.
  • The signals detected in the exposure periods τ at these different timings are stored separately in the page memory 1296 (FIG. 26B) as separate signals.
  • In FIGS. 28F(c) to (e), to simplify the explanation, only the exposure period τ corresponding to one pixel 1262 is shown. In reality, however, as shown in FIGS. 28G and 28H, the exposure start time is shifted by the exposure period τ for each of the pixels 1262 to 1268.
  • FIG. 28F(c) shows the timing of the first exposure period τ in the detection unit 8 (the 3D color image sensor 1280 thereof).
  • This first exposure period τ starts from time t1, before the light source section 2 emits light.
  • a color image (video) of only visible light is captured with the light source section 2 in a non-emission state.
  • the image (video) obtained here matches the content of the image (video) stored in the non-emission color image (video) storage unit 1316 in FIG. 27B.
  • FIG. 28F(d) shows the timing of the second exposure period τ in the detection unit 8 (the 3D color image sensor 1280 thereof).
  • The second exposure period τ starts at time t3, delayed by Δt0 from the time t2 at which the light source section 2 starts emitting pulsed light.
  • As a result, an image (video) is obtained in which a reflected image (video) of the light of the emission wavelength of the light source unit 2, irradiated by pulsed light emission, is superimposed on a color image (video) obtained from visible light.
  • This image (video) matches the content of the image (video) stored in the color image (video) storage unit 1318 including the light source wavelength light in FIG. 27B.
  • The second exposure period τ in the detection unit 8 (the 3D color image sensor 1280 thereof) thus starts Δt0 after the light source unit 2 starts emitting pulsed light.
  • However, the actual pulse emission period of the light source section 2 is much longer than the second exposure period τ in the detection section 8 (the 3D color image sensor 1280 thereof).
  • Therefore, reflected images (videos) of the light of the light source emission wavelength are superimposed on the images acquired in this period.
  • Speckle noise is mixed into the reflected image (video) of the light emitted by the light source section 2 at its emission wavelength.
  • Moreover, the speckle noise mixing rate differs between the four pixels 1262, 1264, 1266, and 1268 that make up one set. In this second exposure period τ, an image (video) mixed with speckle noise is obtained for each of the pixels 1262 to 1268.
  • each of the areas A1 to A4 shown in FIG. 28E(g) shows an ideal state without speckle noise.
  • the influence rate of speckle noise (rate of change in amount of light caused by speckle noise) differs for each pixel 1262 to 1268. Therefore, when the influence of speckle noise is taken into consideration, the levels A1 to A4 in FIG. 28E(g) individually vary greatly due to the influence of speckle noise.
  • the influence rate of speckle noise for each pixel 1262 to 1268 (rate of variation in light amount caused by speckle noise) is constant regardless of the amount of light emitted by the light emitting unit 2.
  • It is considered that the influence rate of speckle noise for each pixel 1262 to 1268 (rate of light amount fluctuation caused by speckle noise) is almost the same whether the light emitting pattern of the light emitting unit 2 is the pulsed light emission state or the modulated light emission state shown in FIG. 28F(b).
  • Therefore, the influence rate of speckle noise (rate of light amount fluctuation caused by speckle noise) for each of the pixels 1262 to 1268, obtained during the second exposure period τ in the detection unit 8 (3D color image sensor 1280), is used as first extraction information.
  • The second information, corresponding to the "phase information obtained from the measurement object 22", is then extracted using this first extraction information.
  • FIG. 27F(e) shows the timing of the third exposure period γ in (the 3D color imaging device 1280 of) the detection unit 8.
  • This third exposure period γ starts at time t5, which is delayed by Δt0 from time t4 at which the light source section 2 starts modulated light emission. Because of space constraints in the drawing, the interval between time t3 and time t4 in FIG. 28F appears narrow; in reality, the interval between time t3 and time t4 is sufficiently wide.
  • An image (video) is obtained in which a reflected image (video) of light of the emission wavelength of the light source unit 2, irradiated with modulated light emission, is superimposed on the color image (video) obtained from visible light.
  • The reflected image (video) of light of the emission wavelength of the light source unit 2 includes the "phase component information obtained from the measurement object 22." This "phase information obtained from the measurement object 22" is buried in the speckle noise for each pixel 1262 to 1268. Therefore, by using the first extraction information acquired in the second exposure period β (the influence rate of speckle noise for each pixel 1262 to 1268, i.e. the rate of light amount variation caused by speckle noise), the "phase component information obtained from the measurement object 22" (second information) buried in the speckle noise is extracted (1000 in FIG. 20A).
  • FIG. 28G shows detection signals for each pixel obtained in the first exposure period and the second exposure period.
  • FIGS. 28G(b) to (e) show the detection signals obtained by the four pixels 1262 to 1268 forming one set during the first exposure period α. The red intensity b1, blue intensity b2, green intensity b3, and white intensity b4 in the color information collected in the first exposure period α are obtained individually. Furthermore, the start of the exposure period α is shifted by one exposure period for each of the four pixels 1262 to 1268.
  • FIGS. 28G(f) to (i) show the detection signals obtained by the four pixels 1262 to 1268 forming one set during the second exposure period β.
  • The start of the exposure period β is likewise shifted by one exposure period for each of the four pixels 1262 to 1268.
  • In an ideal state without speckle noise, the amount of reflected light Δhi (1 ≤ i ≤ 4) of the light emitted by the light source 2, which is added to the intensity bi (1 ≤ i ≤ 4) of each color in the color information, shows approximately the same value for all four pixels.
  • When the speckle noise is large, however, the added amounts of reflected light Δh1, Δh2, Δh3, and Δh4 differ greatly between the four pixels 1262 to 1268.
  • FIG. 28H shows detection signals for each pixel obtained in the second exposure period and the third exposure period.
  • FIGS. 28H(f) to (i) match FIGS. 28G(f) to (i).
  • FIGS. 28H(j) to (m) show the detection signals obtained by the four pixels 1262 to 1268 forming one set during the third exposure period γ.
  • The start of the exposure period γ is also shifted by one exposure period for each of the four pixels 1262 to 1268.
  • The values ΔL1, ΔL2, ΔL3, and ΔL4, in which the phase component information obtained from the measurement object 22 in response to the modulated light emission of the light source unit 2 is buried in speckle noise, are added to the respective color intensities bi (1 ≤ i ≤ 4).
  • From these detection signals, the third extraction information is extracted.
  • By combining the first extraction information with this third extraction information, the second information (the substantive phase component Ai) mixed into the signal obtained during the exposure period γ can be extracted (see the sketch below).
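The following is a minimal numeric sketch of this extraction flow, assuming a simple ratio model consistent with FIGS. 28G and 28H; the variable names and the exact arithmetic are illustrative assumptions, not the definitive procedure of this embodiment:

```python
# Assumed model:
#   beta reading  : b_i + dh_i   (pulsed emission, speckle only)
#   gamma reading : b_i + dL_i   (modulated emission, speckle x phase)
#   speckle rate  : r_i = dh_i / mean(dh)   (first extraction information)
#   phase part    : A_i = dL_i / r_i        (second information)
def extract_phase(b, beta_read, gamma_read):
    dh = [s - c for s, c in zip(beta_read, b)]   # speckle-affected increments
    dL = [s - c for s, c in zip(gamma_read, b)]  # phase info buried in speckle
    mean_dh = sum(dh) / len(dh)
    r = [d / mean_dh for d in dh]                # per-pixel speckle influence rate
    return [d / ri for d, ri in zip(dL, r)]      # substantive phase components A_i

b = [10.0, 12.0, 11.0, 9.0]            # color intensities from exposure alpha
beta_read = [14.0, 18.0, 12.5, 13.0]   # alpha intensities + speckled reflections
gamma_read = [12.0, 15.0, 11.75, 11.0]
print(extract_phase(b, beta_read, gamma_read))
```

In this toy model the recovered components Ai come out equal for all four pixels, reflecting the assumption that the speckle influence rate estimated during the exposure period β also applies during the exposure period γ.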
  • FIG. 29A shows the basic cross-sectional structure of the linear variable bandpass filter 190.
  • the thickness of the optical thin film 194 formed on the transparent substrate 192 changes depending on the location.
  • When panchromatic light (light containing components of different wavelengths) 188 is made incident on this linear variable bandpass filter 190, it undergoes multiple reflections between the surface of the optical thin film 194 and its interface with the transparent substrate 192.
  • As a result, the wavelengths of the transmitted lights 198-1 to 198-4 change depending on the location (a simple position-to-wavelength sketch follows below).
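As a rough illustration, if the film-thickness change is assumed to be linear along the filter, the transmitted center wavelength can be modeled as a linear function of position; the wavelength range and filter length below are illustrative assumptions only:

```python
# Assumed linear mapping from position on the filter to transmitted center
# wavelength, reflecting the location-dependent thickness of the optical
# thin film 194. Range and length are illustrative, not from the embodiment.
def transmitted_wavelength_nm(x_mm, length_mm=50.0,
                              lambda_min_nm=800.0, lambda_max_nm=1600.0):
    return lambda_min_nm + (lambda_max_nm - lambda_min_nm) * (x_mm / length_mm)

for x in (0.0, 12.5, 25.0, 50.0):   # e.g. positions of lights 198-1 to 198-4
    print(f"x = {x:5.1f} mm -> {transmitted_wavelength_nm(x):7.1f} nm")
```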
  • FIG. 29B shows an embodiment of a hyperspectral detection method configured by combining this linear variable bandpass filter 190 and a 3D color image sensor 128 having the structure shown in FIG. 26A.
  • a linear variable band pass filter moving mechanism 196 moves the linear variable band pass filter 190 as time progresses. Therefore, the wavelength of the light passing through the pinhole 310 placed at the focusing position of the focusing lens 330 changes with time.
  • the light passing through this pinhole 310 illuminates the measurement object 22 (not shown) via the imaging lens 144.
  • a portion of the reflected light from the measurement object 22 is reflected by the half mirror 536 and is directed toward the 3D color image sensor 128 .
  • the imaging lens moving mechanism 540 moves the imaging lens 144 in the optical axis direction, and aligns the position of the 3D color image sensor 128 with the imaging position of the measurement target 22.
  • The light source section 2 emits panchromatic light intermittently.
  • The 3D color image sensor 128 collects color images of the measurement object 22 during the non-emission period of the light source section 2. During the emission period of the light source section 2, it collects images in which a reflected light image of the wavelength light passing through the linear variable bandpass filter 190 is superimposed on the color image. The difference in light intensity between the two images becomes the hyperspectral data cube signal (a minimal sketch of this difference operation follows below).
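A minimal sketch of this difference operation, assuming image stacks indexed by wavelength step (array shapes and the acquisition loop are assumptions for illustration):

```python
import numpy as np

# The cube is the per-wavelength difference between the image taken during
# emission and the color-only image taken during the non-emission period.
def hyperspectral_cube(emission_frames, dark_frames):
    """Both inputs: (n_wavelengths, height, width) arrays."""
    return np.asarray(emission_frames) - np.asarray(dark_frames)

n_wl, h, w = 8, 4, 4                         # illustrative sizes
lit = np.random.rand(n_wl, h, w) + 0.2       # frames during pulsed emission
dark = np.full((n_wl, h, w), 0.2)            # color-only (non-emission) frames
print(hyperspectral_cube(lit, dark).shape)   # (8, 4, 4) data cube signal
```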
  • A heat-emitting filament lamp such as a halogen lamp, or a mercury lamp, may be used as the light emitting section 470 in the light source section 2 that emits panchromatic light.
  • However, because the electrical response speed of a heat-emitting filament is slow, intermittent light emission at high speed is difficult.
  • An improved form of the structure in FIG. 11A will therefore be described as an example of this embodiment, as a structure of the light source section 2 that can emit panchromatic light with high-speed response.
  • the emission wavelength of the semiconductor laser device 500 used as the second light emitting source 170 is set to 1250 nm or less, and is used as an excitation light source for the near-infrared emitting phosphors 162 and 164.
  • the arrangement of the first light emitting source 160 (LED light source) within the light emitting section 470 may be omitted.
  • The fluorescent substances 182 to 186 used in the near-infrared emitting phosphors 162 and 164 are materials with a short fluorescence half-life (materials that emit fluorescence immediately after excitation and stop emitting fluorescence immediately after the excitation light irradiation is stopped). By appropriately selecting the material, a light emitting section 470 with a fast fluorescence response speed can be realized.
  • the method for extracting light of a predetermined wavelength from the panchromatic light emitted from the light source section 2 is not limited to the use of the linear variable bandpass filter 190.
  • The predetermined wavelength light may be extracted from the panchromatic light by any method, such as using a general optical filter, a Fabry-Pérot resonator with variable transmission wavelength, or mechanically tilting the spectroscopic element 320.
  • When the light source section 2 having the above structure is arranged as shown in FIG. 29B and operated in conjunction with the 3D color image sensor 1280 in the manner described in Sections 8.1 to 8.4, the near-infrared spectral characteristics of the measurement object 22 can be measured simultaneously with three-dimensional measurement of the surface irregularities of the measurement object 22. In other words, by controlling the light emission of the light source section 2 in conjunction with the detection unit 8 and controlling the wavelength of the emitted light from the light source section 2, it is possible to simultaneously measure the spectral characteristics of the measurement object 22 and the distance to the measurement object 22.
  • FIG. 30A is an explanatory diagram of examples of input devices and output devices necessary for using a predetermined service providing domain.
  • A predetermined service providing domain 1058 is formed in cyberspace and made available to users.
  • the end user 1080 needs the input device 1060 and output device 1070 necessary for using the predetermined service providing domain 1058 in the cyberspace.
  • a keyboard and a mouse are mainly used as input devices 1060.
  • a touch screen corresponds to the input device 1060.
  • Touch screens have a greater variety of functions than keyboards.
  • Automatic image (video) input technology or automatic voice input technology may be used as an input method with improved functional diversity, operability, and convenience compared with the touch screen. That is, an image (video) collection function and an audio information collection function are used as the device classification 1062 for the input device 1060 in this embodiment.
  • a microphone is an input device type 1064 that collects this audio information.
  • As the input device form 1064 having an image (video) collection function, the 3D color camera 1280 described in Chapter 8, or a visible color camera that can simultaneously measure near-infrared spectral characteristics, can be used.
  • the user's body movements can be used as an information input method (data usage purpose 1068).
  • data usage purpose 1068 the user's gestures and finger actions may be used in place of the existing mouse.
  • a gesture interpretation function, a fingertip command interpretation function, and a virtual three-dimensionalization function may be provided within the input device 1060 or within the host 50 connected to the input device.
  • Composition analysis of the blood of the end user 1080 may be performed from outside in a non-invasive or non-contact manner. For example, it becomes possible to automatically input the user's physical condition using blood sugar level measurement results. Furthermore, the excited state of the end user 1080 can be input in real time based on the adrenaline content in the blood.
  • Output device classification 1062 based on the functions that can be realized by the output device 1070 includes a three-dimensional display function, a modeling function, an audio information output function, a tactile stimulation device function, etc.
  • An end user 1080 movement inhibition function may also be included. For example, when the end user 1080 carries heavy luggage in the real world, the degree of freedom of movement is restricted. Such a freedom-of-movement constraint imposed on the end user 1080 upon entering the predetermined service provision domain 1058 in cyberspace may be implemented as part of this tactile stimulation device functionality.
  • a thin stationary display screen or a portable display device such as VR (virtual reality) or AR (augmented reality) may be used.
  • a 3D printer may be used as an output device with a modeling function.
  • A speaker can be used to output audio information, and a skin (epidermis) pressurizing device provides the tactile stimulation function.
  • FIGS. 30B and 30C show an input device 1060 and an output device 1070 used in this embodiment.
  • a thin stationary display screen (wall-mounted stereoscopic display or computer display stereoscopic display) 900 is used as an embodiment of the output device 1070 used as the display unit 18.
  • One light source section 2 and a plurality of measurement sections 8 are disposed in a part of this thin stationary display screen 900 or in a part of its outside (the outer frame 902 of the wall-mounted 3D display or computer-display 3D display).
  • This set of one light source section 2 and a plurality of measurement sections 8 corresponds to the input device 1060.
  • FIG. 30B uses the embodiment examples of the light source section 2 and measurement section 8 described up to Chapter 8. In particular, by arranging a plurality of measurement units 8 at different positions, stereoscopic images (stereoscopic videos) can be collected from multiple angles.
  • Lenticular lenses are arranged horizontally on the surface of the thin stationary display screen (wall-mounted 3D display or computer display 3D display) 900.
  • The measurement unit 8 measures the three-dimensional position of the end user's 1080 eyeballs in real time. Then, according to the eyeball position of the end user 1080 (by eye tracking), each pixel of the thin stationary display screen (wall-mounted 3D display or computer-display 3D display) 900 is individually assigned to an image for the right eye or an image for the left eye of the end user 1080.
  • The lenticular lens also directs the right-eye and left-eye images toward the end user's 1080 right and left eyes, respectively. A virtual stereoscopic image is then displayed to the end user 1080 by utilizing the difference in convergence angle between the two images, whose virtual positions in the front-rear direction differ as captured by the right eye and the left eye.
  • the virtual keyboard visible in the foreground in FIG. 30B shows an example of display using the above method.
  • Three-dimensional measurement can be performed by the combined operation of the light source section 2 and the measurement section 8. Therefore, the three-dimensional positions of the ten fingertips of the end user 1080 can be measured in real time. For example, when the end user 1080 places both hands on the three-dimensionally displayed virtual keyboard and moves the ten fingers, a virtual key-in operation becomes possible (a hit-test sketch follows below).
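A hedged sketch of such a virtual key-in decision: each measured fingertip position is tested against the 3D region of each virtual key. The key layout, sizes, and press-depth threshold are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class VirtualKey:
    label: str
    x: float            # key center on the virtual keyboard plane
    y: float
    z: float            # depth of the keyboard plane
    half: float = 0.009 # half of the key pitch in metres (assumed)

def hit_key(fingertip, keys, press_depth=0.005):
    """Return the label of the key the fingertip pressed, or None."""
    fx, fy, fz = fingertip
    for k in keys:
        if (abs(fx - k.x) < k.half and abs(fy - k.y) < k.half
                and fz < k.z + press_depth):  # finger pushed past the key plane
            return k.label
    return None

keys = [VirtualKey("A", 0.00, 0.0, 0.40), VirtualKey("S", 0.02, 0.0, 0.40)]
print(hit_key((0.021, 0.004, 0.402), keys))   # -> "S"
```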
  • FIG. 30C shows another example embodiment regarding input device 1060 and output device 1070.
  • AR glasses 820 and 830 capable of stereoscopic display are used. These AR glasses 820 and 830 display a right-eye image (video) and a left-eye image (video) separately, and perform stereoscopic display using the above-mentioned difference in convergence angle. Note that instead of the AR glasses 820 and 830, VR glasses may be used.
  • FIG. 30C(b) shows an example in which a virtual keyboard placed a predetermined distance away from the end user 1080 is displayed as a stereoscopic display screen (image).
  • the end user 1080 looks at this virtual keyboard and keys in the virtual keyboard while moving his ten fingers.
  • A brooch 810 with a built-in 3D color image sensor, fixed to the breast pocket of the end user 1080, contains the light source section 2 and the measuring section 8; the emitted light from the light source section 2 is reflected by the fingertips of the end user 1080.
  • the measurement unit 8 uses the reflected light from the fingertips to measure a 3D color image (video) including the three-dimensional positions of the ten fingertips in real time.
  • the end user 1080 carries on his back a backpack 850 that includes a power supply unit, a control unit 50, and a communication function for external communication.
  • This backpack 850 supplies power to the AR glasses 830 (display section 18) via a connection cable 866.
  • the backpack 850 also generates a right eye image (video) and a left eye image (video) and transmits them to the AR glasses 830 via the connection cable 866.
  • This backpack 850 also supplies power to the brooch 810 with built-in 3D color image sensor via the connection cable 866.
  • the measurement unit 8 also transmits the three-dimensional image (three-dimensional video) collected by the measurement unit 8 to the backpack 850 via the connection cable 876.
  • the signal processing unit 42 in the backpack 850 analyzes this three-dimensional image (three-dimensional video) and estimates the movements of the ten fingertips of the end user 1080. Then, using this estimation result, the position within the virtual keyboard where the end user 1080 keyed in is determined.
  • The backpack 850 with built-in control unit 50, the connection cables 866 and 876, the brooch 810 with built-in 3D color camera (containing the light source unit 2 and measurement unit 8), and the entire AR glasses 830 supporting the display unit 18 together constitute the optical device 10 (FIG. 1).
  • In another form, the entire AR glasses 820, corresponding to the display section (optical system) 18, constitute the optical device 10 (FIG. 1).
  • A pendant or necklace 800 with a built-in 3D color camera contains the light source section 2 and the measurement section 8 and provides the function of the measurement section 8.
  • a virtual image 1400 in the virtual space such as a virtual keyboard displayed as if placed a predetermined distance away from the end user 1080, is defined within a predetermined service providing domain 1058 in cyberspace.
  • the virtual image 1400 in this virtual space has a three-dimensional structure, and its position and shape change over time (that is, it becomes a virtual four-dimensional structure).
  • A virtual image 1400 in the virtual space, defined (regulated) within the predetermined service providing domain 1058 in cyberspace and displayed to the end user 1080, and a real object 1410 in the real world, such as a finger of the end user 1080, operate in conjunction with high precision.
  • For this purpose, a method of "aligning the position of the virtual image 1400 in the virtual space with the position in the real world in three-dimensional directions, based on the real object 1410 in the real world" may be performed.
  • FIG. 30C(a) shows an example of a scene where the end user 1080 is performing alignment in the three-dimensional direction. Specifically, the position and display size of the virtual image 1400 in the virtual space are adjusted using the left hand (real object 1410) of the end user 1080 as a reference. This alignment uses the movement of the index finger of the end user's 1080 right hand.
  • FIG. 31A shows both the real object 1410 in the real world that the end user 1080 sees during the above alignment and the virtual image 1400 in the virtual space, superimposed.
  • the measurement unit 8 in FIG. 30C(a) has a built-in 3D color image sensor 1280. Therefore, this measurement unit 8 images (photographs) the left hand of the end user 1080 in real time. Then, the display screen within the AR glasses 820 capable of stereoscopic display displays the imaged (photographed) left hand of the end user 1080 as a virtual image 1400 in virtual space.
  • FIG. 31A(a) shows an example in which the virtual image 1400 in the virtual space is displayed larger than the real object 1410.
  • FIG. 31A(b) shows the right hand of end user 1080 existing as real object 1410.
  • the measuring unit 8 in FIG. 30C(a) captures an image of the movement, and the signal processing unit 42 (FIG. 1) interprets it as an "instruction to reduce the size of the virtual image 1400 in the virtual space.”
  • The direction of movement is defined by the direction of the pad of the right index finger (the side opposite the nail) of the end user 1080.
  • the measuring unit 8 in FIG. 30C(a) measures the distance in the front-back direction to the real object 1410 (the left hand of the end user 1080) in real time.
  • In FIG. 31A(a), a real object 1410 corresponding to the left hand of the same end user 1080 and a virtual image 1400 in the virtual space are displayed in an overlapping manner.
  • The distance in the front-rear direction of the virtual image 1400 in the virtual space may be automatically adjusted to the measured distance in the front-rear direction to the real object 1410 (the left hand of the end user 1080), as sketched below.
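A minimal sketch of this automatic depth alignment, assuming a single depth axis and a dictionary-style pose record (both hypothetical):

```python
# The virtual image's front-rear (depth) coordinate is snapped to the
# distance of the real object 1410 measured by the measurement unit 8.
def align_depth(virtual_pose, measured_object_distance):
    """Return the virtual-image pose with its depth matched to the real object."""
    pose = dict(virtual_pose)
    pose["z"] = measured_object_distance   # front-rear direction only
    return pose

virtual_pose = {"x": 0.1, "y": -0.05, "z": 0.60, "scale": 1.2}
print(align_depth(virtual_pose, measured_object_distance=0.48))
```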
  • an "OK mark” is displayed with the fingertip of the right hand (real object 1410) of the end user 1080 as shown in FIG. 31A(c). good.
  • this "enter information” is transferred to the predetermined service providing domain 1058 in cyberspace.
  • The position and size of the virtual keyboard may be specified using both hands (real object 1410) of the end user 1080, as shown in FIG. 31A(d).
  • FIG. 31A shows an example of generating input information to a predetermined service providing domain in cyberspace using a user's finger action.
  • the relationship between the movement of the user's finger, the shape of the finger, and the input information is not limited to the above, and may be set arbitrarily.
  • Not only the movement of the user's fingers but also the movement of the user's entire body (gestures), the movement and expression of the user's face, and any other method using the real object 1410 may be used to realize information input to the predetermined service providing domain in cyberspace.
  • the input device 1060 for the predetermined service providing domain 1058 in cyberspace shown in FIG. 30A can use near-infrared light.
  • The basic functions 1066 of the input device 1060 using this near-infrared light include biological measurement and emotion prediction.
  • The data usage purposes 1068 of the information collected by the input device 1060 using this near-infrared light can be composition analysis of the living body and personal authentication.
  • FIGS. 31B and 31C show an example of a usage mode when near-infrared light is used as the input device 1060.
  • FIG. 31B shows an example of a usage environment of an input device 1060 that uses near-infrared light.
  • the predetermined light 230 emitted from the light source section 2 includes near-infrared light.
  • the measurement unit 8 measures the reflected light from the measurement target object 22.
  • FIG. 31C shows an enlarged view of the usage environment example shown in FIG. 31B.
  • Light with a wavelength exceeding 1.3 μm is largely absorbed by water in the living body. Therefore, when the palm 23 is irradiated with light with a wavelength exceeding 1.3 μm, the blood vessel regions 600 are observed to stand out. There are individual differences in the patterns of the blood vessel regions 600; these pattern differences can therefore be utilized for personal authentication (a matching sketch follows below).
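A hedged sketch of vein-pattern matching based on this absorption contrast; the threshold and the similarity score are illustrative assumptions, not the authentication algorithm of this embodiment:

```python
import numpy as np

# Beyond 1.3 um, water absorption makes the blood vessel regions 600 appear
# dark, so thresholding the reflectance image yields a binary vein pattern
# that can be compared with an enrolled template.
def vein_pattern(image, threshold=0.35):
    return np.asarray(image) < threshold          # True where absorption is high

def authenticate(probe_img, template_pattern, min_similarity=0.9):
    probe = vein_pattern(probe_img)
    match = np.mean(probe == template_pattern)    # fraction of agreeing pixels
    return match >= min_similarity

enrolled = vein_pattern(np.random.rand(64, 64))   # illustrative template
print(authenticate(np.random.rand(64, 64), enrolled))
```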
  • FIG. 32 shows an embodiment of a service provision method for an end user 1080 using a time-operable service provision domain 1500.
  • This time-manipulable service providing domain 1500 is located in a part of the predetermined service providing domain 1058 in cyberspace described above. When this time-manipulable service providing domain 1500 is used, a daily charge may be applied. If it is used over multiple days, billing for the next day is required at the time the use continues into that day.
  • Internet communication charges are required to enter the cyberspace 1058 using the input device 1060 and the output device 1070. The fee for using the service providing domain 1500 that allows time adjustment on a daily basis may be added to this Internet communication fee.
  • a user 1450 (end user 1080) in real space performs an entry procedure 1506 before entering a service providing domain 1500 that allows time manipulation.
  • personal authentication using the biometric authentication described in the previous section using FIGS. 31B and 31C is performed.
  • the user 1450 (end user 1080) in the real space is asked whether or not the daily admission fee can be automatically debited.
  • the user 1450 (end user 1080) in the real space approves the automatic withdrawal of the daily admission fee, entry into the time-manipulatable service providing domain 1500 is permitted.
  • a user who enters the time-manipulable service provision domain 1500 changes from a user 1450 in real space to a user in cyberspace.
  • the user in this cyberspace is referred to as an end user 1080.
  • a usage menu within the time-manipulable service providing domain 1500 is displayed.
  • The user 1450 (end user 1080) in real space makes a desired menu selection 1528.
  • a first channel progression 1530 then begins over time.
  • End users 1080 within time-enabled service provision domain 1500 can transition to second channel progression 1540 via menu selection 1528.
  • the inside of the display section 18 may be divided into multiple screens. By dividing multiple screens, the first channel progress 1530 and the second channel progress 1540 can be displayed simultaneously.
  • This allows the end user 1080 to operate efficiently within the time-manipulable service providing domain 1500. Since the user 1450 has only one body in real space, multiple experiences cannot be had at the same time. However, in the service providing method shown in this embodiment, the end user 1080 in cyberspace can receive multiple experiences at the same time. With this multiple-screen simultaneous display function, the end user 1080 can be provided with services that are not available in real space.
  • the end user 1080 can enter and leave the service providing domain 1500, which allows time manipulation, multiple times during the same day.
  • the end user 1080 returns to user 1450 in real space.
  • The end user 1080 temporarily leaves 1510 the time-manipulable service providing domain 1500 when the elapsed time 1498 is tR.
  • Meanwhile, the elapsed time 1498 advances within the time-manipulable service providing domain 1500, far beyond the time tR.
  • the user re-enters 1512 the service providing domain 1500 where the time can be manipulated.
  • At that point, the user may return to time tR within the time-manipulable service providing domain 1500. Then, by using the fast-forward playback function 1546, the end user 1080 can catch up on events that occurred within the domain during the period of work 1520 outside it. In this way, within the time-manipulable service providing domain 1500 shown in this embodiment, "services that transcend time and space" can be provided to the end user 1080.
  • the input device 1060 used in this embodiment can perform distance measurement (surface unevenness characteristic measurement) with high precision for a measurement target 22 that is sufficiently far away, such as 100 meters away or 1 km away. Therefore, by using the zoom function of the display screen, it is possible to instantly display the object 22 to be measured 1 km away as if it were very close. By using such a zoom display function, it is possible to provide the end user 1080 with the experience of "instantaneous transportation” as a "service that transcends space.”
  • the input device 1060 shown in this embodiment can be made smaller and lighter. Therefore, the input device 1060, which is smaller and lighter, may be built into the drone, for example. This allows input of captured images (videos) from the sky. By using this captured image (video) from the sky, it is also possible to provide "services that transcend gravity" to the end user 1080. In other words, the service providing domain 1500 that allows time manipulation may provide the end user 1080 with the experience of flying in the sky or floating in the air.
  • the service providing domain 1500 that allows time manipulation may charge an additional admission fee at the timing of switching from the first day of admission 1502 to the second day of admission 1504. At this switching timing, the service providing domain 1500 that can manipulate the time displays an "inquiry about next day admission fee payment" to the end user 1080. When the end user 1080 performs "Next Day Admission Fee Payment Approval 1550", the service within the time-operable service providing domain 1500 is continued.
  • Section 9.3 Data format example of 4-dimensional data obtained by measurement
  • Audio information collected during imaging with the 3D color image sensor 1280 may use an existing audio recording format.
  • the playback synchronization with the audio information during playback of the four-dimensional image (video) information may be performed using time-related information 1702 in the data format, which will be described later.
  • FIG. 33A shows an example of local coordinate axes viewed from the 3D color image sensor 1280.
  • a Zl coordinate axis is defined in the perpendicular direction of the 3D color image sensor 1280.
  • coordinate axes of Xl and Yl are defined along the arrangement direction of the image sensors 1262 to 1268.
  • the imaging time is defined as Tl.
  • the position information input from the 3D color image sensor 1280 is defined as a position on a four-dimensional coordinate axis, which is three-dimensional spatial position information plus a time axis.
  • By managing the measurement results in four-dimensional coordinates including the time axis Tl in this way, it becomes possible to handle a wide variety of information, including the movement and time-series shape changes of the measurement object 22, in the form of video (a small record sketch follows below).
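A minimal sketch of one measurement sample on the local four-dimensional axes of FIG. 33A (the field names are hypothetical shorthand for Xl, Yl, Zl, and Tl):

```python
from dataclasses import dataclass

@dataclass
class Sample4D:
    xl: float   # along one arrangement direction of the imaging pixels
    yl: float   # along the other arrangement direction
    zl: float   # perpendicular to the 3D color image sensor 1280
    tl: float   # imaging time

p = Sample4D(xl=0.012, yl=-0.004, zl=0.850, tl=3.20)
print(p)
```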
  • FIG. 33B shows an example of the data format of four-dimensional data obtained from the surface of the measurement target object 22.
  • the 3D color image sensor 1280 collects information on the four-dimensional coordinates and RGBW intensities (red intensity, green intensity, blue intensity, and white intensity in color information) of each point on the surface of the measurement object 22.
  • Each point on the surface of the measurement object 22 is defined by a node 1600.
  • In finely detailed portions of the uneven shape, the intervals between the nodes 1600 are set narrow. In rough portions of the uneven shape, the intervals between the nodes 1600 are set wide. The four-dimensional coordinate values of each node are defined on the coordinate axes of FIG. 33A.
  • the form of expression of the surface shape of the measurement object 22 expressed by this set of nodes 1600 is called a "mesh".
  • the surface image of the measurement object 22 expressed in the form of a mesh at a predetermined time is referred to as a "mesh frame".
  • multiple mesh frames are defined. That is, in the “mesh frame I (image)" shown in FIG. 33B(a), the four-dimensional coordinates and RGBW intensities of all nodes 1600-1 to 1600-8 are managed.
  • In the mesh frame P (progress) shown in FIG. 33B(b), only the information of the nodes 1600 whose information differs from the mesh frame I described above, or only the difference values for those nodes 1600, is managed.
  • FIG. 33B(b) shows an example in which a positional shift occurs only between node 1_1600-1 and node 4_1600-4.
  • In that case, the management information of mesh frame P consists only of the information of node 1_1600-1 and node 4_1600-4 (or of their difference values from the corresponding information in mesh frame I).
  • FIGS. 33B(a) and 33B(b) illustrate a change in the positions of node 1_1600-1 and node 4_1600-4.
  • The information is not limited to this; even when only a change in color intensity occurs, it is managed as information within the mesh frame P (see the encoding sketch below).
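A hedged sketch of the mesh frame I / mesh frame P relationship: a P frame stores only the nodes whose records differ from the referenced I frame. Modeling node records as dictionaries keyed by node number is an assumption for illustration:

```python
def make_p_frame(i_frame_nodes, current_nodes):
    """Return {node_number: new_record} for nodes that differ from the I frame."""
    return {num: rec for num, rec in current_nodes.items()
            if rec != i_frame_nodes.get(num)}

i_nodes = {1: {"xyz": (0, 0, 0), "rgbw": (10, 20, 30, 60)},
           4: {"xyz": (1, 0, 0), "rgbw": (11, 21, 31, 63)}}
now = {1: {"xyz": (0, 0, 0.2), "rgbw": (10, 20, 30, 60)},   # node 1 moved
       4: {"xyz": (1, 0, 0), "rgbw": (11, 21, 31, 63)}}     # node 4 unchanged
print(make_p_frame(i_nodes, now))   # only node 1 is managed in the P frame
```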
  • FIG. 33C shows an example of management information for the node 1600 defined in units of mesh frames 1610 and 1620.
  • This example of management information is based on the mesh structure of each mesh frame 1610 and 1620 shown in FIG. 33B.
  • This management information includes mesh frame information 1700 that is managed in units of mesh frames 1610 and 1620, and node information 1800 that is managed for each node within the same mesh frame.
  • The time at which the 3D color image sensor 1280 captured each mesh frame 1610 and 1620 is recorded as time-related information (measurement information, etc.) 1702 in the format of "year/month/day/hour/minute/second/decimal fraction of a second".
  • This information may be used for playback synchronization with audio information during playback of this four-dimensional image (video) data, in a manner similar to the PTS (presentation time stamp) used for AV (audio and video) synchronization.
  • multiple types of mesh frames are defined.
  • In the column of frame type information 1704, identification information indicating, for example, whether the frame is an "I frame" or a "P frame" is recorded. As explained in the previous section (Section 9.2) using FIG. 33B, all the node information 1800 can be obtained from a mesh frame I alone. Therefore, this frame type information 1704 provides processing convenience for fast-forward playback 1546.
  • frame numbers 1706 are set for multiple types of mesh frames. Using this frame number 1706 improves the convenience of temporal access processing such as time search to the elapsed time 1498 specified by the end user 1080.
  • the frame number of the mesh frame referenced by the relevant mesh frame is stored in the reference frame 1708 column.
  • Mesh frame I_1610 may be designated as this "reference mesh frame”.
  • The node information 1800 in mesh frame P_1620 holds only difference information from mesh frame I_1610. Therefore, when the relevant mesh frame is mesh frame P_1620, all the node information 1800 can be obtained by combining the node information 1800 of this mesh frame with the node information 1800 of the reference frame 1708 (a merge sketch follows below).
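A minimal sketch of that reconstruction, under the same dictionary-based record assumption as above:

```python
def reconstruct(p_frame_nodes, reference_i_nodes):
    """Rebuild the full node information of a P frame from its reference I frame."""
    full = dict(reference_i_nodes)   # start from the referenced I frame
    full.update(p_frame_nodes)       # overwrite the nodes the P frame changed
    return full

i_nodes = {1: {"xyz": (0, 0, 0)}, 4: {"xyz": (1, 0, 0)}}
p_nodes = {1: {"xyz": (0, 0, 0.2)}}
print(reconstruct(p_nodes, i_nodes))  # all node information of the P frame
```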
  • Difference time information 1710 with reference time indicates a difference value between time-related information 1702 of the applicable mesh frame and time-related information 1702 of the mesh frame (mesh frame I_1610) referred to by the applicable mesh frame.
  • The number of intensity bits (dynamic range) 1714 represents the number of bits used for the various color information intensities 1812 to 1818 in the node information 1800. If the value of the number of intensity bits (dynamic range) 1714 is increased, the various color information intensities 1812 to 1818 can be expressed in fine gradations, but the data size of this management information increases.
  • The number of connected nodes 1720 represents the representation format of the mesh structure.
  • three nodes 1600 constitute a triangular basic cell.
  • In FIG. 33B, the #1 node 1600-1 is connected to five nodes 1600: the #2 node 1600-2, the #4 node 1600-4, the #3 node 1600-3, the #6 node 1600-6, and the #5 node 1600-5. Therefore, in the representation format of the mesh structure in FIG. 33B, the number of connected nodes 1720 is "5".
  • the basic cell is not limited thereto, and for example, the basic cell may be formed of a rectangle. In this case, the number of connected nodes 1720 takes a different value.
  • the total number of nodes 1720 indicates the number of nodes 1600 managed within the corresponding mesh frames 1610 and 1620.
  • the data size 1718 indicates the data size of the node information 1800 regarding the node 1600 managed within the corresponding mesh frame 1610, 1620.
  • If the data size of the node information 1800 for one node 1600 is expressed as P and the total number of nodes as N, then this data size 1718 is given by N × P.
  • All the nodes 1600 in all the mesh frames 1610 and 1620 are assigned serial numbers. All nodes 1600 within the mesh frames 1610 and 1620 are therefore identified and managed by this node number 1802, which is placed at the first position in the node information 1800. Further, in this embodiment, only the nodes 1600 that have changed from mesh frame I_1610 are managed within mesh frame P_1620. Therefore, by simply searching the node numbers 1802 of the nodes 1600 managed within mesh frame P_1620, it is easy to find which nodes 1600 have changed from mesh frame I_1610.
  • For example, #1 node 1600-1 is connected to five nodes 1600: #2 node 1600-2, #4 node 1600-4, #3 node 1600-3, #6 node 1600-6, and #5 node 1600-5.
  • This connection relationship is written in the connected node numbers 1804 of the corresponding node 1600. Using this information, it becomes easy to analyze the detailed uneven shape of the surface of the measurement target 22.
  • The X coordinate value 1806, Y coordinate value 1808, and Z coordinate value 1810 indicate the three-dimensional coordinate values of the corresponding node 1600. The white intensity 1812, red intensity 1814, green intensity 1816, and blue intensity 1818 each indicate color information of the reflected light from the corresponding node 1600.
  • Section 9.4 Mapping processing
  • The virtual space within the time-manipulable service providing domain 1500 is often managed using a four-dimensional map. Managing with a four-dimensional map that adds a time axis to the three spatial dimensions makes it easier to search past history and to search for playback positions using the elapsed time 1498.
  • a common four-dimensional coordinate axis 1828 within the map is set in advance, as shown in the embodiment of FIG. 34A.
  • the Xc axis and Yc axis within this four-dimensional coordinate axis 1828 are aligned with the corner direction of the virtual desk.
  • the Zc axis points above the conference room in the virtual space.
  • the conference holding time is defined along the time axis Tc.
  • a presentation screen 1822 is arranged in this conference room, and an illumination light (lighting condition in the map 1826) is arranged at the lower right of FIG. 34A.
  • FIG. 34B shows the processing steps leading to the generation of the right-eye image (video) and left-eye image (video) to be displayed on the output device 1070, using a four-dimensional map within the time-manipulable service providing domain 1500. From the start (ST20) to the end (ST31) of the process in FIG. 34B, the processing runs from synthesizing three-dimensional images (videos) on the map to displaying the three-dimensional image (video) to the end user 1080. Note that, as shown in FIG. 33B, the management unit for each frame (one image in a video) is called a mesh frame, and mesh frames spanning multiple frames are collectively called four-dimensional mesh data from FIG. 34B onward.
  • First, the series of processing steps will be outlined using FIG. 34B. After that, the specific processing contents will be explained using FIGS. 34C to 34E. Since this series of processing is complex, it may be difficult to understand from the explanation of FIG. 34B alone; FIGS. 34C to 34E, described later, make it easier to understand intuitively.
  • imaging can only be performed from one direction of the measurement target 22 to be imaged.
  • a plurality of measurement units 8 disposed at different positions are used simultaneously, as shown in FIG. 30B, for example, it becomes possible to capture images of the measurement target 22 from multiple angles.
  • The plural pieces of obtained image (video) information are then synthesized in step 21, and four-dimensional mesh data of the same measurement object 22 imaged from multiple angles can be created.
  • this map selection means selection of the conference room you want to use.
  • this map selection step (ST22) is not limited to conference room selection, but includes all map selections used within the time-operable service providing domain 1500. For example, if a sightseeing trip within the service provision domain 1500 in which time can be manipulated is selected, four-dimensional map information of a tourist spot may be selected.
  • This map selection (ST22) also includes designation of a location (occupied location 1830) that the end user 1080 wants to occupy within the selected map.
  • a common four-dimensional coordinate axis 1828 within the map is defined for each map selected by the end user 1080.
  • the 3D color image sensor 1280 that captures the image of the end user 1080 has the unique coordinate axes described with reference to FIG. 33A defined therein. Therefore, in step 24, coordinate conversion processing is performed from the coordinate axes unique to the 3D color image sensor 1280 to the common four-dimensional coordinate axes 1828 in the map in accordance with the occupied location 1830 designated by the end user 1080. Immediately after this, in step 25, color intensity adjustment is performed in accordance with the intra-map lighting conditions 1826.
  • the four-dimensional mesh data of each end user 1080 is synthesized on a map as a rendering process (ST26). Through the rendering process on this map, a plurality of different end users 1080 can gather at a specific location within the service providing domain 1500 where time can be manipulated.
  • step 27 the viewpoint position when displaying to each end user 1080 on the map is set. Then, all the mesh data rendered on the map is again subjected to coordinate transformation in accordance with this viewpoint position and viewpoint direction (ST28). Then, using all the mesh data after this coordinate transformation, in step 29, images (videos) of the right eye and left eye are generated in accordance with the user's eyeball position.
  • the right eye and left eye images (video) generated in step 29 are displayed on the output device 1070 to display the 4D image (video) to the end user 1080.
  • FIG. 34C is an explanatory diagram of the step of converting coordinates from the four-dimensional coordinate axes 1838 defined for each location into the common four-dimensional coordinate axes in the map; it explains in detail the process of step 24 in FIG. 34B.
  • FIG. 34C shows an example of a state in which a specific end user 1080 is seated facing forward in a seat at occupied location #2_1830-2 in a virtual conference room. It is necessary to perform image processing (rendering processing) so that another meeting attendee (another end user 1080) can see this particular end user 1080 seated facing forward.
  • As shown in FIG. 33A, on the local Zl coordinate axis preset in this three-dimensional color image sensor 1280, the direction opposite to the direction of the face of this specific end user 1080 is the "positive" direction. (In other words, the direction in which the face of this particular end user 1080 faces indicates the "negative" direction of the local coordinate axis Zl unique to the end user 1080.)
  • the "negative direction" of the local coordinate axis Zl unique to the end user 1080 coincides with the Yc direction in the common four-dimensional coordinate axes within the map. Therefore, it is necessary to coordinate the coordinate axes of the four-dimensional mesh data obtained by imaging this particular end user 1080 to match the common four-dimensional coordinate axes within the map.
  • the directions of the local coordinate axes Xl, Yl, and Zl in the four-dimensional mesh data collected by imaging the end user 1080 differ for each different occupied location 1830 selected by the end user 1080 within the same map. Therefore, coordinate transformation is performed to match the location of the occupied location 1830 selected by the end user 1080.
  • In this way, the coordinate axes serving as the reference for the four-dimensional mesh data collected by imaging all the end users 1080 are unified into the common four-dimensional coordinate axes Xc, Yc, and Zc within the map (a transformation sketch follows below).
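A hedged sketch of this coordinate conversion (step 24), assuming a rotation matrix plus the translation of the occupied location 1830; the concrete values are illustrative only:

```python
import numpy as np

def to_map_axes(points_local, rotation, translation):
    """points_local: (N, 3) on the sensor's local axes; returns in-map coords."""
    return np.asarray(points_local) @ rotation.T + translation

# Example rotation: the direction the user faces (local -Zl) must coincide
# with +Yc on the common in-map axes.
rot = np.array([[1.0, 0.0, 0.0],
                [0.0, 0.0, -1.0],
                [0.0, 1.0, 0.0]])          # maps local -Zl onto +Yc
seat = np.array([2.0, 1.5, 0.0])           # e.g. occupied location #2_1830-2
pts = np.array([[0.0, 0.0, -1.0]])         # a point 1 m in front of the user
print(to_map_axes(pts, rot, seat))         # -> [[2.0, 2.5, 0.0]]
```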
  • FIG. 34D is an explanatory diagram of the color intensity adjustment method performed in step 25 of FIG. 34B.
  • the light emitting section 2 is placed on the left side of the sphere.
  • the left side of the sphere which is closer to the light emitting section 2, is illuminated by the emitted light from the light emitting section 2 and becomes brighter.
  • the right side of the sphere which is in the shadow of the light emitting part 2, becomes dark.
  • For the nodes 1600 in the bright part, the intensity of each color increases and approaches the color of the emitted light from the light emitting section 2. For the nodes 1600 in the dark, shadowed part, the intensity of each color decreases.
  • Adjusting the color intensity of the nodes 1600 in the four-dimensional mesh data according to the in-map lighting conditions 1826 has the effect of increasing the realism of the image (video) displayed to the end user 1080 (a simple shading sketch follows below).
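A minimal Lambertian-style sketch of this adjustment; the shading model is an assumption (for instance, the shift toward the color of the emitted light is not modeled here):

```python
import numpy as np

# Brightness factor k is 0 for a node facing away from the in-map light
# (shadow side) and 1 for a node facing it directly; 'ambient' keeps shadowed
# nodes from going fully black. All parameters are illustrative.
def shade_node(rgbw, normal, to_light, ambient=0.2):
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    l = np.asarray(to_light, float)
    l = l / np.linalg.norm(l)
    k = max(float(n @ l), 0.0)
    return np.asarray(rgbw, float) * (ambient + (1.0 - ambient) * k)

rgbw = [40.0, 50.0, 60.0, 150.0]
lit_side = shade_node(rgbw, [-1, 0, 0], [-1, 0, 0])      # faces the light
shadow_side = shade_node(rgbw, [1, 0, 0], [-1, 0, 0])    # in shadow
print(lit_side, shadow_side)   # the lit side keeps full intensity
```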
  • FIG. 34E shows a detailed explanatory diagram regarding the setting of the viewpoint position on the map shown in step 27 in FIG. 34B.
  • In a VR/AR wearable display capable of stereoscopic display, the front-back position information of each display object becomes very important. Therefore, depending on the user's display viewpoint position 1860 within the four-dimensional map, the front-back position direction of the stereoscopic display screen changes.
  • the user's display viewpoint position 1860 and the direction in which the viewpoint faces can be set at any position within the service providing domain 1500 where time can be manipulated.
  • Providing this function has the effect of dramatically improving the sense of realism of images (videos) provided to the end user 1080. For example, while watching a soccer match, it is possible to virtually run around with the soccer team members, or to watch the progress of the soccer match from the perspective of the referee.
  • a user's display viewpoint 1860 and a four-dimensional coordinate axis 1868 at a display viewpoint that matches the direction of the user's viewpoint are defined.
  • the depth direction (back and forth direction) for stereoscopic display to the end user 1080 is set to the Zd axis.
  • the horizontal direction as viewed from the end user 1080 is set to the Xd direction.
  • In step 28 of FIG. 34B, all the four-dimensional mesh data constructed on the common four-dimensional coordinate axes in the map are coordinate-transformed into four-dimensional mesh data on the four-dimensional coordinate axes 1868 at the display viewpoint (a viewpoint-transform sketch follows below).
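A hedged sketch of steps 27 to 29: re-express map coordinates on the display-viewpoint axes 1868 (Zd = depth, Xd = horizontal), then project a left-eye and a right-eye image whose horizontal parallax depends on depth. The eye separation and the pinhole projection are illustrative assumptions:

```python
import numpy as np

def to_view_axes(points_map, view_rot, view_pos):
    """Re-express in-map points on the display-viewpoint axes 1868."""
    return (np.asarray(points_map) - np.asarray(view_pos)) @ view_rot.T

def stereo_pair(points_view, eye_sep=0.065, focal=1.0):
    """Project view-space points for the left and right eye (pinhole model)."""
    images = []
    for dx in (-eye_sep / 2.0, +eye_sep / 2.0):      # left eye, right eye
        x, y, z = points_view[:, 0], points_view[:, 1], points_view[:, 2]
        images.append(np.stack([focal * (x - dx) / z, focal * y / z], axis=1))
    return images

# Map Yc (forward) becomes the depth axis Zd; Xc stays the horizontal Xd.
rot = np.array([[1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, 1.0, 0.0]])
view = to_view_axes([[0.0, 2.0, 1.6], [0.5, 3.0, 1.6]], rot, [0.0, 0.0, 1.6])
left, right = stereo_pair(view)
print(left - right)   # horizontal parallax shrinks as depth increases
```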
  • For the stereoscopic image (video) generated in ST29 of FIG. 34B, the end user 1080 may request an enlargement (zoom) function for the displayed stereoscopic image (video). With this configuration, an effect is created whereby it becomes easier to respond quickly to the user's enlargement (zoom) request.
  • Non-optical sensors, 60... application field (various optical application fields) adaptation section, 62... characteristic analysis/analysis processing section, 64... manufacturing suitability control/processing section, 66... monitoring control/management section, 68... treatment suitability control/processing section, 70... medical/welfare related inspection processing section, 72... information providing section, 74... collected information storage section, 76... other various application compatible sections, 78... long axis direction, 80... service provision method, 82... predetermined light utilization method, 84... optical measurement unit, 86... predetermined light generation method, 88... minor axis direction, 90... predetermined optical member, 92... input surface of the predetermined optical member, 94... output surface of the predetermined optical member, 96... perpendicular on the incident surface side, 98... perpendicular on the exit surface side, 110... waveguide element (optical fiber/optical waveguide/light guide), 112... core region, 114... clad region, 116... center-of-gravity position of intensity distribution, 118... light reflecting surface, 120... collimating lens or cylindrical lens, 122... macroscopic entrance plane of the predetermined optical member, 124... position within the optical fiber cross section, 126... perpendicular to the macroscopic entrance surface, 128... equiphase surface, 130... wedge-shaped prism, 132... position in the core region, 134... position on the light collecting surface, 136... light amplitude distribution, 138... refractive index distribution, 140... multi-segment Fresnel prism (predetermined optical member), 142... Fresnel lens or fly-eye lens, 144... imaging lens, 146... stray light component, 148... multi-divided light reflecting element (Fresnel type reflector), 152... electric field distribution, 200... initial light, 202... first light, 204, 208... second light, 206... third light, 210... optical property conversion element, 212... first region, 214... second region, 216... second optical path, 226... third optical path, 230... predetermined light, 240... optical operation location, 300... line sensor (one-dimensional array detection cell array), 310... pinhole, 312... half mirror plate, 314, 316... folding mirror plate, 318... collimating lens, 320... spectroscopic element (reflection type blazed grating), 322, 324... Fθ lens or collimating lens, 330... condensing lens, 400... initial wave sequence, 402... phase asynchronization, 406... after wave field division, 408... delay after division, 410... light synthesis (optical combining) processing, 420... light intensity averaging, 470... light emitting section, 480... optical property changing section.


Abstract

The present invention causes the same light emission unit to radiate first light and second light through a prescribed optical member such that: an incident surface-side perpendicular line orthogonal to the incident surface of the prescribed optical member is defined; the travel direction of the first light has an inclination angle with respect to the incident surface-side perpendicular line; and the travel direction of the second light is set to be inclined with respect to the travel direction of the first light or the optical path of the second light and the optical path of the first light are set to be different inside the prescribed optical member to modify optical characteristics between the first light and the second light.

Description

Predetermined light generation method, optical property changing unit, light source, predetermined light utilization method, detection method, imaging method, display method, optical measurement unit, optical device, service provision method, service provision system

This embodiment relates to the technical field of controlling the characteristics of light itself, the field of measurement using light, the field of utilizing light, and the field of providing services using light.

The characteristics of light itself are not limited to wavelength characteristics, intensity distribution characteristics, and phase distribution characteristics (including wave front characteristics); light is known to have various other attributes such as directivity and coherence.

As application fields using light, there are known imaging technologies in which an image sensor is placed at the imaging position of a target object, as well as fields that utilize spectral characteristic measurement, length measurement, and display technologies for the object to be measured. Furthermore, application fields such as imaging spectroscopy, which combines the above imaging technology with spectral characteristic measurement, and three-dimensional measurement, which combines the imaging technology with length measurement, have recently developed. There are also application fields that utilize measurement results of the amounts of reflection, transmission, absorption, and scattering of light, or their temporal changes.

In addition to application fields that utilize such detection and measurement results obtained with light, display technology, optical processing technology, light control technology, and optical communication technology that utilize light are also known.

As a field of service provision using light, technical fields are known that provide services to users by utilizing various information obtained in the above application fields using light. The provision of services to users is not limited to providing information to users or providing and controlling an optimal user environment; it includes all service provision methods, up to bidirectional (mutual) service provision with the user.

In this service provision field, there is also a method of providing services to users by utilizing activities in a virtual space formed on a network. For example, optical measurement data on the real world is used to imitate a pseudo real world in cyberspace. A form of service provision is also known in which the content of activities, such as attractions occurring in cyberspace, is optically displayed to the user.

Patent Document 1: Japanese Patent Application Publication No. 2014-222239. Patent Document 2: Japanese Patent Application Publication No. 2000-206449. Patent Document 3: Japanese Patent No. 4657379.

Various (desirable) optical characteristics are required in each of the above light-related technical fields. It is therefore desirable to provide optical characteristics suited to the requirements of each light-related technical field or each optical application field.

For example, high-quality optical characteristics are required in fields such as display technology, imaging technology, light control technology, and optical communication technology (information transmission mediated by light). In recent years, the display and imaging fields have tended to demand expression with a higher sense of realism than before. Three-dimensional expression and sharp image expression are increasingly used as such highly realistic expression methods. As one means of generating such high-quality optical characteristics, or optical characteristics enabling sharp expression, there is a method of providing light with little optical interference noise, or light in which optical interference noise hardly occurs. Realizing such three-dimensional expression and sharp image expression requires high-speed processing and transmission (transfer) of enormous amounts of data (information). It is also necessary to provide the high signal quality that makes such large-capacity data processing and data transmission possible.

To realize highly accurate detection, measurement, imaging, or light control using light, high reliability and high detection/measurement accuracy are required. As one means of realizing such high reliability and high accuracy of light, a method of reducing optical interference noise may be provided.

In some cases, smaller and lighter optical devices or optical measurement units are also desired. Further, in connection with the above, the provision of high-quality services to users may be desired.

Beyond the above, it is also possible to provide a predetermined light generation method having characteristics that are desirable, or at least comparatively suitable, for various light-based application fields; to provide an optical characteristics conversion unit; or to provide a light source, a predetermined light usage method, a service provision method, or a service provision system using them.

It is also possible to provide an imaging method, a detection method, or a display method that uses the above predetermined light, or to provide an optical measurement unit or an optical apparatus employing those methods.

Patent Document 1 varies the tilt angle of irradiation for the light emitted from each of a plurality of light sources in order to improve inspection accuracy on the surface of a target object (a semiconductor wafer). Using multiple light sources, however, tends to make the apparatus complicated and large. If a single light source is used instead, the phase difference between the irradiating beams at different tilt angles is permanently fixed, which raises the problem of increased optical interference noise.

Patent Document 2 describes a method of reducing optical interference noise by passing light that has traversed a transparent optical element, whose thickness differs from region to region, through a common optical fiber. To realize highly accurate detection, measurement, and imaging, however, a further reduction of optical interference noise is desired.

In Patent Document 3, a solid-state image sensor is given an in-pixel memory so that the exposure time of each pixel can be set independently, enabling ultra-high-speed imaging. However, if optical interference noise mixes into the light irradiating the measurement object or into the light reflected from it, the quality of the captured image deteriorates substantially.

The explanation so far has described the need to provide light with optical characteristics suited to the requirements of each optical application field, enabling highly realistic expression and high-precision detection/measurement/imaging/light control. The benefit to users is not limited to supplying light with good optical characteristics, however; convenience, high added value, high reliability (high precision), and high expressive power (for example, a sense of presence or reality) may also be offered in the optical application and service provision fields by other means.

That is, user convenience may be improved by providing a hands-free environment, or by providing functions that add to, or automatically select from, the information the user takes in visually. For example, the user's eye movements, eyelid and eyebrow movements such as blinking or winking, facial expressions, speech content, body movements such as gestures, and finger or hand (arm) movements may be used for input or operation, thereby providing highly convenient services or service systems and optical apparatuses with high operability.

A method (function, device, or element) for collecting a user's biometric information in real time, with high precision and in a simple manner, may also be provided to realize convenience, high added value, and high reliability for the user. For example, the collected biometric information may be used for identification or authentication (biometric authentication) to prevent fraud or improper operations not intended by the user, so that highly reliable services can be provided. Beyond that, the user's mood and state of health may be inferred from facial expressions, voice, movement characteristics, breathing, pulsation, changes in blood components, and so on, and a comfortable service or service system may be provided by offering the user an environment suited to the inference results.

Alternatively, as one example of a way to offer users high expressive power (for example, a sense of presence or reality), an environment (service system) capable of sharp and fine three-dimensional representation may be provided.

For a first light and a second light that are emitted from the same light emitting section and pass through a predetermined optical member:
an entrance-side normal perpendicular to the entrance surface of the predetermined optical member and an exit-side normal perpendicular to its exit surface are defined;
the traveling direction of the first light has a tilt angle with respect to at least one of the entrance-side normal and the exit-side normal; and
the traveling direction of the second light is tilted with respect to the traveling direction of the first light, or the optical path of the second light within the predetermined optical member is made different from that of the first light, thereby changing the optical characteristics between the first light and the second light.

FIG. 1 is a configuration diagram showing an example of the overall system.
FIG. 2 is a diagram explaining the relationship of the (desirable) optical characteristics required in various technical fields.
FIG. 3 is a diagram explaining the mechanism by which a collection of different-wavelength lights forms a wave train.
FIG. 4 is a diagram explaining an experimental system for measuring wave-train characteristics using the optical interference phenomenon.
FIG. 5 explains the interference characteristics between single wave trains, one of which is delayed.
FIG. 6A shows experimental results of measuring wave-train characteristics.
FIG. 6B is a diagram explaining the relationship between successive wave trains predicted from the experimental results of FIG. 6A.
FIG. 7A is a diagram explaining a basic optical system layout using an optical characteristics conversion element.
FIG. 7B explains a specific structural example of the optical characteristics conversion element.
FIG. 7C is a diagram explaining a method by which the optical characteristics conversion element exploits the phase asynchrony between successive wave trains.
FIG. 7D explains another embodiment of the specific structure of the optical characteristics conversion element.
FIG. 7E explains another application example of the specific structure of the optical characteristics conversion element.
FIG. 8 is a diagram explaining the effect of the optical characteristics conversion element in reducing noise within spectral characteristics.
FIG. 9A is a diagram explaining a basic optical configuration used frequently in this embodiment.
FIG. 9B is a diagram explaining a specific embodiment of the basic optical configuration.
FIG. 9C is a diagram explaining another specific embodiment of the basic optical configuration.
FIG. 9D explains a simple method of cutting wavefront continuity at each traveling-wave cross-section position.
FIG. 9E explains the difference in wavefront continuity between a general imaging lens and a Fresnel lens.
FIG. 9F is a diagram explaining an embodiment applying wavefront discontinuity.
FIG. 9G is a diagram explaining another embodiment applying wavefront discontinuity.
FIG. 9H explains the relationship between the microscopic and macroscopic reflection directions in a multi-segment light reflection element.
FIG. 9I is a diagram explaining an applied form of FIG. 9G or FIG. 9H.
FIG. 10A is a diagram explaining the relationship between biological components and their corresponding absorption wavelengths.
FIG. 10B is a diagram explaining the structure of the human eyeball.
FIG. 11A is a diagram explaining an embodiment of the internal structure of a hybrid light emitting section.
FIG. 11B is a diagram explaining the internal structure of a near-infrared emitting phosphor.
FIG. 11C is a diagram explaining a method for producing the near-infrared emitting phosphor.
FIG. 12A is a diagram explaining an embodiment of an integrated structure of the light source section and the measurement section.
FIG. 12B is a diagram explaining another embodiment of an integrated structure of the light source section and the measurement section.
FIG. 13 is a diagram explaining, from a different perspective, the principle behind the causes of optical noise in this embodiment.
FIG. 14A is a diagram explaining an example of an optical noise reduction method.
FIG. 14B is a diagram explaining another example of an optical noise reduction method.
FIG. 14C is a diagram explaining an application example of an optical noise reduction method.
FIG. 14D shows the optical noise reduction effect obtained when using the application example explained in FIG. 14C.
FIG. 15A is a diagram comparing the features of single-mode and multimode optical fibers.
FIG. 15B is a diagram explaining the types and features of multimode optical fibers.
FIG. 15C is a diagram explaining the modes of light passing through the core region of an optical fiber.
FIG. 16A is a diagram explaining the generation of an intensity-centroid shift using mode addition within an optical fiber.
FIG. 16B is a diagram explaining an optical noise reduction method using the intensity-centroid shift.
FIG. 16C is a diagram explaining the relationship between the optical characteristics conversion element and optical noise reduction.
FIG. 17A is a diagram explaining the relationship between the angle of incidence, measured from the normal to the entrance surface of the predetermined optical member, and the optical noise reduction effect.
FIG. 17B is a diagram explaining the relationship between the number of angular divisions of the optical characteristics conversion element and the optical noise reduction effect.
FIG. 18A is a diagram explaining an embodiment of the optical arrangement within the light source section.
FIG. 18B is a diagram explaining an embodiment concerning mainly the electronic circuit arrangement within the light source section.
FIG. 19A is a diagram explaining a data format representing a time-varying light emission pattern.
FIG. 19B is a diagram explaining the communication control sequence between the host and the light source section.
FIG. 19C is a diagram explaining an example control signal format for controlling the light emission start timing.
FIG. 20A is a diagram showing the extraction and flow of information in this embodiment.
FIG. 20B is a classification diagram showing the information contents extracted in this embodiment.
FIG. 20C shows a disturbance-noise removal method for each measurement location/content within the measurement object.
FIG. 21A shows the basic data processing method of this embodiment for spectral characteristics and image signals that change over time.
FIG. 21B shows another embodiment of the data processing method for spectral characteristics and image signals that change over time.
FIG. 21C shows an application example of the data processing method for spectral characteristics and image signals using exposure by pulsed light emission.
FIG. 22 is a diagram explaining the features of the charge accumulation type signal receiving section.
FIG. 23A is a diagram explaining an example of signal processing (data processing) leading to the generation of a reference signal after DC component removal.
FIG. 23B is a diagram explaining an example of the second information extraction method for each wavelength or each pixel.
FIG. 24A is a diagram explaining the experimental optical system used in a signal processing experiment with the charge accumulation type signal receiving section.
FIG. 24B is a diagram explaining the spectral characteristics of the irradiation light and the detection light for the measurement object.
FIG. 25A shows the temporal change characteristics of the detected light amount for each different wavelength.
FIG. 25B shows the temporal change characteristics of the detected light amount for another set of different wavelengths.
FIG. 26A shows an enlarged view of the interior of the image sensor used in this embodiment.
FIG. 26B is a partial explanatory diagram of the drive circuit within the image sensor used in this embodiment.
FIG. 26C is a diagram explaining the operation timing within the drive circuit described in FIG. 26B.
FIG. 27A is a diagram explaining an embodiment in which the light source section and the measurement section are integrated.
FIG. 27B is a diagram explaining the internal structure of the optical device when a 3D color image sensor is used in the measurement section.
FIG. 28A is a diagram explaining a 3D color image (video) collection procedure.
FIG. 28B is a diagram explaining a method of imaging the reflected light pattern at the light source wavelength over the entire measurement distance range.
FIG. 28C is a diagram explaining a method of imaging the reflected light pattern at the light source wavelength for each measurement distance range.
FIG. 28D is a diagram explaining the timing of light emission by the light source section and exposure by the measurement section during detailed distance measurement.
FIG. 28E is a diagram explaining a distance measurement method combining multiple pixels.
FIG. 28F shows a method of combining light emission and exposure that reduces the influence of speckle noise.
FIG. 28G is a diagram explaining the signal detection state within the measurement section when monitoring the amount of speckle noise.
FIG. 28H is a diagram explaining the signal detection state within the measurement section when reducing the influence of speckle noise.
FIG. 29A is a diagram explaining the structure of a linear variable bandpass filter.
FIG. 29B is a diagram explaining another embodiment combining the light source section and the image sensor.
FIG. 30A is a diagram explaining the relationship between input devices and output devices for a predetermined service providing domain in cyberspace.
FIG. 30B explains an example form of input/output devices for a predetermined service providing domain in cyberspace.
FIG. 30C explains another example form of input/output devices for a predetermined service providing domain in cyberspace.
FIG. 31A is a diagram explaining a method of matching sizes between real space and the content displayed in cyberspace.
FIG. 31B shows an example authentication environment, using biometric measurement, for joining a predetermined service providing domain.
FIG. 31C is a diagram explaining a detailed authentication example using biometric measurement.
FIG. 32 is a diagram explaining an embodiment of service provision in cyberspace in which time can be manipulated.
FIG. 33A is a diagram explaining the directions of the four-dimensional coordinate axes as seen from the image sensor.
FIG. 33B is a diagram explaining the four-dimensional mesh structure of this embodiment that captures changes in the surface shape of the measurement object.
FIG. 33C is a diagram explaining the data structure of the four-dimensional mesh in this embodiment.
FIG. 34A is a diagram explaining the mapping concept used in a time-manipulable service providing domain in cyberspace.
FIG. 34B shows the procedure for generating the four-dimensional image (video) displayed to the user using the mapping technique.
FIG. 34C shows an example method of rendering and coordinate-transforming an individual four-dimensional mesh structure on the map.
FIG. 34D is a diagram explaining a method of adjusting color intensity to match the lighting conditions within the map.
FIG. 34E is a diagram explaining a coordinate conversion method for three-dimensional display to the user.

The prescribed light generation method, optical characteristics modification unit, light source, prescribed light usage method, detection method, imaging method, display method, optical measurement unit, optical apparatus, service provision method, and service provision system of this embodiment are described below with reference to the drawings.

To make the overall explanation easier to follow, a table of contents for this embodiment is given first.

Chapter 1: Overview of the system used in this embodiment and the optical characteristics required in each optical application field
Section 1.1: Overview of the system used in this embodiment
Section 1.2: Overview of the optical characteristics required in each optical application field
Chapter 2: Basic optical characteristics used in this embodiment
Section 2.1: Wave trains composed of collections of different-wavelength light
Section 2.2: Relationships between different wave trains and optical interference characteristics
Section 2.3: Generation of special optical characteristics using wave-train properties
Section 2.4: Optical noise reduction effect appearing in spectral characteristics
Chapter 3: Embodiments that change the optical characteristics between two waves emitted from the same light emitting section
Section 3.1: Basic optical configuration used in this embodiment
Section 3.2: Specific examples of the basic optical configuration
Section 3.3: Embodiments of other optical configurations
Chapter 4: Hybrid light source section and its forms of use
Section 4.1: Relationship between biological components and absorption wavelengths
Section 4.2: Example internal structure of the hybrid light source section
Section 4.3: Material and structure of the near-infrared emitting phosphor and its production method
Section 4.4: Embodiment in which the light source section and measurement section are integrated and miniaturized
Chapter 5: Other embodiments for optical noise reduction
Section 5.1: Features of speckle noise patterns
Section 5.2: Features of various optical fibers and the characteristics of light passing through the core region
Section 5.3: Speckle-noise-pattern reduction method using mode characteristics within an optical fiber
Section 5.4: Optical noise reduction effect
Chapter 6: Light source section capable of emitting light with an arbitrary waveform in the time-series direction
Section 6.1: Optical system arrangement with a high-speed control function for the light emission amount
Section 6.2: Example internal structure of the light source section
Section 6.3: Example format for setting the emission waveform
Section 6.4: Example communication control of light emission
Chapter 7: Combining optical noise reduction with electrical noise reduction
Section 7.1: High-precision measurement methods in optical application fields
Section 7.2: Various embodiments applying lock-in amplification technology
Section 7.3: Structure of the charge accumulation type signal receiving section
Section 7.4: Example signal processing of charge accumulation type signals
Section 7.5: Example results of a demonstration experiment
Chapter 8: Embodiments combining the light source section with a measurement section containing an image sensor
Section 8.1: Internal structure of the image sensor and data acquisition timing in this embodiment
Section 8.2: Embodiments combining the image-sensor-equipped measurement section with the light source section
Section 8.3: Distance measurement procedure
Section 8.4: Method for reducing the influence of laser-induced speckle noise on measurement
Section 8.5: Hyperspectral detection method using irradiation wavelength control
Chapter 9: Service provision
Section 9.1: Example forms of input/output devices during service provision
Section 9.2: Example forms of service provision
Section 9.3: Example data format of the four-dimensional data obtained by measurement
Section 9.4: Mapping processing

Chapter 1: Overview of the system used in this embodiment and the optical characteristics required in each optical application field
Section 1.1: Overview of the service providing system used in this embodiment

FIG. 1 shows the service providing system 14 used in this embodiment. Light emitted from the light source section 2 irradiates the object 20 via the light propagation path 6, and the light obtained from the object 20 enters the measurement section 8, again via the light propagation path 6. Alternatively, the light emitted from the light source section 2 may enter the measurement section 8 directly via the light propagation path 6. In another embodiment, the light emitted from the light source section 2 may reach the display section 18 via the light propagation path 6, and predetermined information may be displayed on the display section 18.

The measuring device 12 in this embodiment comprises the light source section 2, the measurement section 8, and the in-system control section 50. Outside the measuring device 12 there is an application field (various optical application fields) adaptation section 60, and each of the parts 62 to 76 within it can individually exchange information with the in-system control section 50.

For example, the information obtained as measurement results in the measurement section 8 is used in cooperation with the parts 62 to 76 in the application field (various optical application fields) adaptation section 60 to provide services to the user.

The service providing system 14 in this embodiment comprises the measuring device 12, the application field (various optical application fields) adaptation section 60, and the external system 16, and is structured so that it can provide users with services of every kind. The part of the service providing system 14 that remains after excluding the external system 16 functions on its own as the optical device 10.

As a concrete application, consider the provision of telemedicine services. In this case the medical/welfare-related examination processing section 70 operates, and the information obtained from the measurement section 8 can be used to assist remote diagnosis. Specifically, a blood glucose level obtained by analyzing the data collected from the measurement section 8 can be used in diagnosing diabetes. The pulsation waveform obtained at the same time may also be used in diagnosing arrhythmias related to heart disease.

As an example, consider the processing performed when an arrhythmia is detected in the pulsation waveform while a specific user's blood glucose level is being measured. The user's pulsation waveform is extracted in the signal processing section 42 and transferred to the characteristic analysis/analysis processing section 62 via the signal/information conversion section (including decoding/demodulation processing) 44 and the in-system control section 50. The characteristic analysis/analysis processing section 62 then analyzes the pulsation waveform and performs pattern matching against standard waveforms and lesion waveforms. As a result, an arrhythmia can be detected and the defective region within the heart can be predicted. The arrhythmia detection result and the intracardiac defect prediction information are then transmitted to the medical/welfare-related examination processing section 70 via the in-system control section 50.
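The pattern matching algorithm itself is not specified here. As a minimal illustrative sketch only (all function and variable names below are hypothetical, not part of this embodiment), one common way to compare a measured pulsation waveform against standard and lesion reference waveforms is peak normalized cross-correlation:

    import numpy as np

    def normalized_xcorr(measured: np.ndarray, template: np.ndarray) -> float:
        """Peak normalized cross-correlation between a measured 1-D waveform
        and a reference template; values near 1.0 indicate a close match."""
        m = (measured - measured.mean()) / (measured.std() + 1e-12)
        t = (template - template.mean()) / (template.std() + 1e-12)
        return float((np.correlate(m, t, mode="valid") / len(t)).max())

    def classify_waveform(measured, templates):
        """Return the best-matching reference label and all match scores.
        templates: dict mapping labels (e.g. 'standard', 'lesion A') to
        1-D numpy arrays holding reference pulsation waveforms."""
        scores = {name: normalized_xcorr(measured, t) for name, t in templates.items()}
        return max(scores, key=scores.get), scores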

Next, the medical/welfare-related examination processing section 70 provides information (for example, by e-mail) to the user's family doctor in the external system 16 via the information transmission path 4. If the user has concluded a contract in advance with a particular insurance company (non-life insurer), the medical/welfare-related examination processing section 70 automatically provides information (for example, by e-mail) to that company as well. As a result, a service can be offered that handles burdensome procedures, such as arranging hospitalization and processing reductions in treatment costs, on the user's behalf.

When the user is convalescing or undergoing treatment for a specific disease, the treatment adaptation control/processing section 68 may operate so that a doctor can remotely monitor the progress of treatment. That is, by tracking temporal changes in the blood glucose level and the pulsation waveform, a distant doctor can follow the progression of the condition and the course of healing.

The user's health information may also be used in providing any other service. For example, when a contract for non-life insurance such as automobile insurance or unemployment insurance is concluded, the insurer may use the optical device 10 to examine the health condition of the prospective policyholder, and a service may be provided that sets the compensation amount on the basis of the information obtained from the optical device 10.

Similarly, the information obtained from the optical device 10 may be used, for example, in setting the interest rate or loan conditions when the user deposits money in a bank or when the bank extends a loan to the user (or to a company the user manages).

As another example of service provision, the information obtained from the optical device 10 may be used in education. For example, a student's degree of concentration and drowsiness can be predicted from the pulse rate, respiration rate, eye movements, and eyelid movements. Using the concentration and drowsiness information obtained from the optical device 10, the content of a lecture can be adjusted as appropriate, thereby improving educational efficiency.

A further application of this form of service provision is anomaly monitoring in public facilities. When people become tense or excited, their heartbeat (pulse rate) tends to rise. A terrorist immediately before committing an attack is inwardly in such a tense or excited state, and the face is often stiffened by the tension. Therefore, if surveillance cameras are operated remotely and the pulse rates of many unspecified people are measured simultaneously, it becomes possible to single out persons whose pulse rate is abnormally high and whose facial expression muscles are contracted.

In this embodiment the optical device 10 may also play the role of a gateway to cyberspace by using the information transmission path 4 (that is, the optical device 10 can connect directly to cyberspace via the information transmission path 4). As examples of services corresponding to this gateway role, any service within cyberspace may be provided, including personal authentication on entry, searching for and guiding each user to the place best suited to them after entry, acting on the user's behalf in active operations within cyberspace, and security protection.

In this embodiment, blood vessel patterns or fundus patterns at any part of the body of a user of the optical device 10 (or of the service providing system 14 within it) can be automatically captured and matched, and face or body-shape authentication can be performed with a visible light camera built into the measurement section 8. The user-related information collected by the optical device 10 can therefore be used to provide a personal authentication service on entry to cyberspace. A personal authentication service may also be provided by any other method (for example, voiceprint detection).

As one example of the physical form of the optical device 10 serving as this gateway to cyberspace, the camera section of a personal computer or a mobile terminal (smartphone, tablet, and the like) may be used.

Furthermore, a wearable terminal may be used as the physical form of the display section 18 in the optical device 10. The wearable terminal may take any physical form, such as glasses, a hat, a helmet, or a bag.

For example, glasses-type devices that realize VR (Virtual Reality) or AR (Augmented Reality), and other types worn directly by the user, have places that touch the user's skin directly. At least a part of the measurement section 8 of the optical device may be arranged in such a region of direct skin contact.

If blood analysis is used to measure the content of specific blood components such as noradrenaline, the psychological state of the user wearing the terminal, for example a tense or excited state, can be estimated. The user's psychological state can also be estimated from which facial expression muscles are contracting. Further, as mentioned above, measurement subjects in a tense or excited state can be identified from the pulse rate of a person captured by a remote camera. Moreover, in this embodiment the activity of individual neurons within the user's head can be monitored. The optical device 10 therefore enables an efficient approach from the user to cyberspace.

With conventional technology, a user's active engagement with cyberspace required, for example, vocalization or finger operations such as key input, so approaching cyberspace took a great deal of time. In this embodiment, by contrast, the optical device 10 predicts the user's psychological state and intention automatically and at high speed, allowing quick and appropriate action in cyberspace. This embodiment can therefore provide the information the user desires (information providing section 72) and handle cyberspace at high speed, without requiring burdensome actions such as speaking or finger movements.

In addition, the various non-optical sensors 52 in the optical device 10 can be used to give the user a high level of convenience in dealing with cyberspace. Consider, for example, a gyroscope and an acceleration sensor arranged among the non-optical sensors 52 to detect movements of the user's head or of a part of the body (such as a hand or finger). When the user shakes their head while an image (video) is displayed on the display section 18 through a glasses-type VR or AR terminal, the displayed screen rotates accordingly; when the user leans forward or backward, the view moves forward or backward on the screen. If, however, the user attempts to move at high speed within cyberspace, as in a game, the response speed of the gyroscope and acceleration sensor reaches its limit. In that case, predicting the user's psychological state and intention and responding quickly and appropriately in cyberspace greatly improves the user's convenience there.

An example of service provision in which the information providing section 72, the collected information storage section 74, and the signal processing section 42 in the service providing system 14 cooperate is as follows. Consider a service that displays a menu screen on the VR or AR screen of a wearable terminal (for example, glasses or a helmet) worn by the user. If the optical device 10 estimates the user's degree of favorable impression (or of discomfort) at the same time as detecting the user's line of sight, a screen the user prefers can be displayed instantly (in a short time).

Furthermore, when, for example:
1. a VR or AR wearable terminal is incorporated in the display section 18;
2. the gyroscope and acceleration sensor among the various non-optical sensors 52 detect movements of the user's head and fingers (or hands);
3. the signal processing section 42 outputs information about the user's body using the biological signals measured by the measurement section 8; and
4. the in-system control section 50 integrates and uses the above information,
an identity in cyberspace corresponding to the user of the optical device 10 is formed. Any service can then be provided to this in-cyberspace identity. Beyond that, a robot placed in real space can be operated via cyberspace to provide further services to the user.

For example, a tourism service can be provided to the user by operating a robot capable of autonomous walking installed at a remote location. Likewise, nursing care and similar services can be provided from afar by operating an autonomously walking robot installed in a hospital or care facility. With conventional technology, operating the in-cyberspace identity or a robot in real space required voice input or movements of the user's fingers (or hands). Using the optical device of this embodiment, such burdensome vocalization and finger movements become unnecessary and high-speed operation becomes possible, greatly improving the convenience of the services provided in this embodiment.

As another embodiment of service provision using cyberspace, the system may be used for marketing. For example, while predetermined images or videos are displayed on a VR or AR screen via the information providing section 72, the optical device 10 may continually estimate the user's emotions and intentions. Images, videos, and sounds displayed at moments when the user shows a favorable impression or interest are stored as appropriate in the collected information storage section 74. The external system 16 collects that stored information (images, videos, audio) via the information transmission path 4 at suitable times. The information collected in the external system 16 may then be analyzed to extract products with strong purchase appeal, and that information may be provided, for a fee, to the companies selling the corresponding products.

In service provision in cyberspace under this embodiment, the management of personal information is extremely important, and the personal information management service itself is therefore a very important service. When a specific user is active within cyberspace after entering it, an account ID (identification) is used to identify each user. If the user's health information and preference information obtained from the optical device 10 become linked to that account ID, they constitute personal information.

As an example of service provision in this embodiment, a personal information management agent may be made resident in the collected information storage section 74 or in the characteristic analysis/analysis processing section 62. Information such as which of the user's facial expression muscles are contracting, the content ratio of each blood component, and which neurons are active (firing; nerve impulses) is analyzed within the signal processing section 42. Advanced judgments that use this information, such as estimation of the user's emotions, preferences, and intentions, are made within the characteristic analysis/analysis processing section 62, and the information obtained there is stored as appropriate in the collected information storage section 74. Then, in response to requests from the external system 16, the necessary information is transmitted to the external system 16 via the information transmission path 4.

In the service provision example of this embodiment, the personal information management agent links each item of information obtained by the characteristic analysis/analysis processing section 62 with information on the external range to which it may be disclosed. Disclosable-range information is thus set for every item stored in the collected information storage section 74, and for each information transmission request from the external system 16 the personal information management agent judges whether transmission to the outside is permitted. Performing the personal information management service inside the optical device 10 in this way enables highly reliable protection of personal information.
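The internal mechanics of this agent are left open here. As a minimal sketch of one way such per-item gating could be structured (all type and function names below are hypothetical, not part of this embodiment), each stored item carries a disclosure-scope tag that the agent checks against the requester's clearance on every transmission request:

    from dataclasses import dataclass
    from enum import Enum

    class Scope(Enum):
        PRIVATE = 0      # never leaves the optical device 10
        CONTRACTED = 1   # e.g. family doctor or contracted insurer
        PUBLIC = 2       # freely transmittable

    @dataclass
    class StoredItem:
        label: str       # e.g. "estimated emotion", "blood component ratio"
        value: object
        scope: Scope     # the disclosure-range information linked by the agent

    def may_transmit(item: StoredItem, requester: Scope) -> bool:
        """Per-request check: release an item only when the requester's
        clearance covers the item's disclosure scope."""
        return requester.value >= item.scope.value

    # A request from a contracted insurer cannot pull a PRIVATE item.
    item = StoredItem("estimated emotion", "calm", Scope.PRIVATE)
    assert may_transmit(item, Scope.CONTRACTED) is False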

As another applied example of service provision in this embodiment, the system may be used as a tool for building artificial intelligence (for training it). The artificial intelligence here may, for example, use the multi-input, multi-output parallel processing scheme with a learning function employed in deep learning and quantum computer technology.

Examples of complex analysis and processing for which multi-input, multi-output parallel processing is suited include image analysis and image understanding, language processing and language understanding, and advanced judgments adapted to complex situations. Such tasks are given simultaneously both to the human serving as the measurement object 22 and to the artificial intelligence. The answer produced by the human can then be treated as the correct answer, and learning feedback can be applied to the artificial intelligence so that its output approaches that answer.

These tools may also be run in cyberspace. In that case, the artificial intelligence to be trained is installed in advance on the external system 16, and the correct answers produced by the human can be communicated to it from the optical device 10 (or the application field adaptation section 60) via the information transmission path 4.

Service provision is not limited to the above examples; any service may be provided in which the optical device 10 is connected, via the information transmission path 4, to a cyberspace built on the external system 16.

Section 1.2: Overview of the optical characteristics required in each optical application field

FIG. 2 lists the (desirable) optical characteristic contents 102 required in each optical application field 100. This embodiment can satisfy, in particular, the required (desirable) optical characteristic contents 102 enclosed in the rectangular frames.

As FIG. 2 shows, the optical application fields 100 to which this embodiment applies are wide-ranging. They are not limited to those shown, however: every application field 100 that involves light in any form (including display using light) is a target of this embodiment.

Chapter 2: Basic optical characteristics used in this embodiment
Section 2.1: Wave trains of light containing different wavelengths

With respect to the wavelengths contained in light, a general distinction is sometimes drawn between multi-wavelength light (panchromatic light) and single-wavelength light (monochromatic light). Laser light in particular is commonly regarded as single-wavelength light. Strictly speaking, however, every light contains components of different wavelengths. In gas lasers and solid state lasers, for example, the wavelength width of the emitted light is comparatively narrow. Even among lasers, though, the light emitted by a semiconductor laser (laser diode) has a comparatively broad wavelength width, often with a half width of wavelength of about 2 nm.

FIG. 3 shows the characteristics that arise when the wavelength components contained within a wavelength width Δλ are gathered together. FIG. 3(c) represents the propagation of light at the center wavelength λ0; FIGS. 3(a) and 3(e) represent the propagation of light at wavelengths λ0 − Δλ/2 and λ0 + Δλ/2; and FIGS. 3(b) and 3(d) represent the propagation of light at wavelengths λ0 − Δλ/4 and λ0 + Δλ/4. FIG. 3(f) shows the result of combining all of these wavelength components. The aggregate of light formed by gathering the wavelength components within the wavelength width Δλ is called a wave train.

When the crests of all the waves coincide at the center position of FIG. 3, their combination in FIG. 3(f) forms the largest crest at that center position. Moving left or right from the center of FIG. 3, the phases of the different wavelength components drift apart, and at the left and right ends of FIG. 3 the phases of the different wavelength components are completely uncorrelated. When such phase shifts arise between FIGS. 3(a) to 3(e), the amplitude of the combined wave train as a whole decreases toward the periphery.
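This superposition is easy to verify numerically. The short sketch below (parameter values are illustrative only) sums equal-amplitude cosine waves whose wavelengths uniformly fill the width Δλ, all in phase at x = 0 as in FIG. 3, and shows the combined amplitude peaking at the center and collapsing toward the edges:

    import numpy as np

    lam0, dlam, n_waves = 850e-9, 2e-9, 101    # center wavelength and width [m], illustrative
    x = np.linspace(-0.8e-3, 0.8e-3, 20001)    # position along the propagation axis [m]

    # Plane waves with wavelengths uniformly filling [lam0 - dlam/2, lam0 + dlam/2]
    wavelengths = np.linspace(lam0 - dlam / 2, lam0 + dlam / 2, n_waves)
    field = sum(np.cos(2 * np.pi * x / lam) for lam in wavelengths) / n_waves

    print("amplitude at center:", field[len(x) // 2])   # 1.0: all crests coincide
    print("amplitude at x = 0.8 mm:", field[-1])        # near 0: phases decorrelated

The envelope decays over the scale λ0²/Δλ (about 0.36 mm for these values), which is exactly the coherence length derived below.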

The wavelength width Δλ above means the wavelength width of the wavelength components contained in the light emitted from the light source section 2. The spectral intensity characteristic (light intensity per wavelength) of the emitted light often has a non-uniform spectral distribution within that wavelength range. In such cases, the half width of wavelength (the wavelength range over which the intensity is at least half the maximum intensity) or the e^-2 width (the wavelength range over which the intensity is at least e^-2 of the maximum intensity) may be taken as the wavelength width Δλ.

Alternatively, when the light source section 2 emits multi-wavelength (panchromatic) light, the wavelength resolution within the measurement section 8 may be regarded as corresponding to the wavelength width Δλ. For example, in the measurement section 8 described later with reference to FIG. 22, a single preamplifier 1150 (detection cell) detects light of different wavelengths simultaneously, and the wavelength range of the light detected by that one preamplifier 1150 (detection cell) corresponds to the wavelength width Δλ. The spectral sensitivity characteristic (the wavelength dependence of the signal detection sensitivity) of this one preamplifier 1150 (detection cell) may itself be non-uniform; in that case, the half width (the wavelength range with at least half the maximum detection sensitivity) or the e^-2 width (the wavelength range with at least e^-2 of the maximum detection sensitivity) may be taken as the wavelength width Δλ.

 この波連特性を、以下に数式で表わす。図3(a)~(e)の個々の波は、振動数ν-Δν/2からν+Δν/2までの異なる振動数νを持った平面波で表わせる。従ってこの振動数範囲で平面波を積分すると、1個の波連特性は This wave series characteristic is expressed by the following formula. The individual waves in FIGS. 3(a) to 3(e) can be represented by plane waves with different frequencies ν from ν 0 −Δν/2 to ν 0 +Δν/2. Therefore, when integrating a plane wave in this frequency range, the wave chain characteristic is

$$\psi(x,t)=\int_{\nu_0-\Delta\nu/2}^{\,\nu_0+\Delta\nu/2}\exp\!\left[i2\pi\nu\!\left(t-\frac{x}{c}\right)\right]\mathrm{d}\nu=\Delta\nu\,\frac{\sin\!\left[\pi\Delta\nu\!\left(t-\frac{x}{c}\right)\right]}{\pi\Delta\nu\!\left(t-\frac{x}{c}\right)}\,\exp\!\left[i2\pi\nu_0\!\left(t-\frac{x}{c}\right)\right]\qquad\text{(Equation 1)}$$

The sinc function obtained here corresponds to the envelope shown in FIG. 3(f). As Equation 1 also shows, the carrier wavelength in FIG. 3(f) matches the wavelength in FIG. 3(c).
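As a numerical cross-check of Equation 1, the short sketch below superposes plane waves over the band ν₀ ± Δν/2 and compares the magnitude of the result with the sinc envelope; the frequency values are illustrative choices, not taken from the text.

```python
import numpy as np

# Superpose plane waves over nu0 +/- dnu/2 (left-hand side of Equation 1)
# and compare with the sinc envelope (right-hand side).
nu0, dnu = 100.0, 10.0                      # illustrative, arbitrary units
u = np.linspace(-0.5, 0.5, 4001)            # u = t - x/c
nus = np.linspace(nu0 - dnu / 2, nu0 + dnu / 2, 2001)
wave = dnu * np.exp(2j * np.pi * np.outer(nus, u)).mean(axis=0)

envelope = dnu * np.sinc(dnu * u)           # np.sinc(x) = sin(pi*x)/(pi*x)
print(np.max(np.abs(np.abs(wave) - np.abs(envelope))))  # ~0 up to discretization error
```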

The physical distance ΔL₀ from the center of the wave train shown in FIG. 3(f) to its end is called the coherence length. This coherence length is known to be given by

$$\Delta L_0=\frac{\lambda_0^{\,2}}{\Delta\lambda}\qquad\text{(Equation 2)}$$

A simple calculation also relates the wavelength width Δλ and the frequency width Δν through Δν = cΔλ/λ₀². Applying this relationship to Equation 2, therefore,

$$\Delta L_0=\frac{c}{\Delta\nu}\qquad\text{(Equation 3)}$$

is derived.
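A minimal numerical illustration of Equation 2 follows; the 7.5 nm width matches the spectrometer resolution quoted for the experiment below, while the center wavelengths are illustrative.

```python
# Coherence length from Equation 2: dL0 = lam0^2 / dlam.
def coherence_length_um(lam0_um: float, dlam_um: float) -> float:
    return lam0_um ** 2 / dlam_um

for lam0 in (0.9, 1.3, 1.7):                     # center wavelengths in um
    dL0 = coherence_length_um(lam0, 0.0075)      # dlam = 7.5 nm
    print(f"lam0 = {lam0} um -> dL0 = {dL0:.0f} um "
          f"(wave-train length 2*dL0 = {2 * dL0:.0f} um)")
```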

Next, experimental results concerning the characteristics near the end of the wave train shown in FIG. 3(f) are explained. FIG. 4(a) shows the optical system used in the experiment. This optical system roughly consists of a light source section 2, a sample setting section 36, and a measurement section 8; light is transmitted between the sections via optical fibers.

A tungsten halogen lamp HL was used as the light source in the light source section 2. A concave mirror CM placed on the opposite side of the optical path increases the utilization efficiency of the light emitted from the halogen lamp HL: the concave mirror CM reflects the light emitted toward the rear of the halogen lamp HL (the left side of the figure) back into the lamp, and the light that has passed back through the interior of the lamp then travels toward the front of the halogen lamp HL (the right side of the figure).

A lens L1 with a focal length of 25.4 mm converts the light emitted from the halogen lamp HL into a parallel beam, and a lens L2 with a focal length of 25.4 mm then focuses this parallel beam onto the entrance face of a bundle fiber BF. The bundle fiber BF contains 320 optical fibers, each with a core diameter of 230 µm and an NA of 0.22. The optical characteristics conversion element 210 is placed in the parallel optical path between the two lenses L1 and L2.

The filament that emits light in the halogen lamp HL measures 2 mm × 4 mm. Light radiated from the outermost part of the filament therefore produces off-axis aberration (coma) in the imaging optical system formed by the two lenses L1 and L2. To remove the influence of this coma, an aperture A3 with a diameter of 3 mm was placed immediately after the halogen lamp HL.

In the sample setting section 36, a lens L3 with a focal length of 50 mm converts the light emerging from the bundle fiber BF into a parallel beam, and this parallel beam enters the sample TS. An aperture A10 with a diameter of 10 mm was placed immediately in front of the sample to improve the accuracy and reproducibility of the obtained spectral characteristics data.

As FIG. 4(a) shows, in this experiment the spectral characteristics data are obtained using the light transmitted through the sample. A lens L4 with a focal length of 250 mm focuses the transmitted light onto the entrance face (core diameter 600 µm) of a single-core fiber SF. A near-infrared spectrometer with a wavelength resolution of 7.5 nm (Hamamatsu Photonics C11482GA) was used as the spectrometer SM. A structural example of the optical characteristics conversion element 210 is described later with reference to FIG. 7B(b).

FIG. 4(b) shows the structure of the sample TS used in the experiment. A transparent glass plate with refractive index n and mechanical thickness d = d₀ + δd was placed at the position of the sample TS in the sample setting section 36. A lens L4 with focal length F and pupil radius a focuses the light that has passed through this transparent glass plate onto the entrance face of the single-core fiber SF. The core region on the entrance face of the single-core fiber SF corresponds to point P in FIG. 4(b).

Since the front and back surfaces of the transparent glass plate are uncoated, about 4% of the light passing through each surface is reflected there. Consequently, the light S₀ that travels straight through the transparent glass plate and the light S₁ that has been reflected twice at the front and back surfaces interfere at point P.
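The roughly 4% figure follows from the Fresnel reflectance at normal incidence; the minimal sketch below assumes a typical glass index n = 1.5, since the exact index of the sample glass is not stated here.

```python
# Fresnel reflectance per uncoated surface at normal incidence.
n = 1.5                                   # assumed typical glass index
R = ((n - 1) / (n + 1)) ** 2              # -> 0.04, i.e. about 4% per surface
print(f"reflectance per surface: {R:.1%}")
# S1 is reflected twice, so its amplitude is r^2 = R times that of S0.
```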

FIG. 5 shows the interference state between the straight-through light S₀ and the twice-reflected light S₁. The position of the envelope S₀ of the wave train traveling straight through the transparent glass plate was fixed as the reference position. The function S₁, representing the envelope of the wave train after the two reflections, is plotted as the change of relative position when the center wavelength λ₀ is varied from 0.9 µm to 1.7 µm.

The horizontal axis of FIG. 5 is expressed in units of the coherence length ΔL₀ given by Equation 2. Since the mechanical average thickness d₀ between the front and back surfaces of the transparent glass plate is fixed, the mechanical separation between the center of the straight-through wave train S₀ and the center of the twice-reflected wave train S₁ is constant. Now consider converting this constant mechanical distance into units of the coherence length ΔL₀. As Equation 2 shows, the coherence length ΔL₀ varies in proportion to the square of the center wavelength λ₀. The relative position of the two wave trains in FIG. 5 therefore appears to change with the value of the center wavelength λ₀.

The area of the overlap region (the shaded region in FIG. 5) <S₀S₁> between the two wave trains corresponds to the magnitude of the optical interference fringes generated between them. As FIG. 5 shows, the wave trains overlap at center wavelengths λ₀ of 1.7 µm and 1.5 µm. At center wavelengths λ₀ of 1.1 µm and below, however, the overlap between the two wave trains becomes zero and no interference fringes are generated.

FIG. 6A shows the experimental results obtained with the optical system of FIG. 4. As mentioned above, the wavelength resolution of the spectrometer SM was 7.5 nm. When the thickness d₀ of the transparent glass plate was taken to be 138.40 µm, the measured data and the theoretical calculation based on the existing theory almost coincided. However, the region where the measured data and the theoretical calculation almost coincide is limited to wavelengths longer than 1.4 µm. At wavelengths shorter than 1.4 µm, a discrepancy appeared between the theoretical calculation based on the existing theory and the measured data, as explained in the next section.
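The overlap behavior of FIG. 5 can be checked roughly from the numbers quoted above (d₀ = 138.40 µm, Δλ = 7.5 nm); the glass index n = 1.5 is an assumed value, so this is a sketch rather than a reproduction of the experiment.

```python
# Compare the extra optical path of the twice-reflected light S1 with the
# wave-train length 2*dL0 at several center wavelengths.
n, d0, dlam = 1.5, 138.40, 0.0075          # n assumed; d0, dlam in um from the text
opd = 2 * n * d0                           # optical path difference S1 - S0
for lam0 in (1.7, 1.5, 1.3, 1.1, 0.9):     # center wavelengths in um
    dL0 = lam0 ** 2 / dlam                 # Equation 2
    print(f"lam0={lam0} um: OPD={opd:.0f} um, 2*dL0={2 * dL0:.0f} um, "
          f"overlap={'yes' if opd < 2 * dL0 else 'no'}")
# -> overlap (fringes) at 1.3-1.7 um (marginal at 1.3), none at 1.1 um and below.
```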

Section 2.2 Relationship between different wave trains and the interference characteristics of light
FIG. 6B(f) shows the wave-train characteristics outside the end β (around the γ region) calculated with the existing theory. According to the existing theory shown in FIG. 6B(f), the amplitude of the wave train decreases away from its central part α, and the wave train eventually vanishes. In other words, in the existing theory wave trains propagate discontinuously through space like pulsed light. Light that is emitted continuously, such as the radiation from a halogen lamp, cannot be explained by the existing theory.

The calculation results of the existing theory shown in FIG. 6B(f) should also exhibit the following features:
[A] wave-train amplitude appears even in the region γ outside the end β of the wave train;
[B] the phase is inverted between the region α to β inside the wave train and the region outside the end (around the γ region).
However, neither feature [A] nor [B] was observed in the measured data of FIG. 6A.

FIG. 6B(g) shows a physical model newly proposed to resolve the problems of the existing theory. As FIG. 6B(g) shows, if a paradigm shift (a reversal of the direction of phase-angle advance) occurs at the end β of a wave train, generation of the next wave train becomes possible and the measured data of FIG. 6A can be explained well. The physical model in which the direction of phase-angle advance reverses at the end β of the wave train is described below.

From Equations 1 and 3, the envelope of the wave train near its end (near the β position in FIG. 6B) is approximated by

$$E_{\mathrm{env}}(x,t)\;\approx\;\frac{\Delta\nu}{\pi}\,\sin\!\left[\frac{\pi\,(ct-x)}{\Delta L_0}\right]\qquad\text{(Equation 4)}$$

Furthermore, from the standpoint of complex function theory, the sine function satisfies

$$\sin\theta=\frac{e^{\,i\theta}-e^{-i\theta}}{2i}\qquad\text{(Equation 5)}$$

Substituting Equations 4 and 5 into Equation 1, the wave train can be rewritten as

$$\psi(x,t)\;\approx\;\frac{\Delta\nu}{2\pi i}\left\{\exp\!\left[i2\pi\!\left(\nu_0+\frac{\Delta\nu}{2}\right)\!\left(t-\frac{x}{c}\right)\right]-\exp\!\left[i2\pi\!\left(\nu_0-\frac{\Delta\nu}{2}\right)\!\left(t-\frac{x}{c}\right)\right]\right\}\qquad\text{(Equation 6)}$$

Here, at locations satisfying the condition

[Equation 7]

the following relation holds:

[Equation 8]

The upper right-hand side of Equation 8 represents the "preceding (earlier generated) wave train" near the end, and the lower expression of Equation 8 represents the vicinity of the starting end of the "following (later generated) wave train". The particularly noteworthy point is that a "reversal of the direction of phase-angle advance" occurs between the upper right-hand side and the lower expression of Equation 8. When this "reversal of the direction of phase-angle advance" occurs near the end of the "preceding wave train" (near the β position in FIG. 6B), phase synchronization between the constituent wavelength components begins immediately afterwards. As a result, the "following wave train" is generated.

In electromagnetism, electromagnetic waves propagate through space by the interaction of an oscillating electric field and an oscillating magnetic field generated orthogonally to each other. Near the end of the "preceding wave train", however, the phases of the oscillating electric field and the oscillating magnetic field are random among the wavelength components. This random state of the oscillating electric and magnetic fields may form the ground that produces the "reversal of the direction of phase-angle advance".

Equation 7 above gives the theoretical condition under which the paradigm shift occurs. What matters here is that the right and left sides of Equation 7 are related not by an equality but by an approximation. That is, the "reversal of the direction of phase-angle advance" does not occur at a specific phase value within the "preceding wave train"; it occurs at a location within the "preceding wave train" where the phase is unspecified. As a result, a phase discontinuity (phase asynchrony) arises between the "preceding wave train" and the "following wave train".

From the above physical model, the following features are derived:
[C] a following wave train is continuously generated near the end β of the preceding wave train (continuous wave-train generation);
[D] because the timing at which the following wave train is generated is indeterminate, a phase discontinuity (phase asynchrony) arises between the preceding and following wave trains.
The continuous wave-train generation of [C] has not been known before and is derived for the first time from this physical model. The feature [D], on the other hand, is already known, but the reason why it appears had never been explained; the present physical model explains the reason for feature [D] clearly.

Consider shifting the preceding wave train and overlapping it with the following wave train. If there were phase continuity (phase persistence or phase synchronization) between the two, the phase of the combined light obtained by overlapping them would always be uniquely determined. Because of property [D], however, the amount of phase shift between the preceding and following wave trains changes constantly. Therefore, when observed over a certain period (viewed macroscopically in the time direction), no interference between the preceding and following wave trains can be observed. This state is called "incoherence between the preceding wave train and the following wave train" when time is viewed macroscopically.

Over a predetermined observation period, the intensity of the combined light of the preceding and following wave trains then equals the sum of the average intensity of the preceding wave train and the average intensity of the following wave train. In this embodiment, this situation is called "intensity addition".
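A minimal simulation of this intensity addition, assuming the phase jump between successive wave trains is uniformly random (property [D]):

```python
import numpy as np

# Combine two unit-amplitude waves whose mutual phase jump is random,
# then average the intensity over many wave trains.
rng = np.random.default_rng(0)
phase = rng.uniform(0.0, 2.0 * np.pi, 100_000)   # random jump per wave train
I = np.abs(1.0 + 1.0 * np.exp(1j * phase)) ** 2  # instantaneous intensity
print(I.mean())  # -> ~2.0 = |E1|^2 + |E2|^2; the interference term averages out
```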

Section 2.3 Generation of special optical characteristics using wave-train properties
FIG. 7A shows the basic optical system arrangement using the optical characteristics conversion element 210 in this embodiment. The optical characteristics conversion element 210 divides the initial light 200 into a plurality of lights 202 to 206. A first optical path 222 in the optical characteristics conversion element 210 forms a first light 202 having first optical characteristics, and a second optical path 224 forms a second light 204 having second optical characteristics. A light synthesis location 220 then combines the first light 202 and the second light 204 to form the predetermined light 230. At least parts of the first optical path 222 and the second optical path 224 are arranged at different spatial locations, and the first optical characteristics of the first light 202 and the second optical characteristics of the second light 204 differ from each other. This "difference in optical characteristics" may mean the "phase discontinuity (phase asynchrony)" between the two described in the previous section. Alternatively, in connection with the explanation at the end of the previous section, "incoherence" between at least part of the first light 202 and at least part of the second light 204 may constitute the difference in optical characteristics.

Without being limited to this, a third light 206 having third optical characteristics may further be formed in a third optical path 226. In that case, at least part of this third optical path 226 may be arranged at a spatial location different from the first optical path 222 and the second optical path 224.

As a method of arranging at least parts of the first optical path 222, the second optical path 224, and the third optical path 226 at different spatial locations, the lights 202 to 206 may be individually extracted from the initial light 200 by wavefront division (wave front division). In wavefront division, the regions 212 to 216 are arranged at mutually different locations on the beam cross section of the incident initial light 200 (the section obtained by cutting the beam of the initial light 200 with a plane perpendicular to its traveling direction) or on the wave front of the initial light 200, and the lights 202 to 206 are extracted individually.

The above technical content is restated here from the viewpoint of the structure of the optical characteristics conversion element 210 that realizes the optical action. The optical characteristics conversion element 210 used in this embodiment includes a first region 212 and a second region 214 that differ from each other, and the optical path length may be varied for each of the regions 212 and 214. When the difference between the optical path length of the first light 202 in the first region 212 and the optical path length of the second light 204 in the second region 214 is larger than the above-mentioned coherence length ΔL₀, a "phase discontinuity (asynchrony)" (mutually different optical characteristics) arises between the first light 202 and the second light 204. Furthermore, the spatial structure of the optical characteristics conversion element 210 makes it easy to combine the first and second lights 202 and 204 at the light synthesis location 220 to form the predetermined light 230.

As a concrete example of a spatial structure that makes it easy to combine the first light 202 and the second light 204 into the predetermined light 230, the element may have a structure that wavefront-divides the incident initial light 200 into the lights 202 and 204. That is, a spatial structure may be adopted in which the first region 212 is arranged in a predetermined region within the beam cross section obtained by cutting the beam with a plane perpendicular to the traveling direction of the incident initial light 200, and the second region 214 is arranged in another region within that beam cross section. The method is not limited to this, however; as another method, the initial light 200 may be subjected to amplitude division or intensity division.

As another application example, a third region 216 may further be provided in the optical characteristics conversion element 210, and the third light 206 that has passed through this third region 216 may also be combined with the other lights 202 and 204 at the light synthesis location 220.

FIG. 7B(b) shows an example of the structure of this optical characteristics conversion element 210. The optical arrangement of FIG. 7B(a) matches FIG. 4(a) described above. In this case, the optical characteristics conversion element 210 is located in the parallel beam between the two lenses L1 and L2.

Semicircular glass plates 2 mm and 3 mm thick are bonded together rotated by 90 degrees to form one pair. The pairs are then bonded together rotated by 45 degrees with respect to each other, completing an optical characteristics conversion element divided into eight sectors in the angular direction. The glass thicknesses of the eight divided regions differ from one another by 1 mm or more.

In the lower-left region A of the optical characteristics conversion element 210, the thickness of the element (glass) is 0 mm; light passing through region A therefore passes through a portion of the element where no glass exists. Proceeding clockwise from region A through regions B, C, and so on, the glass thickness changes successively to 2 mm, 4 mm, 7 mm, 10 mm, 8 mm, 6 mm, and 3 mm.

Light slows down when passing through glass. When light traverses the same mechanical distance, the optical distance (optical path length) therefore differs between vacuum and glass, so the optical path length of the light after passing the optical characteristics conversion element depends on which of the regions A to H it passed through. In this embodiment, each individual light that has passed through one of the regions is called an "element". That is, different elements have mutually different optical distances (optical path lengths) after passing through the optical characteristics conversion element 210.

By the action of the lens L2 in FIG. 7B(a), all the elements that have passed through the optical characteristics conversion element 210 are combined within the bundle fiber BF. When the optical path length difference between the elements is larger than the coherence length ΔL₀ (or larger than twice the coherence length ΔL₀), a plurality of mutually phase-discontinuous/asynchronous (incoherent) elements coexist within the bundle fiber BF.

As Equation 2 shows, the value of this coherence length ΔL₀ is uniquely determined by the center wavelength λ₀ and the wavelength width Δλ. The center wavelength λ₀ is determined by the wavelength range of the light used (or the maximum wavelength of the light used), or by the wavelength range of the detection light used in the measurement section 8 (or the maximum wavelength of the detection light). The wavelength width Δλ is determined by the wavelength width of the light used or by the detection performance (for example, the wavelength resolution) of the measurement section 8.

BK7 was used as the material of this optical characteristics conversion element 210 (glass), and antireflection coatings were formed on the interfaces (front and back surfaces) where light enters and exits. Let n denote the refractive index of BK7 and d the glass thickness in each region of FIG. 7B(b). The optical path length added within each region relative to air can then be calculated as d(n−1), and the glass thickness between the regions of FIG. 7B(b) differs by 1 mm or more.
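As a rough check that the 1 mm thickness steps satisfy this condition, the sketch below uses the region thicknesses quoted above with an assumed BK7 index n = 1.51 and an illustrative operating point (λ₀ = 1.3 µm, Δλ = 7.5 nm):

```python
# Per-region optical path (relative to air) versus the coherence length.
n = 1.51                                        # assumed BK7 index (near IR)
thickness_mm = [0, 2, 4, 7, 10, 8, 6, 3]        # regions A..H from the text
opl_mm = sorted(d * (n - 1) for d in thickness_mm)
min_step = min(b - a for a, b in zip(opl_mm, opl_mm[1:]))
print(f"smallest inter-region OPL difference: {min_step:.2f} mm")  # ~0.51 mm

lam0_mm, dlam_mm = 1.3e-3, 7.5e-6               # illustrative operating point
dL0 = lam0_mm ** 2 / dlam_mm                    # Equation 2 -> ~0.225 mm
print(f"dL0 = {dL0:.3f} mm, 2*dL0 = {2 * dL0:.3f} mm")
# -> every inter-region OPL step (>= 0.51 mm) exceeds 2*dL0 here.
```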

Under the spectral characteristics measurement conditions explained in the next section with reference to FIG. 8, the glass thickness differences between the above regions are larger than the coherence length ΔL₀ (or than twice the coherence length ΔL₀).

FIG. 7C shows the operating principle when the properties of the continuously generated wave trains explained in the previous section are applied to the optical arrangement explained in FIG. 7A. As already explained in the previous section, successive initial wave trains 400 are considered to have a mutually unsynchronized phase relation 402 (phase discontinuity).

The initial light 200, incident as a continuous succession of the initial wave trains 400 shown in FIG. 7C(a), is wavefront-divided when it passes through the optical characteristics conversion element 210, which manipulates/controls the phase synchronization characteristics. FIG. 7C(b) shows the spatial propagation state (wave-train state 406) of the first light 202 that has passed through the first region 212 in the optical characteristics conversion element 210 of FIG. 7A. Because the first light 202 is extracted as a result of wavefront division of the initial light 200, the amplitude in FIG. 7C(b) is smaller than that in FIG. 7C(a).

FIG. 7C(c) shows the spatial propagation state (wave-train state 408) of the second light 204 extracted after passing through the second region 214. The amplitude in FIG. 7C(c) is almost the same as that in FIG. 7C(b), but an optical path length difference arises between the two. As a result, the center positions of the wave trains 406 and 408 are shifted between FIGS. 7C(b) and 7C(c).

As FIG. 7C(a) shows, phase asynchrony (phase discontinuity) 402 occurs between different wave trains 400, and the size of one wave train is given by 2ΔL₀ (see FIG. 3(f)). Therefore, if the optical path length difference between the first region 212 and the second region 214 in the optical characteristics conversion element 210 of FIG. 7A is made 2ΔL₀ or more, phase asynchrony 402 arises at the same position between FIGS. 7C(b) and (c).

FIG. 7C(d) shows the situation in which both wave trains 406 and 408 are synthesized (combined) 410 at the light synthesis location 220 to form the predetermined light 230. When the optical path length difference between the two is larger than twice the coherence length, 2ΔL₀, given by Equation 2, the wave trains 406 and 408, which are in a mutually phase-asynchronous relation 402, are combined, and light intensity averaging (an ensemble average effect of intensities) 420 occurs. Along with this, an averaging effect (smoothing or reduction effect) arises between the optical noise generated within the first light 202 and the optical noise generated within the second light 204.

The conditions are not limited to the above: even when the optical path length difference between the first region 212 and the second region 214 is merely larger than the coherence length, mutually asynchronous (phase-discontinuous) 402 elements are mixed within the predetermined light 230. Therefore, even when the optical path length difference is larger than the coherence length, light intensity averaging 420 occurs in part, producing an optical noise averaging (smoothing or reduction) effect.

FIG. 7D shows an application example of the structure of the optical characteristics conversion element 210. Semicircular glass plates 1 mm thick are bonded one on top of another, each rotated by 30 degrees, and a semicircular glass plate 6 mm thick is further bonded. Viewed from the light traveling direction 348, the element is then divided into 12 sectors at equal intervals in the angular direction. In this embodiment, the division method that wavefront-divides the cross section of the light in the angular direction about the optical axis along the light traveling direction 348 is called "angular division". Concretely, it means the region division (wavefront division) along the broken lines in FIG. 7D(c). The embodiment of FIG. 7D is divided into 12 equal angular sectors (12 angular divisions). As a result, a glass thickness difference of 1 mm or more arises between the angularly divided regions.

The element of FIG. 7D further has a structure in which cylindrical glass pieces of different diameters are stacked and bonded. In this embodiment, the division method that wavefront-divides the cross section of the light in the radial direction about the optical axis along the light traveling direction 348 is called "radial division". Concretely, it means the region division (wavefront division) along the solid lines in FIG. 7D(c), where the regions are divided circumference by circumference with different radii. The embodiment of FIG. 7D is divided into four in the radial direction (4 radial divisions). The structure of the optical characteristics conversion element 210 shown in FIG. 7D is thus divided into 12 in the angular direction and 4 in the radial direction, so the number of divided regions is 48 (12 × 4). The number of divisions is not limited to this, however, and may be set arbitrarily.

In the embodiment shown in FIG. 7D, the diameters of the radial division boundaries are set so that the radially divided regions have equal areas. The diameters of the cylindrical glass pieces may instead be set at arbitrary intervals. The division method may also be varied according to the intensity characteristics of the light passing through (or reflected by) the optical characteristics conversion element 210. Consider, for example, the case where light with a non-uniform intensity distribution uses the optical characteristics conversion element 210. Such light may have an intensity distribution with high intensity at the center and decreasing intensity toward the periphery. In this case, the boundary diameters of the radial division may be set so that the intensities of the elements passing through the divided regions are approximately equal.
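A minimal sketch of the equal-area boundary choice mentioned above: dividing a circular aperture of radius R into N annular zones of equal area puts the boundaries at r_k = R·sqrt(k/N).

```python
import math

def equal_area_radii(R: float, N: int) -> list:
    """Boundary radii splitting a disk of radius R into N equal-area zones."""
    return [R * math.sqrt(k / N) for k in range(1, N + 1)]

print(equal_area_radii(R=1.0, N=4))
# -> [0.5, 0.707..., 0.866..., 1.0]; each annular zone has area pi*R^2/4
```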

FIG. 7E shows another embodiment of the structure of the optical characteristics conversion element 210. Here the division method is varied according to the intensity distribution characteristics of the light that uses the optical characteristics conversion element 210.

In FIG. 7B(b) and FIG. 7D(c), the angular division is at equal intervals. If the intensity distribution of the light is non-uniform in the angular direction about the optical axis, however, the amount of light varies from element to element passing through the divided regions. FIG. 7E(a) shows an example in which the intensity distribution of the light is angularly non-uniform: the cross section 510 of the laser light emitted from the light emitting location 502 of a semiconductor laser element 500 generally takes an elliptical shape in many cases.

In the optical characteristics conversion element 210 shown in FIG. 7E(b), the angular spacing between the division boundaries is narrow along the major axis of the laser beam cross section 510, while along the minor axis of the laser beam cross section 510 the angular spacing between the boundaries is wide. Making the angular division intervals non-uniform in this way reduces the variation between the intensities of the elements passing through the divided regions. When the intensities of the elements passing through the divided regions become approximately equal, the effect of reducing the amount of optical noise in the combined predetermined light 230 (FIG. 7A) (averaging or smoothing the optical noise generated in each element) improves.
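One way such non-uniform angular boundaries could be chosen is sketched below under an assumed model (not taken from the source): an elliptical Gaussian beam with 1/e² half-widths wx > wy, for which the radially integrated power per unit azimuth is proportional to 1/(cos²θ/wx² + sin²θ/wy²); placing boundaries at equal quantiles of the cumulative azimuthal power then gives equal power per sector.

```python
import numpy as np

def equal_power_angles(wx: float, wy: float, n_sectors: int) -> np.ndarray:
    """Sector boundaries (rad) with equal power per sector for an
    elliptical Gaussian beam of 1/e^2 half-widths wx, wy."""
    theta = np.linspace(0.0, 2.0 * np.pi, 100_001)
    # power per unit azimuth, radially integrated
    p = 1.0 / (np.cos(theta) ** 2 / wx**2 + np.sin(theta) ** 2 / wy**2)
    cdf = np.cumsum(p)
    cdf /= cdf[-1]
    return np.interp(np.arange(1, n_sectors) / n_sectors, cdf, theta)

print(np.degrees(equal_power_angles(wx=2.0, wy=1.0, n_sectors=8)))
# Boundaries cluster near 0 and 180 deg (the major axis): the sectors are
# narrower along the long axis, as in FIG. 7E(b).
```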

Section 2.4 Optical noise reduction effect appearing in the spectral characteristics
FIG. 8(a) shows experimental results on the optical noise reduction effect obtained when the optical characteristics conversion element 210 is used. The optical system of FIG. 7B(a) was used for the experiment, and a diffuser plate with an averaged surface roughness Ra of 2.08 µm was placed as the sample TS. Optical noise is generated by the fine irregularities on the surface of this diffuser plate. The spectral intensity characteristics (measurement wavelength dependence) of the relative intensity obtained from the spectrometer SM ([intensity of the straight-through light with the diffuser plate in place] ÷ [intensity of the straight-through light before the diffuser plate is placed]) contain this optical noise.

The vertical axis of FIG. 8 represents the standard deviation of the variation of the spectral intensity caused by the optical noise generated at the diffuser plate (the intensity difference from the wavelength-direction average of the spectral intensity), normalized by the average value ([variation] ÷ [average value]). The horizontal axis of FIG. 8 represents the number of optical path divisions (number of region divisions) in the optical characteristics conversion element 210. The condition "number of optical path divisions = 1" represents the conventional technique before insertion of the optical characteristics conversion element 210.

As the number of optical path divisions (region divisions) increases, the amount of optical noise (standard deviation) clearly decreases. Each element that has passed through a divided region of the optical characteristics conversion element 210 acquires optical noise when passing through the diffuser plate. However, the optical path taken to reach the spectrometer SM differs slightly from element to element, so the optical noise characteristics appearing in each element differ slightly from one another. When the intensities of all the elements having different optical noise characteristics are added, light intensity averaging 420 (FIG. 7C(d)) takes place between the different optical noise characteristics. As a result, the optical noise characteristics are smoothed (the amount of optical noise is reduced by averaging).
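A minimal simulation of this averaging, assuming the noise patterns acquired by the divided paths are statistically independent (an idealization of the experiment):

```python
import numpy as np

rng = np.random.default_rng(1)
wavelengths, trials = 512, 200
for n_div in (1, 2, 4, 8, 16):
    # one independent multiplicative noise pattern per element
    noise = rng.normal(1.0, 0.05, size=(trials, n_div, wavelengths))
    combined = noise.mean(axis=1)            # intensity addition of the elements
    residual = (combined / combined.mean()).std()
    print(f"divisions={n_div:2d}: residual noise ~ {residual:.4f}")
# The residual noise falls roughly as 1/sqrt(n_div) for independent elements.
```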

FIG. 8(b) shows the optical noise reduction obtained when a diffuser plate with an averaged surface roughness Ra of 1.51 µm was inserted in the light source section 2 of FIG. 7B(a) (between the optical characteristics conversion element 210 and the condenser lens L2). It was confirmed that the optical noise is further reduced by the synergistic effect of the optical characteristics conversion element 210 and the diffuser plate, which acts as a wavefront phase characteristics conversion member.

Chapter 3 Embodiment examples that change the optical characteristics between two waves emitted from the same light emitting section
Section 3.1 Basic optical configuration used in this embodiment
FIG. 9A shows the basic optical configuration used in this embodiment. Two waves, a first light 202 and a second light 204 or 208, are emitted from the same light emitting section 470, and a predetermined optical member 90 changes the optical characteristics between these two waves. The two waves, the first light 202 and the second light 204 or 208 emitted from the same light emitting section 470, may pass through the interior of the predetermined optical member 90 or may be reflected at the entrance surface 92 of the predetermined optical member 90. In either case, after passing via the predetermined optical member 90, the optical characteristics between the first light 202 and the second light 204 or 208 have changed.

In particular, in this embodiment example, an entrance-surface normal 96 perpendicular to the entrance surface 92 of the predetermined optical member is defined. The first light 202 is incident at an angle θ (θ ≠ 0) inclined with respect to this entrance-surface normal 96, and the second light 204 is incident at an angle different from that of the first light 202. Before entering the predetermined optical member 90, therefore, the first light 202 and the second light 204 travel in mutually different directions.

As the second light 208, different from the first light 202, one may define not a light 204 that travels in a direction different from the first light 202 but a light 208 that passes through an optical path different from that of the first light 202. In this case, the traveling direction of the second light 208 may be parallel to the first light 202 (that is, the second light 208 may be incident on the entrance surface 92 of the predetermined optical member at the same inclination angle θ to the entrance-surface normal 96 as the first light 202). Without being limited to this, the second light 208 may also travel in a direction different from the first light 202 and at the same time pass through an optical path different from that of the first light 202.

When the first light 202 passes through the interior of the predetermined optical member 90, after that passage the first light 202 passes through the exit surface 94 of the predetermined optical member. In this case, an exit-surface normal 98 perpendicular to the exit surface 94 of the predetermined optical member is defined. The traveling direction of the first light 202 after passing through the exit surface 94 of the predetermined optical member may have a predetermined inclination angle with respect to this exit-surface normal 98, or may be parallel to it.

However, when the first light 202 passes through the interior of the predetermined optical member 90, as FIG. 9A shows, the traveling direction of the second light 204 after passing through the exit surface 94 of the predetermined optical member must be inclined with respect to (non-parallel to) the traveling direction of the first light 202 after passage.
Also, when the first light 202 passes through the interior of the predetermined optical member 90, the optical path of the second light 208 after passing through the exit surface 94 of the predetermined optical member must differ from the optical path of the first light 202 after passage. The traveling direction of the second light 208 after passing through the exit surface 94 of the predetermined optical member may, however, be parallel to the traveling direction of the first light 202 after passage.

The important points here are:
1. The first light 202 is incident on the entrance surface 92 of the predetermined optical member at an angle inclined with respect to the entrance-surface normal 96 of the predetermined optical member 90.
2. Before incidence on the entrance surface 92 of the predetermined optical member, the second light 204 travels in a direction non-parallel to the first light 202, or the second light 208 travels along an optical path different from that of the first light 202.

In either case, the first light 202 and the second light 204 or 208 after passing via the predetermined optical member 90 are combined (or mixed) at the light synthesis location 220 to form the predetermined light 230.

The basic concept explained with FIG. 9A is summarized below. For the first light 202 and the second light 204 or 208 emitted from the same light emitting section 470 and passing via the predetermined optical member 90:
an entrance-surface normal 96 perpendicular to the entrance surface 92 of the predetermined optical member 90 and an exit-surface normal 98 perpendicular to the exit surface 94 of the predetermined optical member 90 are defined;
the traveling direction of the first light 202 has an inclination angle with respect to at least one of the entrance-surface normal 96 and the exit-surface normal 98; and
the traveling direction of the second light 204 is inclined with respect to the traveling direction of the first light 202, or the optical path of the second light 208 within the predetermined optical member 90 is made different from the optical path of the first light 202, thereby changing the optical characteristics between the first light 202 and the second light 204 or 208.

The relationship between the predetermined optical member 90 explained here and the optical characteristics conversion element 210 explained in the previous chapter is as follows. In this embodiment, the optical characteristics conversion element 210 of FIG. 7A is included as one example of an embodiment of the predetermined optical member 90. As explained up to the previous chapter, the main function of the optical characteristics conversion element 210 was chiefly to give an optical path length difference between the first light 202 and the second light 204 (and also the third light 206). When this optical path length difference is set to at least the coherence length ΔL₀ (or twice it), the phase continuity between the wave trains within the two lights can be interrupted (the phases are desynchronized 402). In comparison, the function of the predetermined optical member 90 explained in this embodiment encompasses that of the optical characteristics conversion element 210.

This predetermined optical member 90 also gives mutually different optical characteristics to the first light 202 and the second light 204 (and also the third light 206). One example of the form of these "mutually different optical characteristics" is phase desynchronization (phase discontinuity) 402. As another example of "mutually different optical characteristics", a mode change (within the waveguide element 110), described later with reference to FIG. 9B, may be produced.

In the explanation using this embodiment, phase desynchronization (phase discontinuity) 402 and mode change are described as the different optical characteristics given between the first light 202 and the second light 204. Without being limited to these, however, the predetermined optical member 90 may give any difference in optical characteristics between the first light 202 and the second light 204.

The positioning of the predetermined optical member 90 within the service provision system 14 and the optical apparatus 10, already explained with FIG. 1, is as follows. In this embodiment, the service provision method 80 corresponds to the concept encompassing the whole.

The service provision system 14 is defined as the means for realizing this service provision method 80, and the optical apparatus 10 is included in this service provision system 14. The prescribed light usage method 82 is positioned as part of the operation of this optical apparatus 10.

The optical measurement unit 84 constitutes part of the optical apparatus 10. In addition, the optical measurement unit 84 is used within the prescribed light usage method 82, and the light source section 2 is included in this optical measurement unit 84.

This light source section 2 is composed of the light emitting section 470, the predetermined optical member 90, and the light synthesis location 220. At the predetermined optical member 90 and the light synthesis location, the first light 202 and the second light 204 or 208 emitted from the light emitting section 470 are manipulated to form the predetermined light 230.

Section 3.2 Concrete example of the basic optical configuration
FIG. 9B shows a concrete embodiment example that combines the predetermined optical member 90 and the light synthesis location 220 explained in FIG. 9A. Here the core region 112 in a waveguide element 110 performs the functions of both the light synthesis location 220 and the predetermined optical member 90. As its concrete form, the waveguide element 110 may be an optical fiber, an optical waveguide, or a light guide.

In FIG. 9B, the entrance face of the waveguide element 110 corresponds to the entrance surface 92 of the predetermined optical member in FIG. 9A. Similarly, the exit face of the waveguide element 110 corresponds to the exit surface 94 of the predetermined optical member in FIG. 9A.

There are two types of entrance face shapes for the optical fiber 110: a structure cut perpendicular to the optical axis and a structure cut at a predetermined angle. Here we tentatively consider an entrance face (and exit face) cut perpendicular to the optical axis of the optical fiber 110. The entrance-surface normal 96 perpendicular to the entrance surface 92 of the predetermined optical member is then parallel to the optical axis direction in the optical fiber 110. In this case, the exit-surface normal 98 perpendicular to the exit surface 94 of the predetermined optical member is likewise parallel to the optical axis direction in the optical fiber 110.

Consider the case where the incidence angle θ of the first light 202 with respect to the entrance-surface normal 96 is made at least a predetermined value and the first light 202 is launched into the core region 112 of the waveguide element (optical fiber/optical waveguide/light guide) 110. In this case, as FIG. 9B shows, the first light 202 is reflected at the interface between the core region 112 and the cladding region 114. Since an actual waveguide element (optical fiber/optical waveguide/light guide) 110 is sufficiently long, this interface reflection is repeated many times. As a result, the first light 202 forms, within the core region, a higher-order mode other than the fundamental mode (for example, the TE2 mode described later in FIG. 16C(b)).

On the other hand, consider the case where the second light 204 enters from a direction substantially parallel to the entrance-surface normal 96 and is launched into the central portion of the core region 112. In this case the second light 204 travels straight through approximately the center of the core region 112, and compared with the first light 202 it is reflected far fewer times at the interface between the core region 112 and the cladding region 114. As a result, the second light 204 forms the fundamental mode (for example, the TE1 mode described later with FIG. 16C(a)) within the core region.

In this way, when the optical arrangement is devised so that:
1. the first light 202 is incident on the entrance surface 92 of the predetermined optical member at an angle inclined with respect to the entrance-surface normal 96 of the predetermined optical member 90, and
2. before entering the entrance surface 92 of the predetermined optical member, the second light 204 travels in a direction non-parallel to the first light 202,
a "difference in modes within the core region 112" arises, corresponding to the different (mutually changed) optical characteristics between the first light 202 and the second light 204. A method of reducing speckle noise (optical noise) that uses this mode difference between the two waves (the first light 202 and the second light 204) is described later in Section 5.3.

As FIG. 9B also shows, the angle ζ formed between the first light 202 emitted from the waveguide element (optical fiber/optical waveguide/light guide) and the exit-surface normal 98 is likewise large. Actual experiments confirm that the emission pattern (far-field pattern) of the first light 202 from the waveguide element (optical fiber/optical waveguide/light guide) is a "doughnut-shaped pattern" (a pattern in which the light intensity decreases in the region near the optical-axis center and increases in regions away from the optical-axis center).

Similarly, in the light intensity distribution of the second light 204 emitted from the waveguide element (optical fiber/optical waveguide/light guide), the intensity increases toward the exit-surface normal 98 (the intensity of light emitted parallel to the exit-surface normal 98 is maximal).

FIG. 9B thus showed a specific embodiment in which, before entry into the entrance surface 92 of the predetermined optical member 90 of FIG. 9A, the traveling direction of the second light 204 is tilted with respect to that of the first light 202 to change the optical characteristics between the two. As another embodiment of FIG. 9A, FIG. 9C shows a specific example in which the second light passes along an optical path different from that of the first light 202 to change the optical characteristics of the two.

As an embodiment of the predetermined optical member 90 shown in FIG. 9C, a transparent parallel flat plate partly provided with a light reflecting surface 118 is used. An antireflection film is formed on the surface of this transparent parallel plate other than the light reflecting surface 118 (on the entrance surface 92 of the predetermined optical member). In the predetermined optical member 90 shown in FIG. 9C, the front surface corresponds to the entrance surface 92 of the predetermined optical member and the back surface corresponds to the exit surface 94 of the predetermined optical member.

The collimating lens 318 converts the divergent light emitted from the light emitting section 470 into parallel light. When a semiconductor laser element 500 is used as this light emitting section 470, the laser beam cross section 510 in the collimated state takes an elliptical shape, as shown in FIG. 7E. In the embodiment of FIG. 9C, the minor-axis direction 88 of this ellipse is parallel to the plane of the paper; accordingly, the major-axis direction of the ellipse points perpendicular to the plane of the paper.

The electric-field vibration direction within the collimated laser light here coincides with the minor-axis direction 88. The predetermined optical member 90, formed of a transparent parallel flat plate, is arranged tilted with respect to the traveling direction of this collimated laser light. The predetermined optical member 90 is tilted within the plane that contains the electric-field vibration direction of the laser light (the plane of the paper). The tilt direction of the entrance surface 92 and exit surface 94 of the predetermined optical member therefore corresponds to the P-wave (parallel-wave) orientation with respect to the electric-field vibration direction of the laser light. It is generally known that the light transmittance for P waves is high at such entrance and exit surfaces. Accordingly, tilting the entrance surface 92 or exit surface 94 of the predetermined optical member in the P-wave direction relative to the electric-field vibration direction of the laser light has the effect of raising the transmission efficiency of the light passing through the entrance surface 92 or the exit surface 94.

The traveling direction of the parallel light after passing through the collimating lens 318 is inclined by an angle θ (θ ≠ 0) with respect to the entrance-surface normal 96 of the predetermined optical member. When the entrance surface 92 and exit surface 94 of the predetermined optical member are parallel to each other, the angle ζ between the traveling direction of the light leaving the exit surface and the exit-surface normal 98 coincides with θ (ζ = θ ≠ 0).
In this embodiment, the exit surface 94 of the predetermined optical member 90 is given a specific light reflectance. Part of the light transmitted through the entrance surface 92 of the predetermined optical member then passes through the exit surface 94, and the remaining light is reflected at the exit surface 94. In the embodiment of FIG. 9C, the light that has passed through the exit surface 94 of the predetermined optical member is treated as the first light 202.

The light reflected at the exit surface 94 is reflected again at the light reflecting surface 118, and part of this reflected light passes through the exit surface 94 of the predetermined optical member. The light passing through this exit surface 94 is treated as the second light 208. As is clear from FIG. 9C, this second light 208 travels along an optical path different from that of the first light 202 described above. Moreover, the optical path length of the second light 208 within the predetermined optical member 90 differs from that of the first light 202. If the optical path length difference between the two is set larger than (twice) the coherence length ΔL0 given by Equation 2, the phase continuity between the first light 202 and the second light 208 is broken (they enter the phase-asynchronous 402 relationship). In other words, from the viewpoint of phase continuity (phase synchrony), the optical characteristics change between the first light 202 and the second light 208.
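This condition can be sanity-checked numerically. The sketch below assumes the common low-coherence estimate ΔL0 ≈ λ²/Δλ for the coherence length (the exact form of Equation 2 is not reproduced in this excerpt) and the standard plane-parallel-plate result for the round-trip optical path; all numerical values are illustrative assumptions.

```python
import math

# Illustrative values only: near-infrared laser diode and a thin glass plate.
wavelength = 850e-9    # center wavelength [m]
linewidth = 1e-9       # spectral width Δλ [m]
n_plate, t = 1.5, 3e-3 # plate refractive index and thickness [m]
theta_deg = 20.0       # external incidence angle θ

# Assumed low-coherence form of Equation 2: ΔL0 ≈ λ²/Δλ.
delta_L0 = wavelength ** 2 / linewidth

# Standard plane-parallel-plate result: one internal round trip between the
# exit surface 94 and the reflecting surface 118 adds 2·n·t·cos(θt) of path.
theta_t = math.asin(math.sin(math.radians(theta_deg)) / n_plate)
extra_opl = 2.0 * n_plate * t * math.cos(theta_t)

print(f"coherence length ΔL0 ≈ {delta_L0 * 1e3:.2f} mm")
print(f"202-vs-208 path difference ≈ {extra_opl * 1e3:.2f} mm")
print("phase-asynchronous:", extra_opl > 2.0 * delta_L0)
```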

Immediately after passing through the exit surface 94 of the predetermined optical member 90, the laser beam cross sections 510 of the first light 202 and the second light 208 are each elliptical. However, immediately after passing through the exit surface 94, the first light 202, the second light 208, and the third light 206 are lined up along the minor-axis direction 88. As a result, the elliptical shape of the overall laser beam cross section 510 is effectively corrected. That is, tilting the predetermined optical member 90 within the plane containing the minor-axis direction 88 of the laser beam cross section 510 emitted from the semiconductor laser element 500 (light emitting section 470) has the effect of correcting the elliptical shape of the laser beam cross section 510.

At the end of this Section 3.2, the features of the optical arrangement of this embodiment are summarized. As FIG. 9A shows, the first light 202 is incident from a direction inclined at angle θ (θ ≠ 0) with respect to the normal orthogonal to the entrance surface 92 of the predetermined optical member 90 (the entrance-surface normal 96). The predetermined optical member 90 then changes the optical characteristics between the first light 202 and the second light 204 or 208 after they pass through the predetermined optical member 90.

In the embodiment of FIG. 9B, the incidence direction of the second light 204 differs from that of the first light 202, and the difference in optical characteristics between the first light 202 and the second light 204 corresponds to a "difference in light propagation mode within the core region 112".

In the embodiment of FIG. 9C, the optical path of the second light 208 differs from that of the first light 202, and the difference in optical characteristics between the first light 202 and the second light 208 corresponds to "phase asynchrony 402 between the two (a break in phase continuity)".

Note that the optical characteristics changed between the first light 202 and the second light 204 or 208 after passing through the predetermined optical member 90 are not limited to the above; any difference in optical characteristics, or any change (discontinuity) in optical characteristics, is acceptable.

Section 3.3 Other Embodiments of the Optical Configuration
Other specific individual embodiments of the predetermined optical member 90 shown in FIG. 9A are described in this Section 3.3. In the various specific embodiments described here:
A) the entrance-surface normal 96 or the exit-surface normal 98 of the predetermined optical member 90 is tilted with respect to the incidence direction of the first light 202, and
B) the entrance surface 92 or the exit surface 94 of the predetermined optical member 90 has a discontinuous surface (a microscopic step shape).

FIG. 9D(a) shows an example of a conventional optical system that corrects the elliptical shape of the laser beam cross section 510 from the semiconductor laser element 500. To clarify the effect of (B) above, the difference between FIG. 9D(a) and (B) is explained below.

The collimating lens or cylindrical lens 120 in FIG. 9D(a) converts the divergent light (divergent laser light) emitted from the light emitting section 470 (semiconductor laser element 500) into parallel light. In this collimated state the equiphase surface 128 of the light is flat, and the laser beam cross section 510 is often elliptical (see FIG. 7E(a)). Here the minor-axis direction 88 of the ellipse lies in the plane of the paper. The wedge prism 130 stretches the parallel light along the minor-axis direction, correcting the elliptical shape into a substantially circular one.

The traveling direction of the parallel light immediately after the collimating lens or cylindrical lens 120 is inclined with respect to the entrance-surface normal 96 of the wedge prism 130, so the optical arrangement of FIG. 9D(a) satisfies feature (A) above. The important point, however, is that the equiphase surface 128 within the parallel light after passing through the wedge prism 130 forms a uniform flat surface everywhere. In the optical noise reduction method described in Section 2.3, the equiphase surface 128 within the parallel light is wavefront-divided into multiple regions, and the optical path length difference between regions is varied by at least the coherence length ΔL0 (or twice it). In the state of FIG. 9D(a), where the equiphase surface 128 after the wedge prism 130 remains uniformly flat, the optical noise reduction effect described in Section 2.3 therefore does not appear.

In FIG. 9D(b), unlike FIG. 9D(a), the exit surface 94 of the predetermined optical member 90 is given fine steps to form a discontinuous surface. The equiphase surface 128 of the light after passing through the exit surface 94 of the predetermined optical member 90 is thereby broken up, and the optical noise reduction effect described in Section 2.3 can be exhibited.

In FIG. 9D(b) the exit surface 94 of the predetermined optical member 90 is made a discontinuous surface. The arrangement is not limited to this, however; the entrance surface 92 of the predetermined optical member 90 may be made a discontinuous surface instead. Making at least part of the entrance surface 92 or exit surface 94 of the predetermined optical member 90 a discontinuous surface with fine steps is what (B) above means. The optical noise reduction effect described in Section 2.3 can thereby be exhibited.

Next, the relationship between the various specific embodiments described in this Section 3.3, including FIG. 9D(b), and the explanation of Section 3.1 using FIG. 9A is described. Among the light emitted from the same light emitting section 470 and passed through the collimating lens or cylindrical lens 120, the light passing through the central portion of the lens is associated with the first light 202, and the light passing through the peripheral portion of the lens is associated with the second light 204. Because their passage locations within the collimating lens or cylindrical lens 120 differ, the first light 202 and the second light 204 follow different optical paths.

The traveling direction of the first light 202 is inclined by an angle θ (θ ≠ 0) with respect to the entrance-surface normal 96 orthogonal to the entrance surface 92 of the predetermined optical member 90. With the optical arrangement described in (A) above, the optical noise reduction effect described in Section 2.3 can be exhibited.

Immediately after the collimating lens or cylindrical lens 120, the parallel light forms a flat equiphase surface 128. Of this wavefront, the equiphase surface 128 of the second light 204 reaches the entrance surface 92 of the predetermined optical member first, and the equiphase surface 128 of the first light 202 reaches the entrance surface 92 later. Tilting the traveling direction of the first light 202 by the angle θ (θ ≠ 0) with respect to the entrance-surface normal 96 of the predetermined optical member 90 in this way changes the optical path length up to the predetermined optical member 90 from one optical path to another.

To exhibit the optical noise reduction effect described in Section 2.3, this optical path length difference must be set larger than (twice) the coherence length ΔL0. To satisfy this condition, with the beam size of the light incident on the predetermined optical member 90 denoted D, it is desirable to set the tilt angle θ so that

[Equation 9]
or
[Equation 10]

holds. The value of Lmax in the above conditional expressions is determined by the mounting-dimension constraints of the light source section 2, the optical measurement unit 84, or the optical apparatus 10. Accordingly, the value of Lmax may be set to 10 m, or desirably 1 m.
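The images for Equations 9 and 10 are not reproduced here, but the surrounding text fixes their content: the path-length spread produced by the tilt θ across the beam size D must exceed (twice) ΔL0 while remaining below Lmax. A hedged numerical check, under the simple assumption that this spread is about D·tan θ:

```python
import math

def tilt_condition(D, theta_deg, delta_L0, L_max, factor=2.0):
    """Assumed reading of Equations 9/10 (not the patent images):
    factor * ΔL0 <= D * tan(θ) <= Lmax."""
    spread = D * math.tan(math.radians(theta_deg))
    return factor * delta_L0 <= spread <= L_max, spread

ok, spread = tilt_condition(D=5e-3, theta_deg=30.0, delta_L0=0.72e-3, L_max=1.0)
print(f"path-length spread across the beam ≈ {spread * 1e3:.2f} mm; ok: {ok}")
```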

The beam size D of the light incident on the predetermined optical member 90 may be set as the effective beam diameter, that is, the maximum diameter of light that can pass through the collimating lens or cylindrical lens 120. Alternatively, the width of the region in the intensity distribution of the light incident on the predetermined optical member 90 where the intensity falls to half of the central maximum (the half-value width or half-value diameter) may be regarded as the beam size D. As yet another alternative, the width of the region where the intensity falls to e^-2 of the central maximum (the e^-2 width or e^-2 diameter) may be regarded as the beam size D.
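For a Gaussian profile I(r) = I0·exp(−2r²/w²), the two alternative definitions of D given above can be computed directly: the half-value diameter is w·√(2 ln 2) and the e^-2 diameter is 2w. A minimal sketch with an illustrative beam radius:

```python
import math

w = 2.0e-3  # Gaussian 1/e^2 radius [m] (illustrative)

# For I(r) = I0 * exp(-2 r^2 / w^2):
half_value_diameter = w * math.sqrt(2.0 * math.log(2.0))  # intensity falls to I0/2
e2_diameter = 2.0 * w                                     # intensity falls to I0*e^-2

print(f"half-value diameter: {half_value_diameter * 1e3:.3f} mm")
print(f"e^-2 diameter:       {e2_diameter * 1e3:.3f} mm")
```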

When the collimating lens or cylindrical lens 120 converts the light emitted from the light emitting section 470 into parallel light, the relation D = 2FNA holds between the focal length F and the NA (numerical aperture) value of the collimating lens or cylindrical lens 120, where the effective beam diameter of the lens is regarded as the beam size D. Using this relation, the above Equation 9 and Equation 10 can be rewritten as

[Equation 11]
or
[Equation 12]

and these may be set as the conditions.
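Since D = 2FNA, the same hedged check as above can be driven from lens data instead of a measured beam size; the focal length and NA below are illustrative assumptions.

```python
import math

F, NA = 8e-3, 0.3   # collimating-lens focal length and NA (illustrative)
D = 2.0 * F * NA    # effective beam diameter D = 2FNA after collimation

theta_deg, delta_L0, L_max = 30.0, 0.72e-3, 1.0
spread = D * math.tan(math.radians(theta_deg))  # assumed reading of Eq. 11/12
print(f"D = {D * 1e3:.1f} mm, spread ≈ {spread * 1e3:.2f} mm,",
      "ok" if 2.0 * delta_L0 <= spread <= L_max else "not ok")
```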

When the semiconductor laser element 500 is used as the light emitting section 470, the laser beam cross section 510 emitted from it is elliptical (see FIG. 7E(a)). The minor-axis direction 88 of this ellipse may coincide with the electric-field vibration direction of the laser light. Accordingly, when the predetermined optical member 90 is tilted within the plane containing this minor-axis direction 88 (the plane of the paper in FIG. 9D(b)), the tilt direction of the entrance surface 92 of the predetermined optical member 90 becomes the P-wave incidence direction of the laser light. For P-wave incidence, the light transmittance at the interface (entrance surface 92) is optically high. Tilting the predetermined optical member 90 within the plane containing the minor-axis direction 88 therefore has the effect of raising the utilization efficiency of the light transmitted through the predetermined optical member 90. At the same time, it also has the effect of correcting the ellipticity of the laser beam cross section 510 (bringing the elliptical shape closer to a circle).

FIG. 9D(a) illustrated the situation in which, when the exit surface of the wedge prism 130 is flat, the equiphase surface 128 of the emitted light is flat everywhere within the laser beam cross section 510 and optical noise cannot be reduced. To obtain the optical noise reduction effect, in FIG. 9D(b) the exit surface of the predetermined optical member 90 is made a discontinuous surface (formed with fine steps) so that the equiphase surface 128 of the emitted light is wavefront-divided. That is, the laser beam cross section 510 is wavefront-divided plane by plane within the steps.

In the embodiment shown in FIG. 9D(b), the exit surface 94 of the predetermined optical member 90 is given a fine step structure. The structure is not limited to this; a fine step structure may instead be provided on the entrance surface 92 side of the predetermined optical member 90. A predetermined optical member 90 whose exit surface 94 or entrance surface 92 carries fine steps may be called a multi-segment Fresnel prism 140.

The number of planes per step within this laser beam cross section 510 corresponds to the wavefront division count, so as the distance P between adjacent steps becomes smaller, the wavefront division count increases. Mechanical tool-cutting techniques can be used to form the fine steps in the exit surface 94 or entrance surface 92 of the predetermined optical member 90, so steps with a small adjacent-step distance P can be formed relatively easily and the wavefront division count can be greatly increased. As FIG. 8 or FIG. 17B (described in detail later) shows, the optical noise reduction effect improves as the wavefront division count increases. In other words, providing a fine step structure on the entrance or exit surface of the predetermined optical member yields a large optical noise reduction by a comparatively easy method.
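For a uniform pitch, the relation stated here is simply N ≈ D / P; a one-line sketch with illustrative numbers:

```python
D, P = 4.8e-3, 0.3e-3      # beam size and uniform adjacent-step distance [m]
n_divisions = int(D // P)  # facets (wavefront divisions) across the beam
print(f"wavefront division count ≈ {n_divisions}")
```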

The light intensity distribution within the beam size D of the light incident on the predetermined optical member 90 is rarely uniform; in many cases the intensity is maximal near the center and decreases toward the periphery. If the adjacent-step distance P is made uniform everywhere, the divided light extracted near the center therefore carries higher intensity. The value of the adjacent-step distance P may accordingly be varied from place to place to match the intensity distribution of the light incident on the predetermined optical member 90. Specifically, P may be made small near the center and gradually larger toward the periphery, as in the sketch below. With such a setting, the light intensity of each divided light (element) extracted at each step is equalized, and the optical noise reduction effect improves.
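One way to realize this graded pitch is to choose the facet boundaries so that each facet intercepts equal power of an approximately Gaussian input profile; the pitch then comes out small near the center and larger toward the periphery automatically. A minimal sketch under that Gaussian assumption (all parameters illustrative):

```python
import math

def equal_power_boundaries(w, n, span=2.0):
    """Split [-span*w, span*w] into n zones of equal power for a Gaussian
    profile I(x) ~ exp(-2x^2/w^2); cumulative power is prop. to erf(sqrt(2)x/w)."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(math.sqrt(2.0) * x / w))
    lo, hi = -span * w, span * w
    total = cdf(hi) - cdf(lo)
    bounds = [lo]
    for k in range(1, n):
        target = cdf(lo) + total * k / n
        a, b = lo, hi
        for _ in range(60):  # bisection to invert the monotonic CDF
            mid = 0.5 * (a + b)
            if cdf(mid) < target:
                a = mid
            else:
                b = mid
        bounds.append(0.5 * (a + b))
    bounds.append(hi)
    return bounds

b = equal_power_boundaries(w=2.0e-3, n=8)
pitches_mm = [(b[i + 1] - b[i]) * 1e3 for i in range(len(b) - 1)]
print("adjacent-step distances P [mm]:", [f"{p:.2f}" for p in pitches_mm])
```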

When the entrance surface 92 or exit surface 94 is given a fine step structure in this way, the optical noise reduction effect improves if the wavefront-divided lights (elements) extracted at adjacent steps are placed in the phase-asynchronous 402 (phase-discontinuous) relationship. As a condition satisfying this characteristic,

[Equation 13]
or desirably
[Equation 14]

may be satisfied. At the optical path length difference ΔL0/2 in Equation 13, complete phase asynchrony (phase discontinuity) 402 does not yet occur between the lights (elements) that have passed through adjacent steps (adjacent regions). However, once the optical path length difference is ΔL0/2 or more, the coherence between the two is greatly reduced, so the optical noise reduction effect appears.
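Equations 13 and 14 are likewise shown only as images; the text pins their content to requiring at least ΔL0/2 (desirably ΔL0) of optical path difference between adjacent facets, with P bounded by Pmax. A hedged check, assuming the per-facet path difference is roughly P·tan θ:

```python
import math

def adjacent_step_ok(P, theta_deg, delta_L0, P_max, strict=False):
    """Assumed reading of Eq. 13 (ΔL0/2) and Eq. 14 (ΔL0), with P <= Pmax;
    the per-facet path difference is taken as roughly P * tan(θ)."""
    dopl = P * math.tan(math.radians(theta_deg))
    need = delta_L0 if strict else 0.5 * delta_L0
    return dopl >= need and P <= P_max

print(adjacent_step_ok(P=1.0e-3, theta_deg=30.0, delta_L0=0.72e-3,
                       P_max=0.1))               # Eq. 13 reading
print(adjacent_step_ok(P=1.0e-3, theta_deg=30.0, delta_L0=0.72e-3,
                       P_max=0.1, strict=True))  # Eq. 14 reading
```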

The value of Pmax appearing in the above expressions is determined by the mounting-dimension constraints of the light source section 2, the optical measurement unit 84, or the optical apparatus 10. The value of Pmax may therefore be set to 10 m or 1 m, desirably 10 cm.

If the predetermined optical member 90 (its entrance surface 92 or exit surface 94) is tilted with respect to the traveling direction of the first light 202 and the conditions are set to satisfy any of Equations 9 to 14, an optical path length difference arises between the different optical paths and optical noise can be reduced effectively. Using the specific individual embodiments described in this Section 3.3 or the basic optical arrangement described in Section 3.1 therefore has the effect of facilitating miniaturization of the optical system within the light source section 2. Furthermore, since the number of optical components used can be greatly reduced, it also has the effect of lowering the cost of the light source section 2 or of the optical apparatus 10 as a whole.

When the entrance surface 92 or exit surface 94 is given a fine step structure in this way, the exit-surface normal 98 or entrance-surface normal 96 is defined as "the normal of each plane formed between finely divided adjacent steps". In a Fresnel lens, for example, the planes of the individual steps are mutually inclined; in that case the angle of the exit-surface normal 98 or entrance-surface normal 96 differs from step to step.

The exit-surface normal 98 for the first light 202 means the normal orthogonal to the fine plane through which the first light 202 passes. The angle ζ between the first light 202 emitted from the predetermined optical member 90 and the exit-surface normal 98 can thereby be defined. Similarly, when a fine step structure is provided within the entrance surface 92 of the predetermined optical member, the normal of each fine plane region within the step boundaries may be defined as the entrance-surface normal 96.

As described later in Section 5.1 with reference to FIG. 13, the speckle noise reduction effect improves when the light traveling directions of the individual elements in the mutually phase-asynchronous (phase-discontinuous) 402 relationship are tilted slightly with respect to one another. The tilt amount of each fine plane region within the step boundaries (and the corresponding angle of the exit-surface normal 98 or entrance-surface normal 96) may therefore be varied individually. As a result, the traveling directions after passage differ between the lights passing through the individual fine plane regions. In FIG. 9D(b), the traveling direction of the first light 202 after passing through the exit surface 94 of the predetermined optical member 90 is drawn with a solid arrow and that of the second light 204 with a broken arrow; the traveling directions of the two differ.

When this fine step structure is produced by mechanical cutting, it can be formed easily by tilting the cutting angle of the tool slightly for each fine plane region. Giving the fine plane regions within the step boundaries mutual tilts therefore yields effective speckle noise reduction by a comparatively simple method.

In the fine step structure shown in FIG. 9D(b), the regions between steps (the regions between step boundary lines) are flat. The structure is not limited to this; the regions between steps may be curved, as in a Fresnel lens. Also, instead of a fine step structure, the surface may be given curved-surface discontinuities (curvature changing with location, or the spherical center point changing with location), as in the fly-eye lens described later. A "fine non-uniform characteristic surface", which extends the above concept of discontinuous curved surfaces, may also be used. In that case, the boundary lines where the curved surface becomes discontinuous, or the boundary lines between uniform characteristic surfaces, are defined; the spacing between those boundary lines is redefined as the adjacent-region distance P; and the optical arrangement may be set to satisfy the condition of Equation 13 or Equation 14.

FIG. 9E explains the difference in wavefront (equiphase-surface) characteristics between light passing through a conventional spherical or aspherical lens and light passing through a Fresnel lens or fly-eye lens 142.

FIG. 9E(a) shows an example in which the imaging lens 144 is a spherical or aspherical lens. The continuity of the equiphase surface 128 is maintained everywhere from the light emitting section 470 to the imaging position (position α). Since the phase is the same everywhere within an equiphase surface 128, the phase-asynchrony 402 phenomenon does not occur.

FIG. 9E(b) shows an example in which the imaging optical system is a Fresnel lens or fly-eye lens 142. A fly-eye lens has a structure in which a plurality of spherical or aspherical lenses are arrayed on a two-dimensional plane and joined, so a curved-surface (curvature) discontinuity arises between adjacent lenses. Even when the Fresnel lens or fly-eye lens 142 is used, the light emitted from the light emitting section 470 is imaged at position α.

However, the Fresnel lens or fly-eye lens 142 has fine steps or curved-surface discontinuities on its light entrance surface or light exit surface. Division of the equiphase surface 128 therefore occurs within the light emitted from the Fresnel lens or fly-eye lens 142. When the optical path length difference between the divided equiphase surfaces 128 exceeds the coherence length ΔL0 (or twice it), phase asynchrony (phase discontinuity) 402 arises between the divided equiphase surfaces 128.

In FIG. 9F, a Fresnel lens or fly-eye lens 142 arranged at a tilt is used as the predetermined optical member 90. The entrance-surface normal 96 of the predetermined optical member 90 is tilted by the angle θ with respect to the traveling direction of the first light 202 just before it reaches the entrance surface 92 of the predetermined optical member 90. Tilting the predetermined optical member 90 in this way varies, from one optical path to another, the optical path length from the light emitting section 470 to the entrance surface 92 of the predetermined optical member 90. The optical arrangement is then made such that the optical path length differences between the lights traveling along the different paths (the first light 202, the second light 204, and the third light 206) exceed the coherence length ΔL0 (or twice it).

Instead of collimating the light incident on the predetermined optical member 90 with a collimating lens or cylindrical lens 120 as in FIG. 9F(a), the light may be made incident on the predetermined optical member 90 while still divergent, as in FIG. 9F(b). The collimating lens or cylindrical lens 120 then becomes unnecessary and the component count decreases, with the resulting effect that miniaturization and cost reduction of the optical system as a whole are achieved.

The waveguide element (optical fiber/optical waveguide/light guide) 110 of FIG. 9F(a) corresponds to the light synthesis location 220 of FIG. 9A. Instead of using light propagation within the waveguide element (optical fiber/optical waveguide/light guide) 110 to combine the different elements and generate the predetermined light 230, the measurement object 22 itself may be used, as in FIG. 9F(b).

Near-infrared light in the 0.8 μm to 2.5 μm wavelength range is known to penetrate deep into living bodies. Because the interior of a living body has a complex structure (a complex, fine refractive-index distribution), the near-infrared light that enters undergoes light scattering within the body. In this light-scattering process, a light synthesis operation takes place between the different elements that are mutually in the phase-asynchronous 402 relationship.

FIG. 9G shows an embodiment in which a light-reflecting optical member is used as the predetermined optical member 90 arranged tilted with respect to the traveling direction of the first light 202 just before incidence. As a specific embodiment of this light-reflecting optical member, FIG. 9G uses a multi-segment light reflecting element, or Fresnel-type reflector, 148. The member is not limited to this, however; any optical member 90 having light-reflecting characteristics whose reflecting surface constitutes a "fine non-uniform characteristic surface", such as a "fine step structure", a "finely divided curved-surface structure", or "discontinuity of the radius of curvature or mismatch of the curved-surface centers", can also be used.

When the multi-segment light reflecting element 148 is used as the light-reflecting optical member 90, the adjacent-step distance P can be defined. When some other light reflecting surface formed with a "fine non-uniform characteristic surface" is used, the spacing between the boundary lines at which the uniform characteristic surface (including curved surfaces) changes may be defined as the adjacent-region distance P. In either case, setting the optical arrangement to satisfy either of the conditions of Equation 13 and Equation 14 yields the effect of reducing optical noise in the light reflected from the predetermined optical member 90.

The value of P may also be varied between different locations on the predetermined optical member 90 according to the intensity distribution of the light incident on it. For example, the intensity distribution of the incident light may take its maximum near the center and fall off toward the periphery. In that case the adjacent-step distance (adjacent-region distance) P may be narrowed near the center of the incident light and widened at its periphery. The light intensities of the lights (elements) reflected in the individual regions are thereby brought closer to uniform, and the optical noise reduction effect improves.

In FIG. 9G, an Fθ lens or collimating lens 324 is used as the optical element that changes the divergence of the divergent light emitted from the light emitting section 470. An Fθ lens has the property of focusing parallel light incident at an angle θ onto a position on the focal plane displaced by F·θ, where F is its focal length; hence the name. A collimating lens has basically the same characteristics, but as the value of F·θ (the image height) increases, the coma aberration grows. In an optical system where the value of F·θ (image height) is small, a collimating lens can therefore be used instead of an Fθ lens.
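The defining property quoted here, a focal-plane displacement of F·θ for incidence angle θ (in radians), can be made concrete in a few lines; the comparison with the F·tan θ mapping of an ordinary lens is standard, and the focal length is an illustrative assumption.

```python
import math

F = 20e-3  # focal length [m] (illustrative)
for theta_deg in (1, 5, 10, 20):
    th = math.radians(theta_deg)
    y_ftheta = F * th              # Fθ lens: spot displaced by F·θ
    y_ordinary = F * math.tan(th)  # ordinary lens: ~F·tan(θ)
    print(f"θ = {theta_deg:>2} deg:  Fθ lens {y_ftheta * 1e3:6.3f} mm, "
          f"ordinary lens {y_ordinary * 1e3:6.3f} mm")
```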

Here too, the fine planes within the step boundary lines are defined as the entrance surfaces 92 of the predetermined optical member, and the normal orthogonal to each such plane corresponds to the entrance-surface normal 96. In the embodiment shown here, all entrance-surface normals 96 within the predetermined optical member 90 are mutually parallel (all of the finely divided entrance surfaces 92 of the predetermined optical member are mutually parallel).

A tilt angle θ is formed between the traveling direction of the first light 202 just before incidence on the predetermined optical member 90 and the entrance-surface normal 96. If θ = 0 here, the light reflected at the entrance surface 92 of the predetermined optical member returns to the emission point within the light emitting section 470. In this embodiment, therefore, setting θ ≠ 0 causes the light reflected at the entrance surface 92 of the predetermined optical member to be focused at a position different from the light emitting section 470. The entrance surface (the core region 112 within it) of the waveguide element (optical fiber/optical waveguide/light guide) 110 is placed at this focal position. The mutually different elements are combined while passing through the waveguide element (optical fiber/optical waveguide/light guide) 110; the interior of this waveguide element 110 therefore corresponds to the light synthesis location 220 of FIG. 9A.
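Why θ ≠ 0 moves the reflected focus away from the emitter can be quantified with the standard mirror rule that a tilt of θ deviates the reflected beam by 2θ; behind a lens of focal length F, the return spot then lands roughly 2Fθ from the emission point. A sketch under these assumptions (values illustrative):

```python
import math

F = 20e-3        # Fθ-lens/collimator focal length [m] (illustrative)
theta_deg = 2.0  # tilt θ of the entrance-surface normal 96 vs. the beam

# Mirror tilt θ deviates the reflected beam by 2θ, so the return spot lands
# roughly F·tan(2θ) ≈ 2Fθ away from the emission point of light emitting
# section 470; the core region 112 entrance would be placed there.
offset = F * math.tan(2.0 * math.radians(theta_deg))
print(f"return-spot offset ≈ {offset * 1e3:.2f} mm")
```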

A fine step structure is formed on the light entrance surface 92 (light reflecting surface) of the predetermined optical member 90. The plane (or curved surface) obtained by macroscopically averaging this fine step structure is defined as the macroscopic entrance surface 122 of the predetermined optical member. Alternatively, the macroscopic entrance surface 122 may be defined by the upper or lower envelope surface of the fine step structure. The normal orthogonal to this macroscopic entrance surface 122 is defined as the macroscopic entrance-surface normal 126.
The angle ξ between the traveling direction of the first light 202 just before incidence on the predetermined optical member 90 and this macroscopic entrance-surface normal 126 can then be defined.

The optical path length difference generated between the first light 202 and the second light 204 in the optical system of FIG. 9G arises along the optical path from the light emitting section 470 to the waveguide element (optical fiber/optical waveguide/light guide) 110 (which corresponds to the light synthesis location 220 of FIG. 9A). In particular, when the reflective predetermined optical member 90 (the multi-segment light reflecting element (Fresnel-type reflector) 148) is used, the path length difference accumulates over the doubled optical path of the outgoing and returning passes around the reflection. With the reflective predetermined optical member 90, an optical path length difference twice the mechanical arrangement dimension is therefore obtained. When a predetermined optical path length difference is to be set by tilting the predetermined optical member 90 with respect to the traveling direction of the first light 202, using the reflective predetermined optical member 90 thus has the effect of allowing the optical system to be miniaturized.
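The factor-of-two advantage of the reflective member noted here is easy to see numerically: for the same tilt and beam size, the out-and-back geometry accumulates twice the path spread of a single transmissive pass. An illustrative comparison, under the same D·tan(angle) assumption as above:

```python
import math

D, angle_deg = 5e-3, 15.0
spread_transmissive = D * math.tan(math.radians(angle_deg))  # single pass
spread_reflective = 2.0 * spread_transmissive                # out-and-back
print(f"transmissive: {spread_transmissive * 1e3:.2f} mm, "
      f"reflective: {spread_reflective * 1e3:.2f} mm")
```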

When the value of this angle ξ is set large, the optical path length from the light emitting section 470 to the macroscopic entrance surface 122 of the predetermined optical member 90 varies greatly from one optical path to another. For the reasons already explained above, for the beam cross-section size D just before incidence on the predetermined optical member 90, the relation

[Equation 15]
or
[Equation 16]

may be satisfied. For the reasons already explained, the value of Lmax included in Equations 15 and 16 may be set to 10 m, or desirably 1 m.

The beam cross-section size D just before incidence on the predetermined optical member 90 may be defined by the effective beam diameter of the light emitted from the light emitting section 470 that can pass through the Fθ lens or collimating lens 324. Alternatively, the width of the region in the intensity distribution of the light emitted from the light emitting section 470 where the intensity falls to half of the maximum (the half-value width or half-value diameter) may be regarded as the beam cross-section size D. As another approach, the width of the region where the intensity falls to e^-2 of the maximum (the e^-2 width or e^-2 diameter) may be regarded as the beam cross-section size D.

Let F be the focal length of the Fθ lens or collimating lens 324. Between the beam cross-section size D (effective beam diameter) of the parallel light just before incidence on the predetermined optical member 90 and the NA value of the Fθ lens or collimating lens 324, the relation D = 2FNA holds. Using this relation, the above Equation 15 and Equation 16 can be rewritten as

[Equation 17]
or
[Equation 18]

Therefore, when an optical system that converts the light into parallel light with an Fθ lens or collimating lens is used, the conditions of Equation 17 or Equation 18 may be applied. When this condition is satisfied, the effect arises that optical noise is reduced in the light after passing through the multi-segment light reflecting element 148 (the predetermined optical member 90).

Next, the condition is described under which the optical coherence between the reflected lights (elements) from adjacent steps (adjacent regions) is reduced, so that the optical noise arising in the predetermined light 230 combined within the waveguide element (optical fiber/optical waveguide/light guide) 110 is reduced. When the optical path length difference between the reflected lights (elements) from adjacent steps (adjacent regions) becomes ΔL0/2 or more, the optical coherence between the two is reduced. Accordingly, as the value of the adjacent-step distance (adjacent-region distance) P,

[Equation 19]
or desirably
[Equation 20]

may be satisfied. For the reasons described above, the value of Pmax appearing in these expressions may be set to 10 m or 1 m, desirably 10 cm.

When the semiconductor laser element 500 is used as the light emitting section 470, the emitted laser beam cross section 510 often takes an elliptical shape. The reflective predetermined optical member 90 may be tilted in a direction within the plane containing the major-axis direction 78 of this ellipse (the plane of the paper in FIG. 9G). This arrangement not only achieves miniaturization of the optical system as a whole but also increases the wavefront division count (set by the number of steps) along the major-axis direction. As a result, the optical noise reduction effect improves (see FIG. 8).

In the embodiment of FIG. 9G, the Fθ lens or collimating lens 324 is used and parallel light is made incident on the predetermined optical member 90. The configuration is not limited to this, however; an optical system without the Fθ lens or collimating lens 324 may also be used. In that case the light reflecting surface of the predetermined optical member 90 is a Fresnel-type concave mirror (multi-segmented concave mirror) or a concave Fresnel-type elliptical-surface mirror (multi-segmented elliptical-surface mirror). Without the Fθ lens or collimating lens 324, the divergent light emitted from the light emitting section 470 is incident directly on the reflective predetermined optical member 90. As long as the divergent light incident directly on the reflective predetermined optical member 90 can be focused to one point after reflection (such as the entrance of the waveguide element 110), the reflecting-surface shape of the reflective predetermined optical member 90 is not limited to the above and may be set arbitrarily.

In the embodiment of FIG. 9G, the direction in which the microscopic entrance surfaces 92 of the predetermined optical member are tilted coincides with the direction in which the macroscopic entrance surface 122 of the predetermined optical member is tilted. This has the effect of raising the light utilization efficiency of the multi-segment light reflecting element 148 (predetermined optical member 90). Expressed as a formula, the condition that these two tilt directions coincide is ξ × θ ≥ 0.

FIG. 9H is an explanatory diagram of the effect of the above condition. A condition deviating from it is expressed as ξ × θ < 0. Under that condition, as FIG. 9H shows, the step side faces are exposed to the traveling direction of the incident light. Light reflected from these exposed step side faces becomes a stray-light component 146, and the utilization efficiency of the light returning to the waveguide element (optical fiber/optical waveguide/light guide) 110 decreases.

 FIG. 9I shows an application of the embodiment of FIG. 9G. The content explained with reference to FIG. 9G therefore remains valid here. When a semiconductor laser element 500 is used as the light emitting section 470, speckle noise tends to occur. If, however, the irradiation direction toward the measurement object 22 is shifted slightly for each element that is in a phase-asynchronous 402 relationship (elements that do not interfere with one another), speckle noise is greatly reduced. Details are given in Section 5.1 with reference to FIG. 13.

 To this end, the tilt amounts θ1 to θ3 are shifted slightly between the entrance surfaces 92-1 to 92-3 of the individual predetermined optical members separated by the steps. Let θ1 be the angle between the entrance-surface normal 96-1, perpendicular to the entrance surface 92-1 of the predetermined optical member on which the first light 202 is incident, and the traveling direction of the first light 202 immediately before incidence. Similarly, let θ2 be the angle between the entrance-surface normal 96-2, perpendicular to the entrance surface 92-2 on which the second light 204 is incident, and the traveling direction of the second light 204 immediately before incidence. And let θ3 be the angle between the entrance-surface normal 96-3, perpendicular to the entrance surface 92-3 on which the third light 206 is incident, and the traveling direction of the third light 206 immediately before incidence. The traveling directions of the first light 202, the second light 204, and the third light 206 immediately before incidence are parallel to one another. Therefore, when the entrance surfaces 92-1, 92-2, and 92-3 are tilted relative to each other, a relationship such as θ1 < θ2 < θ3 or θ1 > θ2 > θ3 results.

 A transmissive or reflective diffuser plate 460 is placed on the focal plane of the Fθ lens or collimating lens 324, and the entrance surfaces 92-1, 92-2, and 92-3 of the predetermined optical member are each tilted in accordance with the above condition. The first light 202 is then reflected at the entrance surface 92-1 and focused at position ρ1 on the diffuser plate 460. Likewise, the second light 204 is reflected at the entrance surface 92-2 and focused at position ρ2 on the diffuser plate 460, and the third light 206 is reflected at the entrance surface 92-3 and focused at position ρ3 on the diffuser plate 460.
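 As a rough numerical sketch of this geometry (assuming an ideal Fθ lens of focal length f, so that an angle maps to a focal-plane position as ρ = f × angle, and that a facet tilted by an extra Δθ deviates its reflected beam by 2Δθ; the numbers are illustrative, not from the source):

f_mm = 50.0                                # assumed focal length of the Fθ lens
tilt_offsets_rad = [0.000, 0.002, 0.004]   # illustrative facet tilt offsets

# Each mirror facet tilted by an extra dtheta deviates its reflected beam by
# 2*dtheta; the Fθ lens maps that angle to a focal-plane position rho = f*angle.
rho_mm = [f_mm * 2.0 * dtheta for dtheta in tilt_offsets_rad]
print(rho_mm)   # distinct spots rho1, rho2, rho3 on the diffuser plate 460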

 The optical system of FIG. 9I uses a Koehler illumination system to irradiate the measurement object 22 with light. Each light component (element) that passes through a different condensing point ρ1, ρ2, ρ3 and then through the collimating or imaging lens 450 therefore travels in a slightly different direction, and these components illuminate the measurement object 22 in mutually superimposed form. In addition, the diffuser plate 460 placed on the condensing plane mixes the individual light components with one another (promoting further combination).

 A reflecting surface 254 that reflects part of the light is installed immediately in front of the diffuser plate 460 on the condensing plane, and the light reflected there is focused into the photodetecting element 250. The light amount detected by this photodetecting element 250 is fed back to the emission level of the light emitting section 470.

 A light-reflective type may be used as the diffuser plate 460 on the condensing plane, and this reflective diffuser plate 460 may be tilted. The amount of tilt may be adjusted so that the light reflected by the reflective diffuser plate 460 travels toward the front of the page. In this case, the collimating or imaging lens 450 is placed partway along the optical path traveling toward the front of the page. This optical arrangement has the effect of allowing the light source section 2 to be made significantly thinner.

 Alternatively, as described later with reference to FIG. 27A, piezoelectric elements 526 and 528 may be attached to the tilted reflective diffuser plate 460 so that its tilt angle changes over time. A light reflecting plate 520 coupled to the piezoelectric elements 526 and 528 may also be installed at the same location in place of the reflective diffuser plate 460. The data (or images) collected by the measurement section 8 for the different tilt angles may then be time-summed or time-averaged to reduce speckle noise further (speckle noise reduction is described in detail in Section 8.2).

 As with FIG. 9G, the Fθ lens or collimating lens 324 may also be omitted from the embodiment shown in FIG. 9I. In that case, giving the entire set of entrance surfaces 92-1 to 92-3 of the reflective predetermined optical member 90 a macroscopically concave curvature causes the reflected light to converge.

 Chapter 4 Hybrid Light Source Unit and Its Utilization Form
Section 4.1 Relationship between biological components and absorption wavelengths
No existing light emitting source used for the light emitting section 470 is universally suitable; each has its own advantages and disadvantages. For example, a single existing LED (light-emitting diode) element has a relatively narrow emission wavelength width and is therefore not suitable by itself as a light source for measuring spectral characteristics. Thermal emission sources such as halogen lamps, on the other hand, emit over a wide wavelength range but with relatively low emission intensity.

 This embodiment uses a hybrid light source unit that combines a plurality of light emitting elements. This broadens the range of uses for the light source section 2 and makes it possible to provide users with a variety of services. When providing such services, safety is extremely important. Semiconductor laser light and LED light can deliver high optical intensity, but high emission intensity in the visible range carries a risk of damaging the human eye.

 FIG. 10A shows the absorption wavelengths of the individual biological constituents 988 for near-infrared light in the 0.9 μm to 1.8 μm wavelength range. The 1.35 μm to 1.80 μm range is called the first overtone region and exhibits relatively large light absorption. Within this range, protein, carbohydrate, and lipid show relatively large absorption, in that order from the short-wavelength side. The 0.90 μm to 1.25 μm range is called the second overtone region, where the amount of light absorption is comparatively small. Within this range, the absorption wavelengths of the biological constituents 988 line up, from the short-wavelength side, in the order carbohydrate, protein, lipid. An important characteristic here is that water absorption is very large in the wavelength range of 1.35 μm and above, corresponding to the first overtone region and the combination band region. Conversely, water absorption is small in the range below 1.35 μm, corresponding to the second overtone region.

 FIG. 10B shows the cross-sectional structure of the human eye. Light entering from the outside reaches the retina 150 through the crystalline lens 158 and the vitreous body 154. The retina 150 is the part most easily damaged by light irradiation. The crystalline lens 158 and the vitreous body 154 contain a large amount of water. Light in the wavelength range of 1.35 μm and above, where water absorption is very large, is therefore absorbed within the crystalline lens 158 and the vitreous body 154 and does not reach the retina 150.

 For these reasons, when a high-intensity source such as a semiconductor laser is used, setting the operating wavelength of the laser light to 1.35 μm or more greatly reduces the risk of eye damage. On the other hand, near-infrared light at wavelengths above 1.8 μm (and especially above 2.4 μm) is affected too strongly by water. If such light is used, there is a risk that water droplets or wetness on the surface of the measurement object 22 will absorb the light and substantially degrade the measurement accuracy.

 Accordingly, in this embodiment, when the measurement section 8 uses light reflected from a measurement object 22 that may include a human being, and a laser emitting element (such as the semiconductor laser element 500) is placed inside the hybrid light source unit, the center emission wavelength λ0 of the laser emitting element is set to 1.35 μm or more and 2.4 μm or less (preferably up to 1.8 μm). This has the effect of ensuring high measurement accuracy while reducing the risk of eye damage.

 The optical characteristics shown in FIG. 10A also provide important information for measuring the interior of a living body. Consider, as an example, an investigation of the spectral characteristics of light transmitted through about 1 cm of living tissue, such as a fingertip. Light at wavelengths of 1.35 μm or less (light scattered within the body) passes through roughly 1 cm of tissue, and the characteristics of the transmitted light can be detected as a signal. In contrast, light at wavelengths above 1.35 μm is absorbed by the water in the body, and almost no transmitted signal can be detected. This behavior agrees well with the wavelength dependence of water absorption shown in FIG. 10A. Therefore, in this embodiment, when measuring characteristics inside a living body, the wavelength of the light emitted from the hybrid light source unit is set to 1.35 μm or less.

 Measuring with light in the second overtone region shown in FIG. 10A also enables relatively high-precision measurement. As FIG. 10A shows, the measurement wavelengths suited to the second overtone region are 0.9 μm and above. Therefore, in this embodiment, when measuring characteristics inside a living body, the wavelength of the light emitted from the hybrid light source unit is set to 0.9 μm or more and 1.35 μm or less.

 Next, the role of the laser light in this embodiment is explained for the case where characteristics inside a living body are measured using a hybrid light source unit containing a laser emitting element (such as the semiconductor laser element 500). Laser light has high emission intensity but a narrow emission wavelength width, so it is not suitable on its own for spectral characteristic measurement. In this embodiment, measurement with light of a specific wavelength inside the body may therefore be applied to measurements involving temporal change, such as pulsation or respiration measurement.

 When the spectral characteristics inside the body are measured simultaneously using light emitted from another light source installed in the hybrid light source unit, care must be taken that the laser wavelength does not interfere with that spectral measurement. As noted above, measurement in the second overtone region is suited to in-vivo spectral measurement using light from this other source. As FIG. 10A shows, the longest wavelength absorbed by lipids within the second overtone region is around 1.25 μm. In this embodiment, a laser wavelength of 1.25 μm or more may therefore be chosen so as not to disturb the spectral measurement. Summarizing the above considerations: when characteristics inside a living body are measured using the hybrid light source unit of this embodiment, the emission wavelength range of the laser emitting element (such as the semiconductor laser element 500) installed in the hybrid light source unit may be set to 1.25 μm or more and 1.35 μm or less.
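 Collecting the wavelength windows stated in this section in one place (a hedged summary sketch; the function names are hypothetical, while the boundary values are exactly those given above):

def eye_safe_reflected_laser(wl_um: float) -> bool:
    # Reflected-light measurement with a laser: 1.35 um to 2.4 um
    # (preferably no more than 1.8 um) reduces the eye-damage risk.
    return 1.35 <= wl_um <= 2.4

def in_vivo_spectroscopy(wl_um: float) -> bool:
    # Second-overtone window for measuring inside a living body.
    return 0.9 <= wl_um <= 1.35

def hybrid_laser_window(wl_um: float) -> bool:
    # Laser wavelength that does not disturb the simultaneous
    # spectral measurement by the other light source.
    return 1.25 <= wl_um <= 1.35

print(in_vivo_spectroscopy(1.30))   # True: inside the 0.9-1.35 um window
print(hybrid_laser_window(1.30))    # True: inside the 1.25-1.35 um window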

 Section 4.2 Example of Internal Structure of Hybrid Light Emitting Unit
FIG. 11A shows an example of the internal structure of the hybrid light emitting unit 470 in this embodiment. As explained in the previous section, spectral measurement inside a living body requires a measurement wavelength range of 0.9 μm to 1.25 μm using the second overtone region. Emission over the wider range of 0.9 μm to 1.3 μm, which adds a margin to this measurement range, is therefore desirable. However, it is difficult to realize emission over such a wide wavelength range with any single existing LED light source.

 To realize emission over this wide wavelength range, this embodiment uses phosphors 162 and 164 that emit near-infrared light. A phosphor 162 whose center fluorescence wavelength lies on the short-wavelength side and a phosphor 164 whose center fluorescence wavelength lies on the long-wavelength side are stacked. The sum of the emission characteristics of the individual near-infrared emitting phosphors 162 and 164 then yields fluorescent near-infrared light 178 covering the 0.9 μm to 1.3 μm wavelength range.

 This laminated structure is fixed within a transparent sealing region 166 made of transparent resin. The outside of the transparent sealing region 166 is in turn surrounded by a waterproof coating layer 168 made of a material with low moisture permeability, such as polyethylene, transparent silicone, or transparent Teflon (registered trademark). This waterproof coating layer 168 prevents moisture from the outside environment from penetrating into the phosphors 162 and 164.

 Two light emitting sources 160 and 170 are arranged inside this hybrid light emitting unit 470, sharing a common electrode 174 for power supply. Light emitted from an LED light source serving as the first light emitting source 160 excites the near-infrared emitting phosphors 162 and 164. The emission wavelength of this excitation LED must therefore be shorter than the fluorescence wavelengths emitted in the 0.9 μm to 1.3 μm range. An LED light source with a suitable emission wavelength in the 600 nm to 900 nm range is selected to match the characteristics of the near-infrared emitting phosphors 162 and 164.

 A semiconductor laser light source (semiconductor laser element 500) may be used as the second light emitting source 170. This second light emitting source 170 may be given an emission wavelength longer than the fluorescence (emission) wavelengths of the near-infrared emitting phosphors 162 and 164. As a result, the emitted light 176 from the second light emitting source 170 passes through the near-infrared emitting phosphors 162 and 164 without affecting them and can be used effectively outside the hybrid light emitting unit 470.

 Although omitted from FIG. 11A, two photodetectors may be placed inside the hybrid light emitting unit 470 to monitor the individual light amounts emitted from the first and second light emitting sources 160 and 170. By using these photodetectors to apply feedback to the individual emission levels, the emission from the first and second light emitting sources 160 and 170 can be stabilized.

 Section 4.3 Material and structure of near-infrared emitting phosphor and its production method
We begin with the phosphor materials that emit near-infrared light. In this embodiment, the phosphor material contains atoms or ions belonging to the rare earth elements or the transition elements. Using such a phosphor material yields fluorescence in the near infrared. As a specific example, when an inorganic crystal (oxide crystal) containing tetravalent or trivalent chromium ions, which belong to the transition elements, was used as the material, the maximum emission wavelength appeared within the 1030 nm to 1350 nm range.

 An embodiment in which the phosphor material contains atoms or ions belonging to the rare earth elements will now be described. When the trivalent ytterbium ion Yb3+ was used as the rare earth element, the maximum emission wavelength appeared in a band around 1 μm. When trivalent neodymium Nd3+ was used, local emission maxima appeared in the 0.9 μm, 1.06 μm, and 1.3 μm bands. And when trivalent samarium Sm3+ was used, fluorescence was observed over a wide band from 0.85 μm to 1.2 μm.
Trivalent erbium Er3+ or trivalent praseodymium Pr3+ may also be used as the atoms or ions belonging to the rare earth elements. Moreover, this embodiment is not limited to these; atoms or ions belonging to any rare earth element or transition element may be used in the phosphor material.

 FIG. 11B shows the detailed internal structure of the near-infrared emitting phosphors 162 and 164 in this embodiment. Fluorescent substances 182 to 186 with different particle sizes (having a predetermined particle size distribution) are dispersed within a binder region 180. The binder region 180 may use a glass material such as glass SiO2, bismuth oxide Bi2O3, or antimony oxide Sb2O3. The material of the binder region 180 is not limited to these; epoxy resin, acrylic resin, or silicone resin may also be used.

 The fluorescent substance 182 contains one of the atoms or ions belonging to the rare earth or transition elements described above, denoted A. The fluorescent substance 184 contains an atom or ion B belonging to a rare earth or transition element different from A. The fluorescent substance 186 contains an atom or ion C belonging to a rare earth or transition element different from both A and B. As explained above, the different atoms or ions A, B, and C have different center emission wavelengths when fluorescing. Mixing the fluorescent substances 182 to 186, which separately contain the different atoms or ions A, B, and C, therefore produces near-infrared emitting phosphors 162 and 164 that fluoresce over a wide wavelength range.

 The center fluorescence wavelengths given above for the various ions correspond to fluorescence occurring inside a relatively large bulk. When the same ion is located near a surface, however, the energy levels of the electron orbitals change slightly, and the center emission wavelength shifts. Furthermore, when the fluorescent substances 182 to 186 are made into fine particles with sizes of about 3 μm to 10 μm, interaction arises between the lattice vibrations (interatomic vibrations) within the particles and the electron orbital levels. The center fluorescence wavelength therefore changes as the particle size of the fluorescent substances 182 to 186 changes. This embodiment exploits this behavior by giving the fluorescent substances 182 to 186 contained in the near-infrared emitting phosphors 162 and 164 a wide particle size distribution. Mixing fluorescent substances 182 to 186 with greatly differing particle sizes in this way has the effect of substantially widening the fluorescence wavelength range.

 For ease of manufacture, in this embodiment the average particle size of the fluorescent substances 182 to 186 is set within roughly the range of 0.5 μm to 100 μm. The particle size distribution range of the fluorescent substances 182 to 186 in this embodiment is specified by the allowed range of the ratio of the maximum to the minimum particle size of the fluorescent substances 182 to 186 contained in the same near-infrared emitting phosphor 162 or 164.

 For the fluorescent substance 182 containing atoms (or ions) A belonging to a rare earth or transition element, define the minimum particle size Dmin and the maximum particle size Dmax; manufacturing is controlled so that the ratio satisfies N_A ≥ Dmax/Dmin ≥ M_A. Similarly, for the fluorescent substance 184 containing atoms (or ions) B, manufacturing is controlled so that N_B ≥ Dmax/Dmin ≥ M_B, and for the fluorescent substance 186 containing atoms (or ions) C, so that N_C ≥ Dmax/Dmin ≥ M_C.

 Various checks showed that an appropriate value for each of M_A, M_B, and M_C is 1.5, and preferably 4, and that an appropriate value for each of N_A, N_B, and N_C is 1000 or 100, and preferably 10.
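 As a small illustration of this manufacturing control (a hypothetical helper, assuming measured particle diameters are available as a list in micrometers; the default bounds are the preferable values given above):

def size_ratio_ok(diameters_um, m_lower=4.0, n_upper=10.0):
    # Check the particle-size ratio condition N >= Dmax/Dmin >= M.
    d_min, d_max = min(diameters_um), max(diameters_um)
    ratio = d_max / d_min
    return m_lower <= ratio <= n_upper

print(size_ratio_ok([3.1, 4.8, 7.5, 9.9]))   # False: ratio ~3.2 is below M = 4
print(size_ratio_ok([2.4, 4.8, 7.5, 9.9]))   # True: ratio ~4.1 lies within [4, 10]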

 FIG. 11C shows the production method for the near-infrared emitting phosphor in this embodiment. At the start of production (ST01), lumps of fluorescent material are first created, each containing atoms (or ions) A, B, or C belonging to a rare earth or transition element, as shown in step ST02.

 For example, to create lumps of fluorescent material containing the rare earth atoms (or ions) ytterbium, neodymium, or samarium, powders of their oxides Yb2O3, Nd2O3, and Sm2O3 are prepared. Powders of glass SiO2, bismuth oxide Bi2O3, and antimony oxide Sb2O3 are prepared as the inorganic materials (oxide materials) to be mixed with them. When these are mixed individually and held at a temperature of 1250°C for about 10 minutes, they melt and form lumps of fluorescent material.

 In the next step, ST03, these lumps are crushed into powder. The material is ground finely until the average particle size of an individual fluorescent particle in the powder is about 3 μm to 10 μm. As described above, the particle size distribution of the fluorescent substances strongly influences the fluorescence intensity distribution of the near-infrared emitting phosphors 162 and 164 across wavelength. The ground powder is therefore sorted by particle size using sieves of uniform mesh size. First, the ground powder is passed through a sieve with a large mesh size to remove fluorescent particles of out-of-specification size. The sieve mesh size is then gradually reduced so that fluorescent particles whose sizes fall within the predetermined ranges are selected in turn. In this way, fluorescent substance powders 182 to 186 with the predetermined particle size distributions are extracted (ST04).

 In step ST05, the fluorescent substance powders 182 to 186, each with its own particle size distribution, are blended with a liquid or powdered binder 180. This liquid or powdered binder 180 is then solidified (hardened or coagulated) (ST06), completing the production of the near-infrared emitting phosphor (ST07). When a glass material is used as the binder 180, the powdered glass and the fluorescent substance powders 182 to 186 are mixed and then hardened by heating to high temperature. When an organic material such as epoxy resin or silicone resin is used as the binder 180, the mixture is left for a specific time until it cures under the action of a curing agent. When an acrylic photocurable resin is used as the binder 180, it may be cured by ultraviolet irradiation.

 Section 4.4 Embodiment in which the light source section and measurement section are integrated and miniaturized
FIGS. 12A and 12B show structures in which the hybrid light emitting unit 470 described in Section 4.3 with reference to FIG. 11A, the optical arrangement inside the light source section 2 described in Section 3.3 with reference to FIGS. 9F(a) and 9G, and the measurement section 8 that performs spectroscopic measurement are integrated and miniaturized. In both of the embodiments shown in FIGS. 12A and 12B, the hybrid light emitting unit 470 described in Section 4.3 is used as the light emitting section 470.

 Both of the embodiments shown in FIGS. 12A and 12B use a spectroscopic element 320 based on a reflective blazed grating for spectral characteristic measurement. The light separated by this spectroscopic element 320 into each measurement wavelength is focused onto the line sensor 300, which consists of a one-dimensional array of photodetection cells.

 FIG. 12A differs from FIG. 9F(a) in that the light that has passed through the predetermined optical member 90 is focused on the surface of the measurement object 22. That is, a folding mirror plate 314 is installed immediately before the condensing position of the light after the predetermined optical member 90, and the light is folded there toward the back side of the page. It is then focused on the surface of the measurement object 22, which (although not shown) is placed on the back side of the page.

 As explained in Section 4.1, the light emitted from this hybrid light emitting unit 470 penetrates the living body and is repeatedly scattered inside it. Part of the light diffusely scattered within the body exits through the surface of the measurement object 22 (the body surface). Part of this exiting light passes through the pinhole 310. The light that has passed through the pinhole 310 is folded back by the half mirror plate 312 and becomes parallel light after passing through the collimating lens or Fθ lens 322.

 Because the spectroscopic element 320 is slightly tilted, the light reflected by it passes over the upper part of the half mirror plate 312 and reaches the line sensor 300.

 Bending the optical path partway using the folding mirror plate 314 and the half mirror plate 312 in this way has the effect of allowing the integrated structure of the light source section 2 and the measurement section 8 to be made thinner.

 Instead of the structure of the embodiment shown in FIG. 12A, the optical arrangement of FIG. 9F(b), in which the collimating lens or cylindrical lens 120 is removed, may also be used.

 Part of FIG. 12B uses the same optical arrangement as FIG. 9G and, as in FIG. 12A, bends the optical path with two folding mirror plates 314 and 316 to allow a thinner construction. The Fθ lens or collimating lens 324 used in FIG. 12B may be omitted, with the light reflecting surface of the reflective predetermined optical member 90 instead given a concave curved shape. This reduces the number of optical components, making the optical system smaller and less expensive.

 Chapter 5 Other Embodiments for Optical Noise Reduction
Section 5.1 Features of the speckle noise pattern
FIG. 13(a) shows the basic principle behind the generation of speckle noise, a type of optical noise. Two light reflecting regions 1046 are arranged a distance P apart. FIG. 13(a) shows the reflection intensity of the reflected light 1048 reflected in the θ0 direction when the incident light 1042 strikes the light reflecting regions 1046 perpendicularly. According to the theory of light interference, the reflection intensity is then proportional to cos²(πPθ0/λ). The important point is that the reflection intensity varies periodically with the reflection direction θ0 of the reflected light 1048. This periodic variation in reflection intensity is what underlies speckle noise.

 Extending FIG. 13 further, consider the case where multiple light reflecting regions 1046 are arranged regularly with period P. When the position of the eye of a user observing the reflected light 1048 is fixed, the reflection direction θ0 that enters the user's eye differs for each reflection location within the multiple light reflecting regions 1046. Consequently, there are places where the reflection amplitudes from adjacent light reflecting regions 1046 reinforce each other and appear bright, and places where they cancel and appear dark. This appearance is called a speckle noise pattern.

 FIG. 13(b) shows the reflection intensity of the reflected light 1048 reflected in the θ0 direction when the angle of incidence of the incident light 1042 on the two light reflecting regions 1046 changes to θi. According to the theory of light interference, the reflection intensity then changes as cos²{πP(θ0 − θi)/λ}.

 As explained in Section 2.3 with reference to FIG. 7C, different wave trains do not interfere with one another, so combining light from different wave trains corresponds to intensity addition (combination of light intensity values). For example, let the first light 202, containing part of at least one wave train, be perpendicularly incident on the two light reflecting regions 1046 as in FIG. 13(a). At the same time, let the second light 204, containing at least part of another wave train that does not interfere with the first, be incident at angle θi as in FIG. 13(b). The intensity of the combined light (the intensity-added light) reflected in the θ0 direction is then given by cos²(πPθ0/λ) + cos²{πP(θ0 − θi)/λ}. If, for example, the value of θi is optimized so that the second term is at its minimum when the first term is at its maximum, the maxima and minima of the light intensity cancel (are averaged or smoothed). As a result, speckle noise (optical noise) is greatly reduced.
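 A minimal numerical check of this cancellation (illustrative values only; the helper name is hypothetical): choosing θi = λ/(2P) shifts the second fringe pattern by half a period, and the two-beam sum above then equals 1 for every observation direction θ0:

import math

lam = 1.3e-6              # wavelength (illustrative)
P = 50e-6                 # spacing of the two reflecting regions (illustrative)
theta_i = lam / (2 * P)   # offset that shifts the second pattern by half a period

def two_beam_intensity(theta_0: float) -> float:
    # Intensity sum of two mutually non-interfering beams, per the formula above.
    i1 = math.cos(math.pi * P * theta_0 / lam) ** 2
    i2 = math.cos(math.pi * P * (theta_0 - theta_i) / lam) ** 2
    return i1 + i2

for theta_0 in (0.0, 0.005, 0.013, 0.02):
    print(round(two_beam_intensity(theta_0), 6))   # always 1.0: the fringes cancel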

 In other words, consider a situation in which the first light 202 and the second light 204 do not interfere with each other (or interfere only weakly) because they are in a mutually phase-asynchronous (phase-discontinuous) 402 relationship. When the first light 202 and the second light 204 simultaneously irradiate the measurement object 22 at different irradiation angles, speckle noise (optical noise) is reduced.

 For simplicity, FIG. 13 was explained with the intensity addition of only two mutually incoherent (or low-coherence) lights 202 and 204. The embodiment is not limited to this: three or more (or four or more) mutually non-interfering lights 202, 204, and 206 may simultaneously irradiate the measurement object 22 at different irradiation angles. Increasing the number of mutually incoherent (or low-coherence) illuminating lights increases the number of speckle noise (optical noise) patterns being averaged, and thus increases the speckle noise reduction effect.

 FIG. 14A shows an embodiment of an optical arrangement that uses the above principle to reduce speckle noise (optical noise). A phase characteristic conversion element 1050, such as a diffuser plate, is used to irradiate the light irradiation target 1030 with the lights 202, 204, and 206 that have passed through the different regions 212, 214, and 216, superimposed but at varied irradiation angles. The surface of the phase characteristic conversion element 1050 has fine irregularities and therefore diffuses the light passing through it. The irradiation angle at an arbitrary position on the light irradiation target 1030 then differs among the first light 202, the second light 204, and the third light 206, taking the values θ1, θ2, and θ3. At the same time, the first light 202, the second light 204, and the third light 206 overlap at that position.

 Because the irradiation angles differ, the speckle noise (optical noise) pattern appearing on the light irradiation target 1030 differs among the first light 202, the second light 204, and the third light 206. Since the first light 202, the second light 204, and the third light 206 are mutually non-interfering (or weakly interfering), the different speckle noise patterns mix on the light irradiation target 1030. As a result, the speckle noise patterns are averaged (smoothed), and the overall amount of speckle noise (optical noise) decreases.

 Consider now the correspondence between FIG. 14A and FIG. 9A. As described above, the first light 202, the second light 204, and the third light 206 overlap on the surface of the light irradiation target 1030. The surface of the light irradiation target 1030 therefore corresponds to the light combining location 220. Furthermore, the surface of the light irradiation target 1030 can be regarded as also serving as the entrance surface 92 of the predetermined optical member. The perpendicular to this surface then corresponds to the entrance-surface normal 96. As FIG. 14A shows, the traveling directions of the first light 202, the second light 204, and the third light 206 heading toward the light irradiation target 1030 make angles θ1, θ2, and θ3 with the entrance-surface normal 96. It is clear from FIG. 14A that the angle θ1 between the traveling direction of the first light 202 and the entrance-surface normal 96 satisfies θ1 ≠ 0.

 FIG. 14B shows an application example of this embodiment. In FIG. 14B, the mutually non-interfering (or weakly interfering) lights 202, 204, and 206 are focused at spatially different positions. When a Koehler illumination system 1026 is adopted as the illumination system for the light irradiation target 1030, the lights 202, 204, and 206 focused at these different positions mix (overlap) and illuminate any given position within the light irradiation target 1030. Their irradiation angles at that position also differ from one another. As a result, the speckle noise patterns are averaged (smoothed), and the overall speckle noise (optical noise) decreases.

 As the means of focusing the mutually non-interfering (or weakly interfering) lights 202, 204, and 206 at spatially different positions, FIG. 14B uses a fly-eye lens 1028, in which lenses with multiple optical axes are arranged in the same plane. In FIG. 14B(a), this fly-eye lens 1028 is placed immediately after the optical characteristic conversion element 210. In FIG. 14B(b), the fly-eye lens 1028 is placed immediately in front of the optical characteristic conversion element 210 and formed integrally with it.

 In both FIG. 14B(a) and FIG. 14B(b), the third, second, and first lights 206, 204, and 202, which individually pass through the third, second, and first regions 216, 214, and 212, are focused at positions α, β, and γ, respectively. With the Koehler illumination system 1026, the lights 206, 204, and 202 mix after passing through their respective condensing positions and illuminate the light irradiation target 1030 at different irradiation angles.

 The example of FIG. 14B uses the fly-eye lens 1028 to focus the lights 206, 204, and 202 passing through the different regions 216, 214, and 212 at the different positions α, β, and γ. However, the lights may be focused at the different positions α, β, and γ by any other method. As another example embodiment, a liquid crystal lens array may be used in place of the fly-eye lens 1028.

 As an application of this embodiment, a bundle fiber 1040 may be used instead of a single-core fiber. FIG. 14C(a) shows an application example using the bundle fiber 1040. The light source section 2 consists of the light emitting section 470 and the optical characteristic conversion section 480. The mutually non-interfering (or low-coherence) first light 202 and second light 204 leaving the light source section 2 are directed onto the measurement object 22 by a Koehler illumination system 1026. The focal length of the collimating lens 318 installed in this Koehler illumination system 1026 controls the difference between the irradiation angles of the first light 202 and the second light 204 at the measurement object 22: the shorter the focal length of the collimating lens 318, the larger the irradiation angle difference between the two.
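 The focal-length dependence can be sketched with paraxial optics (an assumption consistent with, though not stated in, the text: two sources separated laterally by s at the front focal plane of a collimating lens of focal length f emerge as beams separated in angle by roughly s/f; the numbers are illustrative):

import math

def beam_angle_difference_rad(source_separation_m: float, focal_length_m: float) -> float:
    # Paraxial estimate: a lateral source offset s at the focal plane maps to
    # an output-beam angle difference of about s / f (radians).
    return source_separation_m / focal_length_m

# A shorter focal length gives a larger irradiation-angle difference, as stated above.
print(math.degrees(beam_angle_difference_rad(230e-6, 0.050)))   # ~0.26 deg at f = 50 mm
print(math.degrees(beam_angle_difference_rad(230e-6, 0.020)))   # ~0.66 deg at f = 20 mm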

 Within the optical characteristic conversion element 210 arranged in the optical characteristic conversion section 480, the first region 212 and the second region 214 have different thicknesses. When the optical path length difference between them exceeds the coherence length ΔL0 (or twice that), the coherence between the first light 202 and the second light 204 decreases.

 The condensing lens 314 focuses the first and second lights 202 and 204 onto the entrance face of the bundle fiber 1040. Here the first light 202 and the second light 204 enter different core regions within the bundle fiber 1040. The combination of these different core regions and the collimating lens 318 causes the traveling directions of the lights 202 and 204 emerging from the bundle fiber 1040 to differ.

 FIG. 14C(b) shows an optical system that differs from FIG. 14C(a) in that a phase characteristic conversion element 1050 is placed immediately before the entrance face of the bundle fiber 1040. The first and second lights 202 and 204 that pass through this phase characteristic conversion element 1050 each enter the bundle fiber 1040 with converted phase characteristics. Specifically, a diffuser plate with a fine surface structure, such as ground glass, may be used as the phase characteristic conversion element 1050. Alternatively, a grating, a hologram element, a Fresnel zone plate, or the like may be used.

 In FIG. 14C(c), unlike FIG. 14C(b), the phase characteristic conversion element 1050 is placed near the condensing plane of the first light 202 and the second light 204. In FIGS. 14C(a) and 14C(b), the first light 202 and the second light 204 mainly pass separately through different core regions within the bundle fiber 1040. In FIG. 14C(c), by contrast, the first light 202 and the second light 204 mix with each other when passing through the phase characteristic conversion element 1050. As a result, the first light 202 and the second light 204 pass through the same core regions within the bundle fiber 1040.

 FIG. 14D shows the results of an experiment actually performed to confirm the effect. The horizontal axis of FIG. 14D represents position on the measurement object, and the vertical axis represents the measured light amount. When the speckle noise is large, the measured light amount fluctuates strongly. A diffuser plate with an Ra value (the mean height of the surface irregularities) of 2.8 μm was used as the measurement object 22.

 FIG. 14D(a) shows the measured speckle pattern when the measurement object 22 was irradiated with conventional light that had passed through the core region of a single-core optical fiber or the central part of the light guide 330/332/340. In FIG. 14D(a) the light intensity fluctuates strongly, and large speckle noise appears.

 FIG. 14D(b) shows the measured speckle pattern when the optical system of FIG. 14C(b) was employed. The optical characteristic conversion element 210 was made of quartz glass, divided into 48 segments whose thicknesses differ by 1 mm each. A diffuser plate with an Ra value of 0.5 μm was used as the phase characteristic conversion element 1050. The bundle fiber 1040 was 1.5 m long, with 320 optical fibers, each with a core diameter of 230 μm and an NA of 0.22, bundled within a diameter of 5 mm. The focal lengths of the condensing lens 314 and the collimating lens 318 were both set to 50 mm. Compared with FIG. 14D(a), the optical interference noise (speckle noise) in FIG. 14D(b) is greatly reduced.

 Section 5.2 Characteristics of various optical fibers and of light passing through the core region
This section describes the characteristics of various types of optical fiber and of the light that passes through the core region 112 of an optical fiber. Section 5.3 then describes a speckle noise reduction method that exploits these characteristics. We begin with the difference between a single-mode fiber and a multimode fiber, both of which are single-core fibers.

 FIG. 15A(a) shows the characteristics of a single-mode fiber. A cladding region 114 made of a material with relatively low refractive index surrounds the core region 112, which has a relatively high refractive index. Denoting the refractive index of the core region by n1 and that of the cladding region by n2, the relation n1 > n2 holds.

 The optical amplitude distribution of light propagating in the core region 112 of this single-mode optical fiber has the characteristics shown on the right side of FIG. 15A(a): the amplitude is maximal near the center of the core region 112 and decreases toward its periphery. This amplitude distribution (amplitude distribution mode) or electric field distribution 152 of the light within the core region 112 is called the 'fundamental mode'.

 It is known that the minimum wavelength λC of light that propagates in the core region 112 while always satisfying the characteristic shown on the right side of FIG. 15A(a) obeys, with D denoting the diameter of the core region,

  λC = (πD/2.405)·√(n1² − n2²)    (Equation 21)

Accordingly, the condition for a single-mode fiber at an arbitrary wavelength λ of light propagating in the core region 112 is

  D ≤ (2.405/π)·λ/√(n1² − n2²)    (Equation 22)

In other words, when the diameter D of the core region takes a small value satisfying the condition of Equation 22, the fundamental mode is formed within the core region 112.

 However, when the core diameter D exceeds the condition of Equation 22, the light in the core region 112 can take amplitude distributions (modes) different from the one shown on the right side of FIG. 15A. An optical fiber in which amplitude distributions (other modes) besides the fundamental mode can form in the core region 112 is called a multimode fiber. Modes (amplitude distributions) other than the fundamental mode are here called "higher-order modes". Using the relationship of Equation 22, the multimode condition is

    D > 2.405·λ/(π·√(n1² − n2²))    (Equation 23)

In this embodiment, as described in Section 5.3, speckle noise (optical noise) is reduced by exploiting modes other than the fundamental mode generated in the core region 112. Equation 23 must therefore be satisfied for this embodiment to be practicable.

 FIG. 15A(b) shows how light is launched into the core region 112. The condensing lens 330 focuses the light into the core region 112. Because of the wave nature of light, the focused spot has a finite width d. Between the half aperture angle θ of the condensing lens 330 and the wavelength λ of the focused light, the relationship

    d ≈ λ/(2·sinθ)    (Equation 24)

holds. When D << d, the outer part of the focused spot spills out of the core region 112, so the light utilization efficiency drops. To secure a sufficiently high light utilization efficiency, this embodiment adds the condition

    d ≤ D    (Equation 25)

Transforming Equation 25 with Equation 24 yields the relationship

    sinθ ≥ λ/(2·D)    (Equation 26)
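
 As a quick numerical cross-check of Equations 24 to 26, the following minimal Python sketch (assuming the reconstructed diffraction-limit form d ≈ λ/(2·sinθ); the function name and the example values are illustrative, not part of the embodiment) computes the smallest sinθ that keeps the focused spot inside the core:

    import math

    def min_sin_theta(wavelength_um: float, core_diameter_um: float) -> float:
        """Smallest sin(theta) keeping the focused spot width d within the
        core diameter D (reconstructed Equation 26: sin(theta) >= lambda/(2D))."""
        return wavelength_um / (2.0 * core_diameter_um)

    # With the Section 5.4 experimental values (520 nm light, D = 600 um):
    print(min_sin_theta(0.52, 600.0))  # ~4.3e-4, far below the fiber NA of 0.22

The result suggests that, for a large-core multimode fiber, the efficiency bound of Equation 26 is easily met.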

 FIG. 15B shows the characteristics of two types of optical fiber. The right side of FIG. 15B shows the refractive index distribution 138 in the core region 112 and the cladding region 114; the vertical axis indicates the position 124 within the fiber cross section. The fiber in FIG. 15B(a) is a step-index (SI) fiber, in which the refractive index is uniform throughout the core region 112. The fiber in FIG. 15B(b) is a graded-index (GI) fiber, whose core region 112 has a high refractive index at the center that decreases toward the periphery. Either an SI or a GI fiber may be used in this embodiment.

 For both SI and GI optical fibers there is a limit θmax on the convergence angle of light that can pass through the core region 112. Light whose half aperture angle θ at the condensing lens 330 in FIG. 15A(b) exceeds θmax cannot be totally reflected at the interface between the core region 112 and the cladding region 114. As the dashed arrow in FIG. 15B shows, such light escapes from the core region 112 through the cladding region 114 and leaves the fiber.

 The sine of this limit angle, sin(θmax), is defined as the NA value of the optical fiber. It is known that this NA value satisfies

    NA = sin(θmax) = √(n1² − n2²)    (Equation 27)

Substituting Equation 27 into Equation 23 gives

    D > 2.405·λ/(π·NA)    (Equation 28)

As the condition for realizing the embodiment described in the next Section 5.3, Equation 28 may be used instead of Equation 23. The refractive index n1 of the core region 112 and the refractive index n2 of the cladding region 114 vary from fiber to fiber, whereas the NA value is generally stated in a fiber's specifications. Using Equation 28 therefore offers high versatility and simplifies the optical design.
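
 The single-mode/multimode distinction of Equations 22, 23 and 28 can also be verified numerically. The sketch below is a minimal illustration assuming the standard V-number form V = π·D·NA/λ with the 2.405 cutoff, which is equivalent to the reconstructed Equation 28; the helper name is hypothetical:

    import math

    def is_multimode(core_diameter_um: float, wavelength_um: float, na: float) -> bool:
        """True when the core can carry higher-order modes (Equation 28:
        D > 2.405*lambda/(pi*NA), i.e. V > 2.405)."""
        v_number = math.pi * core_diameter_um * na / wavelength_um
        return v_number > 2.405

    # The Section 5.4 fiber (D = 600 um, NA = 0.22) at 520 nm is strongly multimode:
    print(is_multimode(600.0, 0.52, 0.22))  # True (V is roughly 800)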

 Equation 28 gives the condition under which higher-order modes can arise in the core region 112; the condition under which only the fundamental mode arises is therefore obtained by reversing the inequality sign of Equation 28. In addition, for light to propagate in the core region 112, the incident angle θ must always satisfy

    sinθ ≤ NA    (Equation 29)

Using the relationship of Equation 29, the condition under which only the fundamental mode arises in the core region 112 is expected to be

    sinθ ≤ 2.405·λ/(π·D)    (Equation 30)

Section 5.3  Speckle noise reduction method using the mode characteristics inside an optical fiber
 FIG. 15C explains the relationship between the incident angle θ of light entering the core region 112 and the mode (electric field distribution 152) of the light propagating in the core region 112. First, the correspondence between FIG. 15C and FIG. 9A is explained. The core region 112 of the optical fiber corresponds to the predetermined optical member 90, and the interior of the core region 112 also serves as the light combining location 220. The entrance face of the optical fiber (its core region 112) corresponds to the entrance surface 92 of the predetermined optical member. When the entrance face of the fiber is cut perpendicularly, the optical axis direction of the fiber (its core region 112) is parallel to the entrance-surface normal 96. The first light 202 and the second light 204 are distinguished among incident lights having different incident angles θ on the core region 112.

 FIG. 15C(a) shows the second light 204 entering the core region 112. The second light 204 enters the core region 112 from a direction nearly parallel to the entrance-surface normal 96; consider here the case where it enters at approximately the center of the core region 112. The incident angle θ (the angle between the incident direction of the second light 204 and the entrance-surface normal 96) then satisfies the condition of Equation 30, so the electric field distribution 152 of the second light 204 in the core region 112 forms the fundamental mode (TE1, transverse electric).

 FIG. 15C(b) shows the first light 202 entering the core region 112. As explained for FIG. 9A, the first light 202 and the second light 204 travel in mutually different directions just before the entrance surface 92 of the predetermined optical member 90 (here, the optical fiber). Hence the incident angle θ of the first light 202 (the angle between its incident direction and the entrance-surface normal 96) satisfies θ ≠ 0. Consider, however, the case where the first light 202 passes through the center of the core region 112 on the entrance surface 92 of the predetermined optical member (the entrance face of the core region 112).

 When a multimode fiber is used in FIG. 15C (either the SI or the GI type may be used), Equation 28 (or Equation 23) holds for the core diameter D. To secure high light utilization efficiency, the incident angle θ of the first light 202 is set so as to satisfy Equation 26 (or Equation 25); at the same time, θ must satisfy the condition of Equation 29. The range of the incident angle θ of the first light 202 can therefore be written as

    λ/(2·D) ≤ sinθ ≤ κ·NA    (Equation 31)

In this embodiment the incident angle θ of the first light 202 is set larger than that of the second light, so that an electric field distribution mode (a higher-order mode) different from that of the second light forms in the core region 112.
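
 Under the reconstructed bounds of Equations 26 and 31, the admissible incident-angle window for the first light 202 can be evaluated as in this hedged sketch (the κ default and the function name are illustrative; the values reuse the Section 5.4 experiment):

    import math

    def first_light_angle_range_deg(wavelength_um: float, core_diameter_um: float,
                                    na: float, kappa: float = 0.5):
        """Angle window for exciting higher-order modes: the lower bound is the
        efficiency condition (Eq. 26), the upper bound the kappa*NA limit (Eq. 31)."""
        lo = wavelength_um / (2.0 * core_diameter_um)
        hi = kappa * na
        return math.degrees(math.asin(lo)), math.degrees(math.asin(hi))

    print(first_light_angle_range_deg(0.52, 600.0, 0.22))  # ~(0.025 deg, 6.3 deg)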

 As shown on the right side of FIG. 15C(b), the TE2 mode among these higher-order modes has an electric field value of 0 at the center of the cross-sectional position 132 in the core region 112, and the polarity of the electric field reverses as the cross-sectional position 132 shifts.

 The difference between the fundamental mode (TE1) and the TE2 mode in the core region 112 appears as a difference in the intensity distribution of the light emitted from the fiber. For example, when the second light 204 propagates in the fundamental mode (TE1) in the core region 112, the cross-sectional intensity distribution far from the fiber exit (the far-field pattern) is bright at the center and dark at the periphery. When light propagates in the TE2 mode in the core region 112, as the first light 202 does, it shows a doughnut-shaped intensity distribution that is relatively dark at the center and bright in a region slightly off center. The mode of the light propagating in the core region 112 can therefore be inferred by observing the intensity distribution of the light emitted from the fiber.

 The value of κ in Equation 31 is determined from the experimental results of FIG. 17A. A value of κ of 3/4 (preferably 1/2) is considered appropriate; setting κ = 1/4 further raises the probability of exciting the TE2 mode.

 FIG. 15C(c) shows the case where the incident angle θ of the third light 206 (the angle between the traveling direction of the third light 206 just before the entrance surface 92 of the predetermined optical member and the entrance-surface normal 96) is set even larger than that of the first light 202. The condition on the incident angle θ in this case is

    κ·NA < sinθ ≤ NA    (Equation 32)

The right-hand diagram of FIG. 15C(c) shows the electric field distribution 152 of the TE3 mode; in this case the electric field takes a negative value at the center of the core region 112. The condition on the diameter D of the core region 112 for the TE3 mode (a higher-order mode) to exist is given by Equation 28.

 When focused light from the condensing lens 330 enters the core region 112, the incident angle θ varies with the position at which the light passed through the lens. Light forming different modes therefore propagates through the core region 112 in a mixed state.

 FIG. 16A shows combinations of mode-forming light that is symmetric about the center of the core region 112 and mode-forming light that is asymmetric. FIG. 16A(a) shows the electric field distribution 152 of light forming the symmetric fundamental mode (TE1). FIG. 16A(b) shows the electric field distribution 152 of light forming the asymmetric TE2 mode. FIG. 16A(c) shows the electric field distribution 152 of their superposition.

 In the TE2 mode of FIG. 16A(b), the position 124 of large electric field value can lie either on the left (L) or on the right (R). The centroid of the intensity distribution of the combined light (FIG. 16A(c)) therefore differs between the left diagram (L) and the right diagram (R): on the left side (L) of FIG. 16A(c) the centroid position 116A shifts to the left of the center of the core region 112, and on the right side (R) the centroid position 116B shifts to the right. This centroid shift is not limited to light forming the TE2 mode; it occurs whenever light whose electric field distribution 152 is left-right asymmetric, such as TE4 or TE6, is superposed. The description so far has centered on SI fibers, but it applies equally to GI fibers.

 FIG. 16B shows an embodiment that reduces speckle noise by exploiting this centroid shift. A mask pattern MP is placed in the optical path of the parallel light to extract only the upper fan-shaped region A of the laser beam cross section 510. The condensing lens 330 focuses the extracted light into the core region 112 of the waveguide element (optical fiber/optical waveguide/light guide) 110. At the exit of the waveguide element 110, the centroid position 116A of the intensity distribution then appears at a position shifted from the center of the core region 112. The collimating lens 318 converts the light emitted from the waveguide element 110 into parallel light.

 When only the lower fan-shaped region B of the laser beam cross section 510 is extracted, the centroid position 116B of the intensity distribution appears in the exit face of the waveguide element 110. This centroid position 116B appears on the opposite side of the centroid position 116A with respect to the center of the core region 112.

 The traveling directions of the parallel beams after the collimating lens 318 differ slightly between A and B. Consider the case where the light A extracted from the upper fan-shaped region and the light B extracted from the lower fan-shaped region do not interfere with each other (phase asynchronization 402). When the lights A and B, traveling in different directions, simultaneously illuminate the measurement object 22 through a Köhler illumination system, speckle noise is reduced as explained for FIG. 13.

 FIG. 16C shows an embodiment that reduces speckle noise using an optical characteristics conversion element. The laser beam cross section 510 is divided angularly into eight regions about the optical axis (angular division). The lights (elements) that have passed through the different regions have mutual optical path length differences of at least (twice) the coherence length ΔL0. Eight mutually different intensity-distribution centroid positions 116 are then formed in the exit face of the waveguide element (optical fiber/optical waveguide/light guide) 110.

 Dividing the laser beam cross section 510 angularly makes asymmetric electric field modes (such as TE2) easier to form in the core region 112, which enhances the speckle noise reduction effect.

 In FIG. 16C and FIG. 16B, the condensing lens 330 focuses the light onto the entrance surface 92 of the core region 112. Experiments confirmed that when the focus position is shifted so that the spot incident on the entrance surface 92 of the core region 112 becomes larger, the speckle noise reduction effect weakens. Enlarging the spot on the entrance surface 92 increases the frequency of total reflection at the interface between the core region 112 and the cladding region 114; since each total reflection at this interface introduces a phase shift, the speckle noise reduction effect diminishes.

 To secure a large speckle noise reduction effect, the ratio of the spot size (diameter) to the core diameter D must be 1 or less; a ratio of 3/4 or less, or 1/2 or less, is desirable.

 The spot size here is defined via the effective beam diameter of the optical system. For example, it may be defined as the diameter obtained by projecting the maximum diameter of the laser beam cross section 510 that can pass through the condensing lens 330 onto the entrance surface 92 of the core region 112. The light intensity on the focal plane is usually not rectangular but has a distribution that is maximal at the center and decreases toward the periphery. In view of this, the diameter of the region of the intensity distribution on the entrance surface 92 at half the maximum intensity (the full width at half maximum) or at e⁻² of the maximum intensity (the e⁻² width) may be regarded as the spot size.
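
 The two spot-size measures just defined, the full width at half maximum and the e⁻² width, can be computed from a sampled intensity profile as in the following illustrative sketch (NumPy assumed; the Gaussian test profile is an example, not measured data):

    import numpy as np

    def spot_width(x_um: np.ndarray, intensity: np.ndarray, level: float) -> float:
        """Width of the region where the intensity is at least level * maximum."""
        above = x_um[intensity >= level * intensity.max()]
        return float(above.max() - above.min())

    x = np.linspace(-200.0, 200.0, 4001)           # position on the entrance surface [um]
    profile = np.exp(-2.0 * (x / 50.0) ** 2)       # Gaussian with 1/e^2 radius 50 um
    print(spot_width(x, profile, 0.5))             # FWHM, ~58.9 um
    print(spot_width(x, profile, np.exp(-2.0)))    # e^-2 width, ~100 um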

 If the center of the spot (laser beam cross section 510) on the entrance surface 92 deviates greatly from the center of the core region 112, the phase shift caused by total reflection at the interface between the core region 112 and the cladding region 114 becomes large. The allowable deviation between the spot center on the entrance surface 92 and the center of the core region 112 for obtaining the speckle noise reduction effect is therefore as follows: relative to the core diameter D, the deviation must be D/2 or less, and desirably D/4 (or D/8) or less.

 Light propagating in the TE3 mode in the core region 112 has an electric field distribution symmetric about the center of the core region 112, so TE3-mode light does not contribute to enlarging the centroid shift of the intensity distribution. To reduce speckle noise effectively, it is therefore desirable that all incident angles θ of the light entering the core region 112 satisfy sinθ ≤ κ·NA.

 The centroid shift in the intensity distribution of the combined light (FIG. 16A(c)) varies with the ratio of the total amplitudes of the second light 204, which forms the fundamental mode (TE1), and the first light 202, which forms the TE2 mode. Increasing the relative total amplitude of the first light 202 forming the TE2 mode increases the centroid shift in the intensity distribution of the combined light; conversely, if no first light 202 forming the TE2 mode is present, no centroid shift occurs. To reduce speckle noise effectively, the maximum incident angle θ over all light entering the core region 112 must therefore satisfy Equation 31.

 This embodiment uses the difference in incident angle (difference in traveling direction) between the first light 202 and the second light to generate different modes in the core region 112. It therefore presupposes the use of a multimode fiber (either the SI or the GI type may be used), and the core diameter D must satisfy Equation 28.

Section 5.4  Optical noise reduction effect
 FIG. 17A shows experimental results confirming the speckle noise reduction effect of this embodiment. Light of 520 nm wavelength emitted from the semiconductor laser element 500 was passed through the optical system of FIG. 16C. The light after the collimating lens 318 was reflected through 90 degrees at the surface of a diffuser plate with an average surface roughness Ra of 2.82 μm, and the reflected light was observed with a CCD camera. The multimode fiber used in the experiment was an SI type with a core diameter D = 600 μm, an NA value of 0.22, and a total length of 1.5 m.

 Speckle contrast Cs is defined as the standard deviation of the distribution obtained by dividing the fluctuation of the measured light intensity caused by speckle noise by its average value. The Cs value obtained with conventional light is plotted on the left vertical axis of FIG. 17A, and the ratio of the Cs value obtained with the optical characteristics conversion element 210 (angularly divided into eight regions) to the Cs value with conventional light is plotted on the right vertical axis.
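
 The Cs definition translates directly into code; a minimal sketch (NumPy assumed; the exponential test field merely illustrates that fully developed speckle yields Cs near 1):

    import numpy as np

    def speckle_contrast(image: np.ndarray) -> float:
        """Cs = standard deviation of the intensity divided by its mean."""
        return float(image.std() / image.mean())

    rng = np.random.default_rng(0)
    speckle = rng.exponential(scale=1.0, size=(512, 512))  # exponential intensity statistics
    print(speckle_contrast(speckle))  # close to 1.0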

 The horizontal axis of FIG. 17A is the ratio of the NA value to sinθ, where θ is the maximum incident angle of all light entering the core region 112 calculated from the effective beam diameter. The speckle noise reduction effect increases as the value on this horizontal axis increases.

 FIG. 17B shows the change in the Cs value when the number of angular divisions of the optical characteristics conversion element 210 (horizontal axis) is varied, under the same experimental conditions as FIG. 17A. An angular division of 1 corresponds to the conventional optical system without the optical characteristics conversion element 210. As the number of angular divisions increases, the amount of speckle noise decreases.

 The sine of the maximum incident angle θ of all light entering the core region 112, calculated from the effective beam diameter, is defined as the effective NA value. FIG. 17B(a) shows the experimental results for an effective NA of 1/29, and FIG. 17B(b) for 1/44. Since a fiber with an NA value of 0.22 was used in the experiment, the converted value for FIG. 17B(a) is NA/sinθ = 29 × 0.22 = 6.4, and that for FIG. 17B(b) is NA/sinθ = 44 × 0.22 = 9.7.

Chapter 6  Light source unit capable of emitting arbitrary waveforms along the time axis
Section 6.1  Optical system layout with a high-speed emission control function
 FIG. 18A shows an embodiment of the optical system inside the light source unit 2 that supports high-speed control of the emission amount. The cross section 510 of the light emitted from the semiconductor laser element 500 is elliptical. To correct this ellipticity, two cylindrical lenses 256 and 258 whose generatrices are orthogonal to each other are used: the long-axis cylindrical lens 256 collimates the emitted light in the long-axis direction, and the short-axis cylindrical lens 258 collimates it in the short-axis direction.

 The optical characteristics changing element 210, angularly divided into eight regions, is placed where the laser beam cross section 510 has become nearly circular. As FIG. 17B shows, increasing the number of angular divisions in the optical characteristics conversion element 210 increases the speckle noise reduction effect, so the number of angular divisions may be set to an arbitrary value.

 The optical path conversion prism 252 splits the light that has passed through the optical characteristics changing element 210 toward the photodetector element 250 and the waveguide element (optical fiber/optical waveguide/light guide) 110. The light input/output faces of the optical path conversion prism 252 are antireflection-coated surfaces 246, and at the total reflection surface 248 the light is totally reflected inside the prism 252.

 The condensing lens 330-1 focuses the portion of the light reflected at the partial reflection surface 254 onto the light receiving surface of the photodetector element 250, and the condensing lens 330-2 focuses the remaining light toward the waveguide element (optical fiber/optical waveguide/light guide) 110.

Section 6.2  Example of the internal structure of the light source unit
 FIG. 18B explains part of FIG. 1 in detail. In FIG. 18B, only the photodetector element 250 and the semiconductor laser element 500 inside the light source unit 2 shown in FIG. 18A are excerpted and drawn.

 The emission amount control unit 30 consists of a preamplifier circuit 716, a difference calculation circuit 712, and a current drive circuit 718. The emission amount signal detected by the photodetector element 250 is amplified by the preamplifier circuit 716. The difference calculation circuit 712 calculates the difference between the amplified emission amount signal and the signal supplied from the time-varying emission amount generation circuit 728, and outputs this difference to the current drive circuit 718. The current drive circuit 718 controls the amount of light emitted by the semiconductor laser element 500 by driving it with a current corresponding to this output signal.
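
 The control loop just described (photodetector element 250, preamplifier circuit 716, difference calculation circuit 712, current drive circuit 718) can be sketched as a discrete-time feedback loop. The gains and the simple linear device response below are illustrative assumptions, not values from the embodiment:

    def drive_loop(target_pattern, detector_gain=1.0, loop_gain=0.2):
        """Integrating feedback: the detected emission level is compared with the
        commanded time-varying pattern and the drive current is corrected."""
        current = 0.0
        emitted = []
        for target in target_pattern:
            detected = detector_gain * current   # photodetector 250 + preamplifier 716
            error = target - detected            # difference calculation circuit 712
            current += loop_gain * error         # current drive circuit 718
            emitted.append(current)
        return emitted

    print(drive_loop([1.0] * 10))  # the output converges toward the commanded level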

 The recording signal generation unit 32 consists of the time-varying emission amount generation circuit 728 and a memory circuit 726. The recording signal generation unit 32 can generate complex, arbitrary time-varying emission patterns, enabling emission based on such patterns. The memory circuit 726 stores these patterns. The information of a complex, arbitrary time-varying emission pattern may also be recorded in advance on an external storage medium such as a USB memory or a hard disk; in that case the time-varying emission pattern information is transferred into the memory circuit 726 via the external storage element drive circuit 72 under the control of the control circuit 720.

 For example, when the light source unit 2 and the measurement unit 8 perform coordinated, temporally complex processing as described later in Chapter 8, highly accurate synchronization between the light source unit 2 and the measurement unit 8 is required, and a connection terminal for this synchronization is provided. A signal line 730 for synchronization with the outside (including the measurement unit 8) is attached to this terminal, and a reference clock synchronized with the signal transmitted through this line is generated in the reference clock generation circuit 732.

 A communication control unit 740 is provided for temporally precise coordinated operation with the outside, including the measurement unit 8. This communication control unit 740 is included as part of the information transmission path 4 described in FIG. 1. The light source unit 2 described here is structured so that it can communicate information via either wired or wireless media: the wired communication execution unit 738 controls wired information communication with the outside, and the wireless communication execution unit 736 controls wireless information communication with the outside. The communication control interface processing unit 742 performs the information processing (data processing) of the content communicated over either route.

 Information communicated with the outside, including the measurement unit 8, may have a complex data structure such as the one described later with FIG. 19C; the communication information decoding unit 748 decodes such structures. The emission pattern of the semiconductor laser element 500 may be confidential, so encrypted information may be transferred. The authentication processing control unit 746 performs processing related to the transfer of encrypted information, such as authentication with the communication partner and encryption key exchange.

 The electronic circuit of FIG. 18B has been described as applied to the optical system of FIG. 18A, but it is not limited to this; it may be applied, for example, to the optical system of FIG. 9I. In that case all operations can be performed simply by replacing the semiconductor laser element 500 of FIG. 18B with the light emitting unit 470. As another embodiment, the electronic circuit shown in FIG. 18B may be applied to any light source unit 2. When the embodiments described with FIG. 18B are combined, the emission amount can be controlled with high precision by diverting part of the light emitted from the light source unit 2 to the photodetector element 250 and monitoring the emission amount.

Section 6.3  Format example for setting the emission waveform
 The complex, arbitrary time-varying emission pattern generated in the recording signal generation unit 32 is basically specified as a digital signal, a series of emission amounts at fixed time intervals, and the emission amount at each elapsed time is expressed as binary data. A CSV file format or a relational database format representing the binary data series per elapsed time may therefore be adopted for the time-varying emission pattern.

 FIG. 19A shows an embodiment of a data format that specifies the emission waveform. Several different emission patterns can be specified at the same time. A time-varying emission pattern ID (identification information) 750 is set at the beginning so that each time-varying emission pattern can be identified. Placing the time-varying emission pattern ID 750 at or near the beginning has the effect of making pattern searches easier.

 The reference clock frequency 752 specifies the reference clock frequency generated by the reference clock generation circuit 732. The data step time interval 754 indicates the time interval at which the emission amount is set for each elapsed time. The time-varying emission pattern duration 756 represents the duration of the emission pattern specified by the time-varying emission pattern ID 750. Using the duration 756, the length of the emission pattern is known in advance, which improves the convenience of preparing the emission control.

 The total step count 758 of the time-varying emission pattern represents the number of steps of the time-varying emission pattern specified by the time-varying emission pattern ID 750. The product of this total step count 758 and the data step time interval 754 corresponds to the time-varying emission pattern duration 756.

 The dynamic range (bit depth) 760 of the time-varying emission amount represents the number of bits of the binary data that specifies one emission amount; increasing this bit count allows even minute changes in the emission amount to be set. The full-range output light amount value 762 indicates the light amount output from the light source unit 2 when the binary data takes its maximum value.

 A time-varying emission pattern specified from the outside is basically identified using the time-varying emission pattern ID 750. As an alternative, this embodiment allows a time-varying emission pattern to be specified by an externally input analog signal level. For example, the external synchronization signal line can be set to an analog level, and the signal level on this line can be used to switch the time-varying emission pattern. The information used for this corresponds to the input signal level maximum value 764 and the input signal level minimum value 766 that designate the emission pattern ID. For example, the corresponding time-varying emission pattern may be emitted during the period in which the externally input analog signal level is set between the input signal level minimum value 766 and the input signal level maximum value 764 designating that emission pattern ID.

 In this embodiment, when the emission level is set to "0", the light source unit 2 is basically in a non-emitting state. However, the unit may instead emit slightly even when the emission level is set to "0". This slight emission amount can be set as the idling emission amount value 770, and when this idling emission amount value 770 is nonzero, the idling emission presence flag 768 takes the value "1".

 In the binary data series indicating the emission amount per elapsed time for each time-varying emission pattern ID 750, the binary values are arranged below the above data in order of elapsed time. The total data size 772 is then given by the product of the value set in the total step count 758 of the time-varying emission pattern and the value set in the dynamic range (bit depth) 760 of the time-varying emission amount.
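
 The FIG. 19A header fields and their two consistency rules (duration equals step count times step interval; total data size equals step count times bit depth) can be modeled as below. The dataclass and field names are hypothetical renderings of the numbered fields, not a normative encoding:

    from dataclasses import dataclass

    @dataclass
    class EmissionPatternHeader:
        pattern_id: int            # 750: time-varying emission pattern ID
        ref_clock_hz: float        # 752: reference clock frequency
        step_interval_s: float     # 754: data step time interval
        duration_s: float          # 756: time-varying emission pattern duration
        total_steps: int           # 758: total number of steps
        bits_per_sample: int       # 760: dynamic range (bit depth)
        full_range_power: float    # 762: output light amount at full-scale data
        level_max: float           # 764: max input level designating this ID
        level_min: float           # 766: min input level designating this ID
        idling_flag: int           # 768: 1 when the idling emission is nonzero
        idling_power: float        # 770: idling emission amount value

        def is_consistent(self) -> bool:
            return abs(self.duration_s - self.total_steps * self.step_interval_s) < 1e-9

        def total_data_bits(self) -> int:  # 772: total data size
            return self.total_steps * self.bits_per_sample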

Section 6.4  Example of communication control of light emission
 FIG. 19B shows an example of communication control between the inside of the light source unit 2 and the outside. The external communication partner of the light source unit 2 is here called the host. In the embodiment of FIG. 1, this host may correspond to the in-system control unit 50.

 The example shown here is one in which the host 50 first performs an emission pattern transmission 788 to the light source unit 2 in advance, after which the host 50 and the light source unit 2 synchronize their emission timing. After this emission timing synchronization, the light source unit 2 starts emitting with the designated emission pattern.

 Every communication begins with a mutual authentication period 780. First the host 50 transmits the host ID to the light source unit 2. On receiving the host ID, the light source unit 2 returns the light source unit ID to the host. Next the host 50 transmits the host-side encryption key to the light source unit 2, after which the light source unit 2 transmits the light-source-side encryption key to the host 50. Providing such a mutual authentication period 780 has the effect of enabling the light source unit 2 to communicate information with any device in the world via the Internet.

 In the following control signal communication period 790, control signal information is communicated encrypted with a special key generated from the encryption keys exchanged between the host 50 and the light source unit 2 (for example, a composite key of the host-side and light-source-side encryption keys).
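
 The text specifies only that the special key is generated from the two exchanged keys. One plausible construction, shown purely for illustration, hashes their concatenation:

    import hashlib

    def composite_key(host_key: bytes, source_key: bytes) -> bytes:
        # Illustrative assumption: a SHA-256 digest of both exchanged keys.
        return hashlib.sha256(host_key + source_key).digest()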

 In the emission pattern transmission period 788, the emission pattern information is transmitted 784 after the control signal transmission period 790. During this pattern transmission period 784 as well, the emission pattern information, encrypted with the special key generated from the keys exchanged between the host 50 and the light source unit 2 (for example, a composite of the host-side and light-source-side keys), is transmitted in the format of FIG. 19A.

 In the emission timing synchronization 798, on the other hand, the emission period 794 begins after the control signal transmission period 790, and emission with the designated pattern starts.

 FIG. 19C shows an example of the data structure in the control signal 790 transmitted during the emission timing synchronization period 798. The control signal 790 of FIG. 19C(a) has the data structure shown in FIG. 19C(b). The preamble 640 placed first is used for synchronization. The structure of the sender/receiver confirmation information 620 placed next (transmitted after the preamble 640) is shown in FIG. 19C(c): the ID (identification information) 622 of the host (sender) side is transmitted first, followed by the ID (identification information) 628 of the light source unit (receiver) side. An IP address may be used instead of the ID as the information transmitted here.

 The control signal identification information 642 stores the type of the control signal 790 being transmitted. This type information makes it possible to distinguish between the emission pattern transmission period 788 and the emission timing synchronization period 798.

 Any of the identification information registered as time-varying emission pattern IDs in FIG. 19A can be written in the emission pattern identification information 750 field. The light source unit 2 decodes the information stored in this emission pattern identification information 750 and recognizes the emission pattern to be emitted immediately afterwards.

 The emission start timing designation information 648 designates the timing at which the light source unit 2 starts emitting. Specifically, the start timing of the synchronization preamble 640, or the transmission start (or reception start) timing of this emission start timing designation information 648, is taken as the reference timing, and the delay time from this reference timing to the start of emission is designated. For example, when the light source unit 2 and the measurement unit 8 perform coordinated, temporally complex processing as described later in Chapter 8, highly accurate timing synchronization along the time axis (for example, at the 1 ns level) is required between them. Using this emission start timing designation information 648 enables highly accurate synchronized processing between the light source unit 2 and the measurement unit 8.
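
 Gathering the FIG. 19C fields, a control signal 790 could be serialized as in the sketch below. The byte widths and the fixed preamble value are assumptions made for illustration; only the field order (preamble 640, sender ID 622, receiver ID 628, type 642, pattern ID 750, start delay 648) follows the text:

    import struct

    def build_control_signal(host_id: int, source_id: int, signal_type: int,
                             pattern_id: int, start_delay_ns: int) -> bytes:
        preamble = b"\xAA" * 8                 # 640: synchronization preamble (assumed value)
        body = struct.pack(">IIHHQ",
                           host_id,            # 622: host (sender) ID
                           source_id,          # 628: light source unit (receiver) ID
                           signal_type,        # 642: control signal identification
                           pattern_id,         # 750: emission pattern identification
                           start_delay_ns)     # 648: delay from reference timing to emission
        return preamble + body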

 In the description given at the beginning with FIG. 1, a single optical apparatus 10 incorporates the light source unit 2 described in this chapter. However, the light source unit 2 described in this chapter may also exist alone and be connected to the host 50 via the Internet. Using it not as a stand-alone unit but as part of an arbitrary system on the Internet has the effect of greatly broadening the range of applications of the light source unit 2.

Chapter 7  Combining optical noise reduction with electrical noise reduction
Section 7.1  High-precision measurement methods in optical application fields
 FIG. 20A shows the high-precision measurement method of this embodiment; it depicts an excerpt of the main parts of the optical apparatus 10 described in FIG. 1. The measurement unit 8 performs optical measurement 1002 on the measurement object 22, and the signal processing unit 42 analyzes the result of this optical measurement 1002 and performs the necessary information extraction 1004.

 To perform highly accurate information extraction 1004, disturbance noise must be reduced to a minimum in both the optical measurement 1002 and the information extraction 1004 processes. Two kinds of disturbance noise, optical and electrical, are easily mixed in here. For high-precision measurement, both kinds of disturbance noise reduction, optical disturbance noise reduction and electrical disturbance noise reduction 1012, are therefore desirable.

 Using the methods already described in Chapters 2, 3 and 5, optical noise can be reduced substantially. The prior art lacked this optical noise reduction technique, so the effect of electrical disturbance noise reduction processing was difficult to exploit fully. The combination 1012 of the optical noise reduction methods already described in Chapters 2, 3 and 5 with existing electrical disturbance noise reduction methods enables highly accurate information extraction 1004.

 As a concrete method for the combination 1012 of optical noise reduction and electrical disturbance noise reduction, the processing 1000 may be performed in the following sequence (see the sketch after this list):
1. Acquire first information using the detection light from the measurement object 22.
2. Using the first information, perform optical disturbance noise reduction or electrical disturbance noise reduction 1012 on the detection signal.
3. Acquire second information using the noise-reduced signal obtained by the above processing.
These steps are performed in order.
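
 The three steps above can be expressed schematically as a pipeline; in the sketch the callables are placeholders for the concrete processing chosen from Chapters 2, 3 and 5 and from the electrical methods of FIG. 20C:

    def two_stage_measurement(detected_signal, extract_first, reduce_noise, extract_second):
        """Step 1: first information; step 2: noise reduction guided by it;
        step 3: second information from the cleaned signal."""
        first_info = extract_first(detected_signal)           # step 1
        cleaned = reduce_noise(detected_signal, first_info)   # step 2
        return extract_second(cleaned)                        # step 3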

 The extracted information obtained by this high-precision information extraction 1004 is transferred 1006 via the information transmission path 4. Examples of the transfer format 1014 used for this information transfer 1006 include:
A) reuse of existing image/video compression methods or their extended formats;
B) multiplexed transfer of packs or packets distributed by data type;
C) individual transfer of files whose relationships are linked (managed) within hypertext.
Any of these may be used.

 The various pieces of information transferred 1006 in this transfer format 1014 are stored 1010 in the collected information storage area 74, or may be displayed 1008 on the display unit 18 or the information providing unit 72.

 FIG. 20B shows a table summarizing examples of the information used in this embodiment. The information categories 1020 include unnecessary optical effects, the shape and position of the measurement object, detection of moving measurement objects, the composition ratio of constituent parts, and activities that change over time.

 また抽出情報の概要1022として、計測対象物内部での光学的作用や、計測対象物表面での光学的作用、光伝搬経路途中での光学的作用、形状の輪郭情報や特徴情報、移動体領域、固体内構成材質分析、液体内物質の含有率、生体活動などが上げられる。 The extracted information summaries 1022 include optical effects inside the measurement target, optical effects on the measurement target surface, optical effects along the light propagation path, shape contour and feature information, moving body regions, analysis of constituent materials within solids, the content rate of substances within liquids, and biological activities.

 図20Cは、計測対象物22内の計測対象領域1032毎の外乱雑音発生原因1036とその対策方法1038を一覧表として示す。電気的外乱雑音の発生原因1036は計測対象領域1032に拠らず同様に、ショットノイズや熱雑音、電磁誘導雑音などが相当する。 FIG. 20C shows a list of disturbance noise generation causes 1036 and countermeasures 1038 for each measurement target region 1032 within the measurement target object 22. The causes of electrical disturbance noise 1036 are shot noise, thermal noise, electromagnetic induction noise, etc., regardless of the measurement target area 1032.

 本実施形態における電気的外乱雑音の低減対策方法1038として、検出信号の帯域制限を行ってキャリア成分のみを抽出E1してもよい。またそれに限らず本実施形態では、ロックイン増幅(Lock-in Amplifier)E2を行ってもよい。このロックイン増幅E2では、検出信号に対する基準信号の周波数と位相の同期化が必要となる。そのため本実施形態では図20B内で時間変化を伴う活動のカテゴリ1020に含まれる各種情報を第1の抽出情報1004に利用して、上記周波数と位相の同期化を行ってもよい。 As the electrical disturbance noise reduction method 1038 in this embodiment, only the carrier component may be extracted E1 by band-limiting the detection signal. Furthermore, the present embodiment is not limited to this, and lock-in amplification (Lock-in Amplifier) E2 may be performed. This lock-in amplification E2 requires synchronization of the frequency and phase of the reference signal with respect to the detection signal. Therefore, in this embodiment, various information included in the category 1020 of activities with time changes in FIG. 20B may be used as the first extracted information 1004 to perform the frequency and phase synchronization.
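A minimal lock-in amplification E2 sketch in Python/NumPy follows; the moving-average low-pass stage and all signal parameters are assumptions for illustration, standing in for whatever narrow-band filter a real instrument would use. As the text requires, the reference must share the frequency and phase of the modulation.

```python
import numpy as np

def lock_in(detected, reference, fs, cutoff_hz=0.5):
    """Lock-in amplification sketch: mix with a synchronized reference,
    then low-pass to keep only the DC term proportional to the amplitude
    of the in-phase component.

    detected:  detection signal containing a weak modulated component
    reference: zero-mean reference, frequency- and phase-synchronized
    fs:        sampling rate [Hz]; cutoff_hz sets the low-pass bandwidth
    """
    mixed = detected * reference
    taps = max(1, int(fs / cutoff_hz))          # moving-average low-pass
    return np.convolve(mixed, np.ones(taps) / taps, mode="same")

# Usage: a 7 Hz component of amplitude 0.05 buried in strong noise
fs = 1000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
ref = np.sin(2 * np.pi * 7.0 * t)
noisy = 0.05 * ref + 0.5 * np.random.randn(t.size)
print(lock_in(noisy, ref, fs).mean())           # approaches 0.025 = 0.05 / 2
```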

 電気的外乱雑音の低減対策方法1038としてそれ以外に、デジタル化された信号のエラー訂正機能E3を使用してもよい。具体例としてはPRML(Partial Response Maximum Likelihood)などの技術を使用して、最も適正と思われる信号系列に自動補正してもよい。 As another electrical disturbance noise reduction method 1038, an error correction function E3 for the digitized signal may be used. As a specific example, a technique such as PRML (Partial Response Maximum Likelihood) may be used to automatically correct the signal to the most plausible signal sequence.

 光学的外乱雑音の発生原因1036は、計測対象物22内の計測対象領域1032に応じて若干異なる。両者に共通する光学的外乱雑音の発生原因1036として、光学的干渉雑音(optical interference noise)の影響が存在する。そしてこの光学的干渉雑音の低減対策方法が、第2章と第3章、第5章で既に説明した技術内容に対応する。 The cause 1036 of optical disturbance noise differs slightly depending on the measurement target area 1032 within the measurement target object 22. A common cause 1036 of optical disturbance noise in both cases is the influence of optical interference noise. This method of reducing optical interference noise corresponds to the technical content already explained in Chapters 2, 3, and 5.

 他の光学的外乱雑音の発生原因1036には、他の光学的作用の混入が存在する。この他の光学的作用の混入の対策方法として本実施形態例では、信号処理部42が計測信号間の演算処理(信号処理または信号解析)L3を行い、混入された他の光学的作用の影響を除去する。 Another cause 1036 of optical disturbance noise is the mixing-in of other optical effects. As a countermeasure in this embodiment, the signal processing section 42 performs arithmetic processing (signal processing or signal analysis) L3 between measurement signals to remove the influence of the other optical effects that have been mixed in.

 この場合の処理手順と、図20Aで既に説明した第2の情報抽出に至る処理手順1000との関係を説明する。
1.計測対象物22からの検出光を利用した第1の情報の獲得
に対応した処理手順では、信号処理部42が計測部8または信号受信部40から獲得した計測信号の中から他の光学的作用結果に基付く情報を抽出1004する処理が、第1の抽出情報1004に対応する。次の
2.第1の情報を利用して、検出信号内の光学的外乱雑音低減または電気的外乱雑音低減1012の処理実施が、
前記の計測信号から前記第1の抽出情報1004の成分を除去する処理に対応する。そして
3.第2の情報の獲得が、
他の光学的作用の影響が除去された後の第2の情報抽出に対応する。
The relationship between the processing procedure in this case and the processing procedure 1000 leading to the second information extraction already described with reference to FIG. 20A will be described.
1. Acquisition of first information using the detection light from the measurement target object 22: in the corresponding processing step, the signal processing section 42 extracts 1004, from the measurement signal acquired from the measurement section 8 or the signal receiving section 40, the information based on the results of the other optical effects; this corresponds to the first extracted information 1004.
2. Processing for optical disturbance noise reduction or electrical disturbance noise reduction 1012 in the detection signal using the first information: this corresponds to the process of removing the component of the first extracted information 1004 from the measurement signal.
3. Acquisition of the second information: this corresponds to the second information extraction performed after the influence of the other optical effects has been removed.

 計測対象物22内の計測対象領域1032に応じて発生する光学的外乱雑音の発生原因1036として、計測対象物22全体の総合的特性計測時には発生せず、計測対象物22内の局所的特性のみを計測する時に初めて発生する原因1036が存在する。その光学的外乱雑音発生原因1036として、計測対象となる局所領域外から混入する外乱光の影響が存在する。 Among the causes 1036 of optical disturbance noise that depend on the measurement target area 1032 within the measurement target object 22, there is a cause 1036 that does not arise when measuring the overall characteristics of the entire measurement target object 22 and appears only when measuring local characteristics within the measurement target object 22. That cause 1036 of optical disturbance noise is the influence of disturbance light entering from outside the local region to be measured.

 計測対象となる局所領域外から混入する外乱光の影響を低減する方法1038として本実施形態例では、計測対象となる局所領域に対する結像位置(imaging position)または共焦位置(confocal position)に開口制限を設けて不要な外乱光の遮光L4を行ってもよい。それに拠り、例えば計測対象物22内部の3次元計測を行う場合に、計測対象となる局所領域以外の深さ位置からの検出光を外乱光として誤計測するのを防止できる。 In this embodiment, as a method 1038 for reducing the influence of disturbance light entering from outside the local region to be measured, an aperture restriction may be provided at the imaging position or confocal position corresponding to the local region to be measured, thereby blocking L4 unnecessary disturbance light. As a result, when performing three-dimensional measurement inside the measurement target object 22, for example, detection light coming from depth positions other than the local region to be measured can be prevented from being erroneously measured as disturbance light.

 7.2節 ロックイン増幅技術を応用した各種の実施形態例
 図21Aは、本実施形態の一例を示す。図21Aの実施形態例では、計測部8(または信号受信部40)から得られる計測信号の中から、第1の抽出情報1218を抽出する。光学装置10内の計測部8から得られた時系列的分光特性信号または時系列的画像信号、データキューブ信号は、信号受信部40へ転送される。信号受信部40内での情報抽出1004として、この入力信号から所定の時系列信号1208を部分抽出1202する。
Section 7.2 Various Embodiments Applying Lock-in Amplification Technology FIG. 21A shows an example of this embodiment. In the embodiment shown in FIG. 21A, first extraction information 1218 is extracted from the measurement signal obtained from the measurement unit 8 (or signal reception unit 40). A time-series spectral characteristic signal, a time-series image signal, or a data cube signal obtained from the measuring section 8 in the optical device 10 is transferred to the signal receiving section 40 . As information extraction 1004 within the signal receiving unit 40, a predetermined time series signal 1208 is partially extracted 1202 from this input signal.

 信号受信部40内で部分抽出1202された所定の時系列信号1208は信号処理部42へ転送される。そしてこの信号処理部42では上記所定の時系列信号1208を利用して基準信号抽出1210を行う。そして更にこの基準信号から直流成分を除去1212し、交流成分のみの形態を第1の抽出情報1218として利用する。 A predetermined time-series signal 1208 partially extracted 1202 within the signal receiving unit 40 is transferred to the signal processing unit 42. The signal processing unit 42 performs reference signal extraction 1210 using the predetermined time series signal 1208. Then, the DC component is further removed 1212 from this reference signal, and the form containing only the AC component is used as first extracted information 1218.

 それと並行して信号受信部40から信号処理部42に転送された時系列的分光特性信号または時系列的画像信号、データキューブ信号が、上記の第1の抽出情報1218と掛け算1230される。ここで信号処理部42に転送された信号が時系列的分光特性信号の場合は、測定波長毎の掛け算が行われる。また信号処理部42に転送された信号が時系列的画像信号の場合は、画素毎の掛け算が行われる。またデータキューブ信号が転送された場合には、各画素内の測定波長毎の掛け算が行われる。 In parallel, the time-series spectral characteristic signal, time-series image signal, or data cube signal transferred from the signal receiving section 40 to the signal processing section 42 is multiplied 1230 by the first extraction information 1218 described above. If the signal transferred to the signal processing section 42 is a time-series spectral characteristic signal, the multiplication is performed for each measurement wavelength. If the transferred signal is a time-series image signal, the multiplication is performed for each pixel. If a data cube signal is transferred, the multiplication is performed for each measurement wavelength within each pixel.

 この掛け算の結果は、超狭帯域ローパスフィルタの働きで波長毎または画素毎の時系列的直流成分の抽出1236が行われ、所定信号抽出部680内で第2の抽出情報1018が生成される。この掛け算の結果の処理方法として他に、帯域制限を行って第1の抽出情報1218に対応したキャリア成分のみを抽出E1してもよい。但し帯域制限に拠るキャリア成分抽出E1よりもロックイン増幅E2して直流成分のみを抽出すると直流成分抽出効果が高く、第2の抽出情報1018の精度が向上する。 From the result of this multiplication, the time-series DC component is extracted 1236 for each wavelength or each pixel by the action of an ultra-narrow band low-pass filter, and the second extraction information 1018 is generated in the predetermined signal extraction section 680. As another way of processing the result of this multiplication, band limitation may be performed to extract E1 only the carrier component corresponding to the first extraction information 1218. However, extracting only the DC component by lock-in amplification E2 yields a higher DC component extraction effect than carrier component extraction E1 based on band limitation, and improves the accuracy of the second extraction information 1018.
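The per-wavelength or per-pixel multiplication 1230 followed by the DC extraction 1236 can be sketched with NumPy broadcasting over a data cube; taking the mean over time here is an idealized stand-in for the ultra-narrow band low-pass filter, and the array shapes and signal values are assumptions.

```python
import numpy as np

def demodulate_cube(cube, first_info):
    """Per-pixel, per-wavelength demodulation sketch.

    cube:       time-series data cube of shape (T, H, W, n_wavelengths)
    first_info: zero-mean reference (first extraction information 1218),
                shape (T,)
    Returns an (H, W, n_wavelengths) array of time-series DC components,
    playing the role of the second extraction information 1018.
    """
    mixed = cube * first_info[:, None, None, None]   # multiplication 1230
    return mixed.mean(axis=0)                        # idealized filter 1236

T, H, W, L = 2000, 4, 4, 8
t = np.arange(T)
ref = np.sign(np.sin(2 * np.pi * t / 50.0))          # zero-mean square wave
cube = 1.0 + 0.02 * ref[:, None, None, None] + 0.2 * np.random.randn(T, H, W, L)
print(demodulate_cube(cube, ref).mean())             # approaches 0.02
```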

 図21Bは、電気的外乱雑音低減可能な本実施形態の他の実施形態例を示す。図21Aでは第1の抽出情報1218を、計測部8からの計測信号から情報抽出1004していた。それと比べて図21Bに示す他の実施形態例では第1の抽出情報1218を、発光量制御部30から得られる所定の時系列信号1208から情報抽出1004している。この発光量制御部30から得られる所定の時系列信号1208として例えば、図18Bの発光量出力回路702からの出力信号を利用しても良い。 FIG. 21B shows another embodiment of this embodiment capable of reducing electrical disturbance noise. In FIG. 21A, first extracted information 1218 is extracted 1004 from the measurement signal from the measurement unit 8. In contrast, in another embodiment shown in FIG. 21B, first extraction information 1218 is extracted 1004 from a predetermined time-series signal 1208 obtained from the light emission amount control unit 30. As the predetermined time series signal 1208 obtained from the light emission amount control section 30, for example, an output signal from the light emission amount output circuit 702 in FIG. 18B may be used.

 例えば外乱光が混入し易い環境で計測した場合、外乱光の影響で計測精度が大幅に低下する。この場合には発光部2から放射される所定光230の光量に変調を加え、図21Bのようにその変調光に対応した信号成分のみを第2の抽出情報1018として情報抽出1004すると、計測精度が大幅に向上する。 For example, when measuring in an environment where ambient disturbance light easily mixes in, the measurement accuracy drops significantly under its influence. In this case, if the amount of the predetermined light 230 emitted from the light emitting section 2 is modulated and, as shown in FIG. 21B, only the signal component corresponding to that modulated light is extracted 1004 as the second extraction information 1018, the measurement accuracy improves significantly.

 図21Cは図21Bの応用実施形態例として、パルス光を計測対象物22に照射して電気的外乱雑音を低減する方法を示す。ここで信号処理部42から発光量制御部30に送信される発光量変調信号1228は、矩形状のパルス波形の形態を取っても良い。 FIG. 21C shows a method of reducing electrical disturbance noise by irradiating the measurement object 22 with pulsed light as an example of an applied embodiment of FIG. 21B. Here, the light emission amount modulation signal 1228 transmitted from the signal processing section 42 to the light emission amount control section 30 may take the form of a rectangular pulse waveform.

 図21Cの応用実施形態例では、データ処理ブロック630内の時間変化成分抽出処理部700で基準パルスを発生1220する。そしてパルスカウンタ1222内では、前記の基準パルス1220の所定パルス発生回毎に1回ずつパルスが発生する。そしてこのパルスカウンタ1222が出力するパルスが、第1の抽出情報1218として利用される。この第1の抽出情報1218は、発光量制御部30内の発光量変調信号1228として使われ、この発光量変調信号1228に合わせて計測対象物22への照射光量が矩形のパルス状に変化する。またこの第1の抽出情報1218(パルスカウンタ1222の出力パルス)が同時に、波長毎または画素毎の掛け算回路1230にも転送される。このように図21Cに示した応用実施形態例では、同一の第1の抽出情報1218が複数目的で同時に利用される。 In the example application embodiment of FIG. 21C, a reference pulse is generated 1220 in the time-varying component extraction processing section 700 within the data processing block 630. In the pulse counter 1222, a pulse is generated once every predetermined number of occurrences of the reference pulse 1220. The pulses output by this pulse counter 1222 are used as the first extraction information 1218. This first extraction information 1218 is used as the light emission amount modulation signal 1228 in the light emission amount control section 30, and the amount of light irradiated onto the measurement object 22 changes in a rectangular pulse shape in accordance with this light emission amount modulation signal 1228. This first extraction information 1218 (the output pulses of the pulse counter 1222) is also simultaneously transferred to the multiplication circuit 1230 for each wavelength or each pixel. In this way, in the example application embodiment shown in FIG. 21C, the same first extraction information 1218 is used for multiple purposes simultaneously.

 計測部8から得られた時系列的分光特性信号または時系列的画素信号、データキューブ信号は、時間変化成分抽出処理部700内で発生した基準パルス1220に同期1224して検波され、時間変化成分抽出処理部700内の波長毎または画素毎の掛け算回路1230内に転送される。 The time-series spectral characteristic signal, time-series pixel signal, or data cube signal obtained from the measurement section 8 is detected in synchronization 1224 with the reference pulse 1220 generated in the time-varying component extraction processing section 700, and is transferred to the per-wavelength or per-pixel multiplication circuit 1230 within the time-varying component extraction processing section 700.

 図21Cのように第1の抽出情報1218がパルス状の矩形波形をする場合には、非常に簡単な回路で波長毎または画素毎の掛け算回路1230を構成できる。この波長毎または画素毎の掛け算回路1230内は、インバータ(極性反転)回路1226とスイッチ1232のみから構成される。そしてパルスカウンタ1222から与えられる第1の抽出情報1218に応じて、波長毎または画素毎の時系列的直流成分抽出回路(超狭帯域ローパスフィルタ)1236に送信される信号極性が切り替わる(第1の抽出情報1218に同期した信号極性切り替えに関して後述する)。 When the first extraction information 1218 has a pulsed rectangular waveform as shown in FIG. 21C, the multiplication circuit 1230 for each wavelength or each pixel can be configured as a very simple circuit. This per-wavelength or per-pixel multiplication circuit 1230 consists only of an inverter (polarity inversion) circuit 1226 and a switch 1232. In accordance with the first extraction information 1218 given from the pulse counter 1222, the polarity of the signal sent to the per-wavelength or per-pixel time-series DC component extraction circuit (ultra-narrow band low-pass filter) 1236 is switched (the signal polarity switching synchronized with the first extraction information 1218 will be described later).
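Functionally, the inverter 1226 plus switch 1232 amounts to multiplying each sample by +1 or -1 according to the rectangular first extraction information 1218; the following sketch models only that equivalence, with made-up signal values.

```python
import numpy as np

def polarity_multiplier(signal, gate):
    """Model of the inverter 1226 + switch 1232 multiplier: each sample
    passes unchanged while the rectangular reference is high and is
    polarity-inverted while it is low (multiplication by +1/-1)."""
    return np.where(gate > 0, signal, -signal)

gate = np.tile(np.r_[np.ones(10), np.zeros(10)], 50)   # rectangular reference
sig = 0.1 * (2.0 * gate - 1.0) + 0.3 * np.random.randn(gate.size)
demodulated = polarity_multiplier(sig, gate)
print(demodulated.mean())   # approaches 0.1 after the low-pass stage 1236
```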

 なお図21Cに示した応用実施形態例は、測長や3次元画像計測(3次元映像計測)に使用しても良い。光はおよそ3×10^8 m/sの速度で空気中を伝搬するので、1nsのパルス幅期間で光はおよそ30cm進む。そして遠方に配置された計測対象物22の表面で反射した光が戻ってくるまでの時間を計測することで、計測対象物22までの距離を計測(測長)できる。例えばパルス幅1nsでデューティ比50%のパルスを基準パルス1220として使用し、パルスカウント値1222に応じた反射光強度変化を測定すると、30cmの空間的距離分解能で測長できる。さらに計測対象物22からの反射光を撮像素子300で画像信号として計測すると、3次元画像計測(3次元映像計測)が可能となる。 Note that the example application embodiment shown in FIG. 21C may also be used for length measurement or three-dimensional image (video) measurement. Light propagates through air at a speed of approximately 3×10^8 m/s, so within a pulse width of 1 ns, light travels approximately 30 cm. By measuring the time until the light reflected from the surface of a distant measurement object 22 returns, the distance to the measurement object 22 can be measured (length measurement). For example, if a pulse with a pulse width of 1 ns and a duty ratio of 50% is used as the reference pulse 1220 and the change in reflected light intensity is measured according to the pulse count value 1222, length can be measured with a spatial distance resolution of 30 cm. Furthermore, if the reflected light from the measurement object 22 is measured as an image signal by the image sensor 300, three-dimensional image (video) measurement becomes possible.
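The distance arithmetic in this paragraph can be verified in a few lines; the echo delay below is a made-up example, and c is approximated as 3×10^8 m/s.

```python
C = 2.998e8                        # approximate speed of light in air [m/s]

pulse_width = 1e-9                 # 1 ns reference pulse width
print(C * pulse_width)             # ~0.30 m: light travels about 30 cm per ns

# A reflected echo travels out and back, so distance = c * delay / 2
echo_delay = 200e-9                # hypothetical round-trip delay of 200 ns
print(C * echo_delay / 2.0)        # ~30 m to the measurement target
```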

 具体的には上記の基準パルス1220を固定し、パルスカウント値1222に応じた間欠タイミングで発光量制御部30からのパルス発光量(発光量変調信号)1223を制御する。それと同時に上記の基準パルス1220に同期して撮像素子300からの画素毎の出力信号1200を時間変化成分抽出処理部700に送信する。 Specifically, the above reference pulse 1220 is fixed, and the pulsed light emission amount (light emission amount modulation signal) 1223 from the light emission amount control section 30 is controlled at intermittent timing according to the pulse count value 1222. At the same time, the output signal 1200 for each pixel from the image sensor 300 is transmitted to the time-varying component extraction processing section 700 in synchronization with the reference pulse 1220 described above.

 レーザパルスを用いた測長方法自体は、車の自動運転に使用するLiDAR(Light Detection and Ranging)などに応用されている。しかし従来技術で画像測定に利用を試みると、レーザ光の可干渉性に起因するスペックルノイズの影響で計測精度が大幅に低下する。しかし第12章で説明した空間的な干渉ノイズ低減化方法を併用する事で、高精度の測長や3次元画像(映像)計測が可能となる。 The length measurement method itself using laser pulses is applied to LiDAR (Light Detection and Ranging) used for self-driving cars. However, when conventional techniques are used for image measurement, the measurement accuracy is significantly reduced due to speckle noise caused by the coherence of laser light. However, by using the spatial interference noise reduction method explained in Chapter 12, highly accurate length measurement and three-dimensional image (video) measurement becomes possible.

 7.3節 電荷蓄積形信号受信部の構造 
図22は、計測部8として電荷蓄積形信号受信部を使用した場合の特徴説明図である。分光特性信号や画像信号、データキューブ信号の多くは、時系列的に連続して信号が得られず、計測期間1258とデータ転送期間1254に時分割される(図23A(a)と図23B(a)参照)。すなわち計測期間1258で、計測データが電荷量記憶部1170に蓄積される。そしてデータ転送期間1254に、その蓄積データがデータ転送部1180を介して信号処理部42に転送される。
Section 7.3 Structure of charge storage type signal receiver
FIG. 22 is an explanatory diagram of the characteristics obtained when a charge accumulation type signal receiving section is used as the measurement section 8. Most spectral characteristic signals, image signals, and data cube signals cannot be obtained continuously in time series; they are time-divided into a measurement period 1258 and a data transfer period 1254 (see FIGS. 23A(a) and 23B(a)). That is, during the measurement period 1258, measurement data is accumulated in the charge amount storage section 1170. Then, during the data transfer period 1254, the accumulated data is transferred to the signal processing section 42 via the data transfer section 1180.

 図22では、有機半導体を用いた分光特性信号の生成原理例を示している。各有機半導体層1102、1104、1106ではそれぞれ検出光1100に対する吸収波長が異なる。つまり検出光1100の入射側に最も近い第1の有機半導体層1102では、所定の波長範囲の検出光1100のみが吸収される。そしてこの第1の有機半導体層1102での吸収を逃れた他の波長光を含む検出光1100のみが、この第1の有機半導体層1102を通過する。そして第2の有機半導体層1104は、第1の有機半導体層1102での吸収を逃れた他の波長光の中で他の波長範囲の検出光1100を吸収する。 FIG. 22 shows an example of the principle of generating a spectral characteristic signal using an organic semiconductor. Each of the organic semiconductor layers 1102, 1104, and 1106 has a different absorption wavelength for the detection light 1100. In other words, the first organic semiconductor layer 1102 closest to the incident side of the detection light 1100 absorbs only the detection light 1100 in a predetermined wavelength range. Then, only the detection light 1100 containing light of other wavelengths that has escaped absorption by the first organic semiconductor layer 1102 passes through the first organic semiconductor layer 1102. The second organic semiconductor layer 1104 absorbs the detection light 1100 in other wavelength ranges among the other wavelengths of light that have escaped absorption in the first organic semiconductor layer 1102.

 また有機半導体層1102、1104、1106はそれぞれ、一対の透明導電膜に挟まれ、さらに透明導電膜間を透明絶縁層1124、1126が仕切っている。さらに透明導電膜の配置で、画素領域1152、1154が規定される。すなわち図22の左図内の左側が第1の画素領域1152を形成し、右側が第2の画素領域1154を形成する。 Furthermore, the organic semiconductor layers 1102, 1104, and 1106 are each sandwiched between a pair of transparent conductive films, and the transparent conductive films are further partitioned by transparent insulating layers 1124 and 1126. Further, pixel regions 1152 and 1154 are defined by the arrangement of the transparent conductive films. That is, the left side in the left diagram of FIG. 22 forms the first pixel area 1152, and the right side forms the second pixel area 1154.

 有機半導体層1102、1104、1106内で所定波長範囲の検出光1100を吸収すると、有機半導体層1102、1104、1106内で電荷が発生し、検出信号として利用される。例えば第1の有機半導体層1102内の左側に検出光1100が入射し、第1の有機半導体層1102内で吸収されると、第1の有機半導体層1102内で電荷が発生する。第1の有機半導体層1102内に隣接する下側の透明導電膜1112はグランドラインに接続されているため、第1の有機半導体層1102内で発生した電荷は透明導電膜1142を経由してプリアンプ1150-6に入る。 When the detection light 1100 in a predetermined wavelength range is absorbed within the organic semiconductor layers 1102, 1104, and 1106, charges are generated within those layers and are used as detection signals. For example, when the detection light 1100 enters the left side of the first organic semiconductor layer 1102 and is absorbed there, charges are generated within the first organic semiconductor layer 1102. Since the lower transparent conductive film 1112 adjacent to the first organic semiconductor layer 1102 is connected to the ground line, the charges generated within the first organic semiconductor layer 1102 enter the preamplifier 1150-6 via the transparent conductive film 1142.

 プリアンプ1150-6に入った電荷は、所定期間(計測期間1258の間)コンデンサ1160-6に蓄えられる。このように電荷蓄積形信号受信部40の特徴として、所定期間内(計測期間1258の間)は連続してコンデンサ1160-6内に電荷が蓄積される。このコンデンサ1160-6に蓄えられた電荷量は、所定期間終了時に電荷量記憶部1170-2に転送された後、電荷量が放電される。その後、次の所定期間(計測期間1258の間)でコンデンサ1160-6に再度電荷が蓄えられる。 The charge that has entered the preamplifier 1150-6 is stored in the capacitor 1160-6 for a predetermined period (during the measurement period 1258). As described above, a feature of the charge accumulation type signal receiving section 40 is that charge is continuously accumulated in the capacitor 1160-6 within a predetermined period (during the measurement period 1258). The amount of charge stored in this capacitor 1160-6 is transferred to the amount of charge storage section 1170-2 at the end of a predetermined period, and then the amount of charge is discharged. Thereafter, charge is again stored in the capacitor 1160-6 during the next predetermined period (during the measurement period 1258).
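The measure-then-transfer cycle described here can be modeled as a toy integration loop; the numeric values, time step, and photocurrent function are all assumptions for illustration.

```python
import math

def accumulation_cycles(photocurrent, t_measure, t_transfer, n_cycles, dt=1e-4):
    """Toy model of the charge-accumulation timing: during each measurement
    period 1258 the photocurrent integrates on the capacitor 1160; at the
    end the accumulated charge is handed to the charge amount storage
    section 1170 and the capacitor is discharged; no charge is collected
    during the data transfer period 1254."""
    packets, t = [], 0.0
    for _ in range(n_cycles):
        q = 0.0
        for _ in range(int(t_measure / dt)):   # integration on the capacitor
            q += photocurrent(t) * dt
            t += dt
        packets.append(q)                      # transfer, then discharge
        t += t_transfer                        # idle during data transfer
    return packets

print(accumulation_cycles(lambda t: 1e-9 * (1.0 + math.sin(6.28 * t)),
                          t_measure=0.05, t_transfer=0.02, n_cycles=3))
```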

 例えば図12Aや図12Bの分光素子(ブレーズドグレーティング)320を用いて検出光を測定波長毎に分離する場合には、撮像素子300にラインセンサや2次元配列センサを使用する。この場合でも図22と同様に、計測期間1258とデータ転送期間1254に時分割されて計測信号が出力される。 For example, when the detection light is separated for each measurement wavelength using the spectroscopic element (blazed grating) 320 of FIG. 12A or FIG. 12B, a line sensor or a two-dimensional array sensor is used as the image sensor 300. In this case as well, similarly to FIG. 22, the measurement signal is output time-divided into the measurement period 1258 and the data transfer period 1254.

 従って本実施形態例では、計測期間1258とデータ転送期間1254が時分割された計測信号に適した検出信号の帯域制限方法E1またはロックイン増幅方法E2、デジタル化された信号のエラー訂正方法E3を提供する(図20C内記号290の列を参照)。特に微弱な検出光を用いて計測する場合には相対的に計測期間1258が長くなり、帯域制限E1やロックイン増幅E2を用いた計測精度が低下し易い。特に図21Aで例示したように計測部8から得られた時系列的分光特性信号や時系列的画像信号、データキューブ信号から第1の抽出情報1218を情報抽出1004する場合、計測期間1258が相対的に長くなると、第1の抽出情報1218の抽出精度が低下し易い。 Therefore, this embodiment provides a detection signal band limiting method E1, a lock-in amplification method E2, and a digitized signal error correction method E3 suited to measurement signals that are time-divided into the measurement period 1258 and the data transfer period 1254 (see the column of symbol 290 in FIG. 20C). In particular, when measuring with weak detection light, the measurement period 1258 becomes relatively long, and the measurement accuracy obtained with the band limitation E1 or the lock-in amplification E2 tends to deteriorate. Especially when the first extraction information 1218 is extracted 1004 from the time-series spectral characteristic signals, time-series image signals, or data cube signals obtained from the measurement section 8 as illustrated in FIG. 21A, a relatively long measurement period 1258 tends to lower the extraction accuracy of the first extraction information 1218.

 7.4節 電荷蓄積形信号の信号処理形態例
図23Aは、相対的に長い計測期間1258に対して精度良く第1の抽出情報1218を情報抽出1004する方法を示す。ここでは図21Aで説明したロックイン増幅回路E2を使用している。
Section 7.4 Example of Signal Processing Form of Charge Accumulation Signal FIG. 23A shows a method of extracting first extraction information 1218 with high precision 1004 for a relatively long measurement period 1258. Here, the lock-in amplifier circuit E2 described in FIG. 21A is used.

 高等生物の生体内では、血管が通っている。そして脈動(血流量1252の変化)に合わせて血管の膨張と収縮が繰り返される。この血管周辺に近赤外光を照射すると、血管の膨張と収縮に同期して近赤外光の吸収量が時系列的に変化する。図23Aは、血流量1252の変化に応じた近赤外光の吸収量変化を第1の抽出情報1218として情報抽出1004までの処理方法を示している。 Blood vessels run within the bodies of higher organisms. The blood vessels expand and contract repeatedly in accordance with the pulsations (changes in blood flow 1252). When near-infrared light is irradiated around this blood vessel, the amount of near-infrared light absorbed changes over time in synchronization with the expansion and contraction of the blood vessel. FIG. 23A shows a processing method up to information extraction 1004 in which changes in near-infrared light absorption amount in response to changes in blood flow 1252 are used as first extraction information 1218.

 図23A(a)に示すように、計測期間1258とデータ転送期間1254が時分割された計測信号が信号処理部42に入る。図23A(b)は、信号受信部40から送られる時分割された計測信号形態例を示す。また図23A(b)では、近赤外光の吸収量変化として得られた血流量1252を縦軸に取っている。また図23Aの横軸は、経過時間1250を示す。
 図23A(b)に示すようにデータ転送期間1254内は、電荷蓄積形信号受信部(計測部8)からの計測信号が得られない。このため図23A(b)に示すように、計測期間1258から間欠的に階段状の計測信号のみが得られる。信号処理部42は図23A(c)に示すように、この間欠的に階段状の計測信号に対してサンプルホールド方法を用いて間欠的計測信号を連続化する。この段階では図23A(c)に示すように、計測信号は階段状に不連続的に変化する。
As shown in FIG. 23A(a), a measurement signal time-divided into the measurement period 1258 and the data transfer period 1254 enters the signal processing section 42. FIG. 23A(b) shows an example of the time-divided measurement signal format sent from the signal receiving section 40. In FIG. 23A(b), the vertical axis represents the blood flow rate 1252 obtained as the change in the near-infrared light absorption amount, and the horizontal axis in FIG. 23A indicates the elapsed time 1250.
As shown in FIG. 23A(b), no measurement signal is obtained from the charge storage type signal receiving section (measuring section 8) during the data transfer period 1254. Therefore, as shown in FIG. 23A(b), only step-like measurement signals are obtained intermittently from measurement period 1258. As shown in FIG. 23A(c), the signal processing unit 42 converts the intermittently step-like measurement signal into a continuous one using a sample hold method. At this stage, as shown in FIG. 23A(c), the measurement signal changes discontinuously in a stepwise manner.

 図23A(c)の階段状に不連続的に変化する計測信号に対して、最適化された多重並列バンドパスフィルタを用いて平滑化し、図23A(d)の波形を得る。更に図23A(e)に示すように図23A(d)の波形内の直流成分を除去して、第1の抽出情報1218を生成する。 The measurement signal that changes discontinuously in a stepwise manner as shown in FIG. 23A(c) is smoothed using an optimized multiple parallel band-pass filter to obtain the waveform of FIG. 23A(d). Furthermore, as shown in FIG. 23A(e), the DC component in the waveform of FIG. 23A(d) is removed to generate the first extraction information 1218.
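The FIG. 23A chain (sample-hold, smoothing, DC removal) can be sketched as below; the hold factor and the simple moving-average filter are assumptions standing in for the optimized multiple parallel band-pass filter.

```python
import numpy as np

def first_info_from_staircase(samples, hold_factor, smooth_taps):
    """Sketch of the FIG. 23A processing chain.

    samples:     one value per measurement period 1258 (FIG. 23A(b))
    hold_factor: output samples spanned by each held value
                 (sample-and-hold continuation, FIG. 23A(c))
    smooth_taps: length of the smoothing filter standing in for the
                 optimized multiple parallel band-pass filter (FIG. 23A(d))
    Returns the zero-mean waveform used as the first extraction
    information 1218 (FIG. 23A(e)).
    """
    held = np.repeat(samples, hold_factor)               # sample-and-hold
    kernel = np.ones(smooth_taps) / smooth_taps
    smoothed = np.convolve(held, kernel, mode="same")    # smoothing
    return smoothed - smoothed.mean()                    # DC removal

pulses = 1.0 + 0.1 * np.sin(np.linspace(0.0, 12.0 * np.pi, 120))
print(first_info_from_staircase(pulses, hold_factor=8, smooth_taps=16)[:5])
```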

 図23Bは、図21Aの信号処理(データ処理)方法を用いて第2の抽出情報1018の情報抽出1004に至るまでの信号処理(データ処理)過程を示す。図23B(a)は、信号受信部40から送られる計測信号の形態例を示す。計測期間1258とデータ転送期間1254が時分割されて転送される。図23B(b)は、分光特性信号内の測定波長毎の時系列的データあるいは撮像素子内画素毎の時系列的データ、データキューブに含まれる撮像素子内の画素毎の分光特性信号内の測定波長毎の時系列的データ1200を表す。データ転送期間1254は計測されないので、間欠的な矩形の(パルス状の)時系列的データとして送られて来る。 FIG. 23B shows the signal processing (data processing) steps leading to the information extraction 1004 of the second extraction information 1018 using the signal processing (data processing) method of FIG. 21A. FIG. 23B(a) shows an example of the format of the measurement signal sent from the signal receiving section 40; the measurement period 1258 and the data transfer period 1254 are time-divided and transferred. FIG. 23B(b) shows the time-series data 1200 for each measurement wavelength in the spectral characteristic signal, for each pixel in the image sensor, or for each measurement wavelength in the per-pixel spectral characteristic signal contained in the data cube. Since no measurement takes place during the data transfer period 1254, the data arrive as intermittent rectangular (pulse-like) time-series data.

 図23B(c)は、図23A(e)で情報抽出1004された第1の抽出情報1218の波形を示す。そして図23B(d)は、図23B(b)と図23B(c)との間の時系列毎の掛け算の結果を表す。この図23B(d)の波形は、波長毎または画素毎の掛け算処理部1230の出力波形に一致する。図23B(c)内では“負の値”を取る時期が有るため、図23B(d)の波形内でも“負の値”を取る期間が生まれる。 FIG. 23B(c) shows the waveform of the first extracted information 1218 extracted 1004 in FIG. 23A(e). FIG. 23B(d) represents the result of multiplication for each time series between FIG. 23B(b) and FIG. 23B(c). The waveform in FIG. 23B(d) matches the output waveform of the multiplication processing unit 1230 for each wavelength or pixel. Since there is a period in which the waveform in FIG. 23B(c) takes a "negative value," there is also a period in which the waveform in FIG. 23B(d) takes a "negative value."

 図23B(e)は、図21Aで情報抽出1004された第2の抽出情報1018の結果を示す。図21A内の波長毎または画素毎の時系列的直流成分抽出部(超狭帯域ローパスフィルタ)1236の作用を利用して、図23B(d)の離散的信号の直流成分を抽出すると、図23B(e)のような経過時間1250に依存しない一定値が得られる。 FIG. 23B(e) shows the result of the second extracted information 1018 extracted 1004 in FIG. 21A. When the DC component of the discrete signal in FIG. 23B(d) is extracted using the action of the time-series DC component extraction unit (ultra-narrow band low-pass filter) 1236 for each wavelength or pixel in FIG. 21A, the result is as shown in FIG. 23B. A constant value that does not depend on the elapsed time 1250 as shown in (e) can be obtained.

 7.5節 実証実験結果の一例
図21Aで説明した回路構成を用いて計測した、脈動を持った血流中の分光特性測定結果を以下に説明する。図24Aは、計測に使用した光源部2内の光学配置を示す。図4で説明した光学系と図18Aで説明した光学系を、ダイクロイックミラー350で合成した。ここで発光波長1330nmのレーザ光を利用した。
Section 7.5 Example of Demonstration Experiment Results The results of measuring spectral characteristics in a pulsating blood flow using the circuit configuration described in FIG. 21A will be described below. FIG. 24A shows the optical arrangement inside the light source section 2 used for measurement. The optical system described in FIG. 4 and the optical system described in FIG. 18A were combined using a dichroic mirror 350. Here, a laser beam with an emission wavelength of 1330 nm was used.

 コア径0.6mmのSI型マルチモード単芯ファイバSFが、この合成光を人差し指先端360まで誘導した。別のコア径0.6mmのSI型マルチモード単芯ファイバSFが、この人差し指先端360を通過した光(人差し指先端360内散乱光)を計測部8内の分光器に誘導した。 An SI type multimode single-core fiber SF with a core diameter of 0.6 mm guided this combined light to the tip 360 of the index finger. Another SI type multimode single-core fiber SF with a core diameter of 0.6 mm guided the light that passed through the index finger tip 360 (light scattered within the index finger tip 360) to the spectrometer in the measurement unit 8.

 図24B(a)は、人差し指先端360への照射光の分光特性を示す。レーザ光の発光波長(1330nm)での発光光量が圧倒的に大きい。なおハロゲンランプHLからの放射光の長波長側は、ダイクロイックミラー350で遮光されている。 FIG. 24B(a) shows the spectral characteristics of the light irradiated onto the index finger tip 360. The amount of light emitted at the laser light emission wavelength (1330 nm) is overwhelmingly large. Note that the long wavelength side of the emitted light from the halogen lamp HL is blocked by a dichroic mirror 350.

 図24B(b)は、人差し指先端360を透過した透過光(人差し指先端360内部で光散乱を繰り返した後に人差し指先端360の反対側から出た光)の光透過率の分光特性を示す。
光透過率で見ると、1330nm波長光(レーザ光)に対しても充分な光量を分光器で検出できた。
FIG. 24B(b) shows the spectral characteristics of the light transmittance of transmitted light that has passed through the index finger tip 360 (light that has repeatedly been scattered inside the index finger tip 360 and then exited from the opposite side of the index finger tip 360).
In terms of light transmittance, the spectrometer was able to detect a sufficient amount of light even for 1330 nm wavelength light (laser light).

 図25Aは、分光器で検出した相対的光透過率の時間変化を示す。ここでは実際の計測データから得られた光透過率を、図24B(b)で事前に計測した時間変化しない光透過率で割った値を相対的光透過率と定義する。そしてこの相対的光透過率を、図25Aの縦軸に使用した。図25Aの横軸は0.1秒刻みの時間経過を示す。従って時間経過値10進む毎に、1秒ずつ経過する事になる。1330nm波長光(レーザ光)で充分な光量が検出できたので、この波長光からの信号を第1の抽出情報1218(図20Aまたは図21A)として利用できる。 FIG. 25A shows the temporal change in relative light transmittance detected by a spectrometer. Here, the value obtained by dividing the light transmittance obtained from actual measurement data by the light transmittance that does not change over time measured in advance in FIG. 24B(b) is defined as relative light transmittance. This relative light transmittance was then used on the vertical axis of FIG. 25A. The horizontal axis in FIG. 25A shows the passage of time in 0.1 second increments. Therefore, every time the time elapsed value advances by 10, one second passes. Since a sufficient amount of light with a wavelength of 1330 nm (laser light) could be detected, a signal from this wavelength light can be used as the first extraction information 1218 (FIG. 20A or FIG. 21A).
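The relative light transmittance defined in this paragraph is a simple element-wise ratio; the numbers below are invented solely to show the shape of the computation.

```python
import numpy as np

def relative_transmittance(measured, static):
    """Relative light transmittance as defined in the text: the measured
    transmittance divided, wavelength by wavelength, by the static
    (time-invariant) transmittance obtained beforehand (FIG. 24B(b))."""
    return measured / static

measured = np.array([[0.40, 0.10],     # 3 time points x 2 wavelengths
                     [0.38, 0.09],
                     [0.41, 0.11]])
static = np.array([0.40, 0.10])        # pre-measured reference spectrum
print(relative_transmittance(measured, static))   # hovers around 1.0
```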

 図25A(a)(最上位の曲線)は、1330nm波長光(レーザ光)の相対的光透過率の時間変化を示す。脈動に応じて血管が膨張すると、水分の光吸収量が増加するため、相対的光透過率は減少する。図25A(a)から、1秒前後の周期的な光透過率の変化が見られる。この周期的な光透過率変化は、脈動に同期していると考えられる。 FIG. 25A(a) (top curve) shows the relative light transmittance of 1330 nm wavelength light (laser light) over time. When blood vessels expand in response to pulsation, the amount of light absorbed by water increases, so the relative light transmittance decreases. From FIG. 25A(a), periodic changes in light transmittance around 1 second can be seen. This periodic light transmittance change is considered to be synchronized with pulsation.

 グルコース(糖質)は図10Aで説明したように、0.9μm~1.0μmの波長範囲内に吸収ピークが現れる。また同様に蛋白質は、0.95μm~1.1μmの波長範囲内に吸収ピークが現れる。そして図25A(b)(太字曲線)は、ペプチド骨格が吸収すると予想される波長1026nmでの相対的光透過率の時間変化を示す。また図25A(c)(最下位の曲線)は、グルコースが吸収すると予想される波長928nmでの相対的光透過率の時間変化を示す。 As explained with FIG. 10A, glucose (a carbohydrate) has an absorption peak within the wavelength range of 0.9 μm to 1.0 μm. Similarly, proteins exhibit absorption peaks within the wavelength range of 0.95 μm to 1.1 μm. FIG. 25A(b) (bold curve) shows the temporal change in relative light transmittance at a wavelength of 1026 nm, which the peptide skeleton is expected to absorb. FIG. 25A(c) (lowest curve) shows the temporal change in relative light transmittance at a wavelength of 928 nm, which glucose is expected to absorb.

 説明のし易さから図21Aの第2の抽出情報1018の形態を取らず、計測部8から得られた生信号波形を示した。図25A(b)と(c)を比較すると明らかなように、1026nmの波長光から得られる信号振幅より928nmの波長光から得られる信号振幅の方が大きくなっている。 For ease of explanation, the raw signal waveforms obtained from the measurement section 8 are shown rather than the form of the second extraction information 1018 of FIG. 21A. As is clear from comparing FIGS. 25A(b) and 25A(c), the signal amplitude obtained from the 928 nm wavelength light is larger than that obtained from the 1026 nm wavelength light.

 図25Bは、他の波長光での計測データを追記した結果を現す。図24B(b)では、波長971nmの所で最も直流成分の光透過率が低い。そのため図25B(d)として、971nmの波長光から得られる信号も一緒に現した。光透過率の直流成分が最小だった971nmの波長光から得られる信号振幅(図25B(d))は、光透過率の直流成分値が相対的に大きな1026nmの波長光から得られる信号振幅(図25B(b))より若干大きい。以上の説明から、脈動に同期した相対的光透過率の変化振幅が測定波長で変化する事が分かる。 FIG. 25B shows the result of adding measurement data for light of other wavelengths. In FIG. 24B(b), the DC component of the light transmittance is lowest at a wavelength of 971 nm. Therefore, the signal obtained from the 971 nm wavelength light is also shown as FIG. 25B(d). The signal amplitude obtained from the 971 nm wavelength light, for which the DC component of the light transmittance was smallest (FIG. 25B(d)), is slightly larger than the signal amplitude obtained from the 1026 nm wavelength light, for which the DC component of the light transmittance is relatively large (FIG. 25B(b)). From the above, it can be seen that the amplitude of the change in relative light transmittance synchronized with the pulsation varies with the measurement wavelength.

 第8章 光源部と撮像素子が内蔵された計測部との組み合わせ実施形態
8.1節 本実施形態における撮像素子内構造とデータ取り込みタイミング
7.2節で、レーザパルスを用いた測長方法を説明した。遠方に到達する光量は、発光点からの距離の二乗に反比例する。従って大きく離れた遠方の測長を行う場合には、充分に大きな発光量が必要となる。するとこの発光点の近くでは、非常に強い強度のレーザ光照射を浴び、目への損傷リスクが高まる。目への損傷リスクを低減させるためには4.1節で説明したように、測長に近赤外レーザ光を使用するのが望ましい。
Chapter 8 Embodiments combining a light source section and a measurement section with a built-in image sensor
Section 8.1 Internal structure of the image sensor and data acquisition timing in this embodiment
Section 7.2 described a length measurement method using laser pulses. The amount of light reaching a distant point is inversely proportional to the square of the distance from the light emitting point. Therefore, measuring over a large distance requires a sufficiently large amount of emitted light. Near the light emitting point, however, this means exposure to very high intensity laser light, which increases the risk of eye damage. To reduce the risk of eye damage, it is desirable to use near-infrared laser light for length measurement, as explained in Section 4.1.

 図26Aは、(測長対応可能な)3D(dimensional)カラー撮像素子1280の構造を示す。撮像する画素1262毎の表面に各種光学フィルタ1272~1278が配置され、撮像する画素1262毎に到達可能な光に波長制限が掛かる。 FIG. 26A shows the structure of a 3D (dimensional) color image sensor 1280 (capable of length measurement). Various optical filters 1272 to 1278 are arranged on the surface of each pixel 1262 to be imaged, and wavelength restrictions are applied to light that can reach each pixel 1262 to be imaged.

 すなわち赤色光と近赤外光を検出する画素1262-1~2の直前には、赤色光と近赤外光のみを透過する光学フィルタ1272が設置される。そして緑色光と近赤外光を検出する画素1264-1~2の直前には、緑色光と近赤外光のみを透過する光学フィルタ1274が設置される。また青色光と近赤外光を検出する画素1266-1~2の直前には、青色光と近赤外光のみを透過する光学フィルタ1276が設置される。同様に白色光と近赤外光を検出する画素1268-1~2の直前には、白色光と近赤外光のみを透過する光学フィルタ1278が設置される。ここでレーザ光を用いた測長には、近赤外レーザ光が使用される。 That is, an optical filter 1272 that transmits only red light and near-infrared light is installed immediately in front of pixels 1262-1 and 1262-2 that detect red light and near-infrared light. Immediately before the pixels 1264-1 and 1264-2 that detect green light and near-infrared light, an optical filter 1274 that transmits only green light and near-infrared light is installed. Furthermore, an optical filter 1276 that transmits only blue light and near-infrared light is installed immediately before the pixels 1266-1 and 1266-2 that detect blue light and near-infrared light. Similarly, an optical filter 1278 that transmits only white light and near-infrared light is installed immediately before the pixels 1268-1 and 1268-2 that detect white light and near-infrared light. Here, near-infrared laser light is used for length measurement using laser light.

 図26Bは、この3Dカラー撮像素子1280内の電子回路(の等価回路)を示す。赤色光と近赤外光を検出する画素1262-1、2それぞれには、プリアンプ1150-1、2が個々に接続される。また緑色光と近赤外光を検出する画素1264-1、2のそれぞれには、プリアンプ1150-3、4が個々に接続される。そして露光期間内には、各プリアンプ1150-1~4の検出信号に応じて、コンデンサ1160-1~4に電荷が蓄積される。 FIG. 26B shows (the equivalent circuit of) the electronic circuit within this 3D color image sensor 1280. Preamplifiers 1150-1 and 2 are individually connected to pixels 1262-1 and 2 that detect red light and near-infrared light, respectively. Furthermore, preamplifiers 1150-3 and 4 are individually connected to pixels 1264-1 and 2 that detect green light and near-infrared light. During the exposure period, charges are accumulated in capacitors 1160-1 to 1160-4 according to detection signals from each preamplifier 1150-1 to 1150-4.

 露光時間と非露光時間に応じて連動スイッチ1300-1、2が別々に連動してON/OFFされる。そしてこの連動スイッチ1300-1、2のON/OFFタイミングは、露光タイミング設定回路1292-1、2が制御する。ここで露光時には連動スイッチ1300-1、2が別々に切断され、プリアンプ1150-1~4が個別に対応するコンデンサ1160-1~4に電荷が蓄えられる。また非露光時には、連動スイッチ1300-1、2が別々に接続され、3Dカラー撮像素子1280内各画素1262-1、2と1264-1、2からの検出信号は、アース線に向けて放出される。また同時に、コンデンサ1160-1~4に蓄えられた電荷が放電される。 The interlocking switches 1300-1 and 1300-2 are turned ON/OFF separately in accordance with the exposure and non-exposure times, and their ON/OFF timing is controlled by the exposure timing setting circuits 1292-1 and 1292-2. During exposure, the interlocking switches 1300-1 and 1300-2 are separately opened, and the charges from the preamplifiers 1150-1 to 1150-4 are stored in the individually corresponding capacitors 1160-1 to 1160-4. During non-exposure, the interlocking switches 1300-1 and 1300-2 are separately closed, so the detection signals from the pixels 1262-1, 1262-2, 1264-1, and 1264-2 in the 3D color image sensor 1280 are discharged to the ground line, and at the same time the charges stored in the capacitors 1160-1 to 1160-4 are discharged.

 各プリアンプ1150-1~4には個別に、包絡線検出回路1288-1~4が接続されている。そして露光終了のタイミングで、包絡線検出回路1288-1~4の出力電圧が、ページメモリ1296-1、2に一時的に保存される。そしてページメモリ1296-1、2に一時的に保存された出力電圧データは定期的に、読み出し回路1290を介して外部に移動する。 Envelope detection circuits 1288-1 to 4 are individually connected to each preamplifier 1150-1 to 1150-4. At the end of exposure, the output voltages of the envelope detection circuits 1288-1 to 1288-4 are temporarily stored in the page memories 1296-1 and 1296-2. The output voltage data temporarily stored in the page memories 1296-1 and 1296-2 is periodically transferred to the outside via the readout circuit 1290.

 図26Bの電子回路では、露光タイミング毎に検出信号がページメモリ1296-1、2に一時保存される。この露光タイミング毎に検出信号が保存可能なページメモリ1296-1、2を配置すると、非常に短い露光期間の検出信号を安定に検出できる効果が生まれる。 In the electronic circuit of FIG. 26B, the detection signal is temporarily stored in the page memories 1296-1 and 1296-2 at each exposure timing. By arranging page memories 1296-1 and 1296-2 capable of storing detection signals at each exposure timing, an effect is produced in which detection signals during a very short exposure period can be stably detected.

 図26Cは、図26B内露光タイミング設定回路1292の制御タイミングを示す。図26Bにおける露光期間(すなわち3Dカラー撮像素子内の画素1264からの検出信号をプリアンプ1150がコンデンサ1160に電荷蓄積した信号を包絡線検出回路1288に送信し続ける期間)をτと定める。すると時刻t1からt1+τまで時間経過(図26C(a))する間だけ、連動スイッチ1300の接続が遮断される(図26C(b))。 FIG. 26C shows the control timing of the exposure timing setting circuit 1292 in FIG. 26B. The exposure period in FIG. 26B (i.e., the period during which the preamplifier 1150 continues to send to the envelope detection circuit 1288 the signal obtained by accumulating, in the capacitor 1160, the charge detected at the pixel 1264 of the 3D color image sensor) is defined as τ. The connection of the interlocking switch 1300 is then interrupted (FIG. 26C(b)) only while time elapses from time t1 to t1+τ (FIG. 26C(a)).

 図26C(c)は、包絡線検出回路1288の出力がページメモリ1296に取り込まれるタイミングを示す。このように露光期間τの終了直後に、ページメモリ1296内に取り込まれる。 FIG. 26C(c) shows the timing at which the output of the envelope detection circuit 1288 is taken into the page memory 1296. In this way, immediately after the exposure period τ ends, the image is captured into the page memory 1296.

 図26C(d)は、露光期間前後での包絡線検出回路1288の出力信号波形を示す。露光期間前はコンデンサ1160-1~4に蓄積された電荷量は“0”なので、包絡線検出回路1288の出力信号は“0”の状態を保持している。露光期間に入ると、コンデンサ1160-1~4内に電荷が蓄積され始めるので、包絡線検出回路1288の出力信号が増加し始める。露光期間τの終了直後にコンデンサ1160-1~4内の電荷が放電されるが、包絡線検出回路1288は、露光期間τの終了直前の状態を保持する。 FIG. 26C(d) shows the output signal waveform of the envelope detection circuit 1288 before and after the exposure period. Before the exposure period, the amount of charge accumulated in the capacitors 1160-1 to 1160-4 is "0", so the output signal of the envelope detection circuit 1288 maintains the state of "0". When the exposure period begins, charge begins to accumulate in capacitors 1160-1 to 1160-4, so the output signal of envelope detection circuit 1288 begins to increase. Immediately after the exposure period τ ends, the charges in the capacitors 1160-1 to 1160-4 are discharged, but the envelope detection circuit 1288 maintains the state immediately before the exposure period τ ends.

 図26C(e)は、ページメモリ1296内に取り込まれるデータを示す。露光期間τ前のページメモリ1296内データは、初期値として“0”となっている。露光期間τの終了直後に、露光タイミング設定回路1292からページメモリ1296に対するデータ取り込み指示が出る。このデータ取り込み指示のタイミングで、包絡線検出回路1288の出力データがページメモリ1296内に取り込まれる。この取り込まれたデータは、適正なタイミングで読み出し回路1290に引き渡される。 FIG. 26C(e) shows the data taken into the page memory 1296. The data in the page memory 1296 before the exposure period τ has an initial value of "0". Immediately after the exposure period τ ends, the exposure timing setting circuit 1292 issues a data capture instruction to the page memory 1296. At the timing of this instruction, the output data of the envelope detection circuit 1288 is captured into the page memory 1296. The captured data is then delivered to the readout circuit 1290 at an appropriate timing.
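One way to reason about the FIG. 26B/26C timing is the toy model below: during the exposure period τ the capacitor integrates the pixel current while the envelope detector tracks the peak; right after τ the envelope output is latched into the page memory and the capacitor is discharged. The current value and τ are made-up numbers.

```python
def exposure_cycle(pixel_current, tau, dt=1e-9):
    """Toy timing model of one exposure period (FIGS. 26B and 26C)."""
    charge, envelope, t = 0.0, 0.0, 0.0
    while t < tau:                          # switch open: integration
        charge += pixel_current(t) * dt
        envelope = max(envelope, charge)    # envelope detector holds the peak
        t += dt
    page_memory = envelope                  # latched right after tau ends
    charge = 0.0                            # capacitor discharged
    return page_memory

# 1 uA pixel current integrated over a 100 ns exposure -> about 1e-13 C
print(exposure_cycle(lambda t: 1e-6, tau=100e-9))
```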

 8.2節 撮像素子を内蔵した計測部と光源部との組み合わせ実施形態
図27Aは、前の8.1節で説明した3Dカラー撮像素子1280を内蔵した計測部8と光源部2を組み合わせた光学系の実施形態例を示す。ここで使用される光源部2内の詳細構造例として、図18Aと図18Bの組み合わせを使用しても良い。図18Bの外部との同期合わせ用信号ライン730を用いると、撮像素子1280の露光期間τに関する高精度な時間連携が取れる。しかしそれに限らず、図9Iなどを含めた任意の光源部2構造を利用しても良い。
Section 8.2 Embodiment combining a measurement section with a built-in image sensor and a light source section
FIG. 27A shows an example embodiment of an optical system that combines the light source section 2 with the measurement section 8 incorporating the 3D color image sensor 1280 described in the previous Section 8.1. As an example of the detailed internal structure of the light source section 2 used here, the combination of FIGS. 18A and 18B may be used. Using the external synchronization signal line 730 of FIG. 18B enables highly accurate time coordination with respect to the exposure period τ of the image sensor 1280. However, the structure is not limited to this, and any light source section 2 structure, including that of FIG. 9I, may be used.

 図17Bを用いて5.4節内で説明したように、光学特性変更素子210内で角度分割数の増加に伴ってスペックルノイズ量が大幅に低減する。しかしスペックルノイズ量の完全除去は難しい。そのため本実施形態では、光源部2からの放射光進行方向に対して微動可動な構造を付加しても良い。図13を用いて5.1節内で説明したように、計測対象物22への照射角度の違いでスペックルノイズパターンが変化する。従って計測対象物22への照射角度を時間と共に変化させ、計測結果を時間平均(もしくは時間積分)すれば、さらにスペックルノイズ量は低減する。 As explained in Section 5.4 using FIG. 17B, the amount of speckle noise is significantly reduced as the number of angular divisions increases within the optical characteristic changing element 210. However, it is difficult to completely remove the amount of speckle noise. Therefore, in this embodiment, a structure that can be slightly movable with respect to the traveling direction of the emitted light from the light source section 2 may be added. As explained in Section 5.1 using FIG. 13, the speckle noise pattern changes depending on the irradiation angle to the measurement target object 22. Therefore, by changing the irradiation angle onto the measurement object 22 over time and time-averaging (or time-integrating) the measurement results, the amount of speckle noise can be further reduced.

 図27Aでは光源部2からの放射光の光路途中に光反射板520を配置し、圧電素子526、528を利用して光反射板520の傾き角を微動させる。これに拠り計測対象物22への照射角度を時間と共に変化させる。 In FIG. 27A, a light reflecting plate 520 is placed in the optical path of the emitted light from the light source section 2, and the inclination angle of the light reflecting plate 520 is slightly moved using piezoelectric elements 526 and 528. Based on this, the irradiation angle to the measurement target object 22 is changed over time.

 計測部8内にハーフミラー536を配置し、計測対象物22(図示して無い)からの反射光の一部が3Dカラー撮像素子1280に到達する。結像レンズ移動機構540が結像レンズ144を光軸方向に沿って移動させる。この結像レンズ移動機構540の働きで、任意の位置に存在する計測対象物22に対する結像位置に3Dカラー撮像素子1280を設置できる。 A half mirror 536 is disposed within the measurement unit 8, and a portion of the reflected light from the measurement object 22 (not shown) reaches the 3D color image sensor 1280. An imaging lens moving mechanism 540 moves the imaging lens 144 along the optical axis direction. Due to the function of this imaging lens moving mechanism 540, the 3D color image sensor 1280 can be placed at an imaging position for the measurement target 22 located at an arbitrary position.

 3Dカラー撮像素子1280と同じ結像位置に2次元配列光シャッタ530が設置される。この2次元配列光シャッタ530は、液晶シャッタなど任意の光シャッタで構成されても良い。この2次元配列光シャッタ530を利用すると、3Dカラー撮像素子1280を用いた測長に近赤外光を使用した場合の計測精度が向上する。 A two-dimensional array optical shutter 530 is installed at the same imaging position as the 3D color image sensor 1280. This two-dimensionally arrayed optical shutter 530 may be configured with any optical shutter such as a liquid crystal shutter. When this two-dimensional array optical shutter 530 is used, measurement accuracy is improved when near-infrared light is used for length measurement using the 3D color image sensor 1280.

 図10Aを用いて4.1節で説明したように、水は近赤外光を吸収する。特に波長が1.3μmを超えた光に対する水の光吸収量は大きい。そのため雨や霧の環境下では、近赤外光を用いた測長精度は低下する。特に計測対象物22の表面が濡れた場合、そこからの近赤外光の反射光量は低下する。 As explained in Section 4.1 using FIG. 10A, water absorbs near-infrared light. In particular, water absorbs a large amount of light having a wavelength exceeding 1.3 μm. Therefore, in rainy or foggy environments, the accuracy of length measurement using near-infrared light decreases. In particular, when the surface of the measurement object 22 becomes wet, the amount of near-infrared light reflected therefrom decreases.

 2次元配列光シャッタ530の利用方法を説明する。最初に光源部2から放射される近赤外光の光量を“0”にした時の、計測対象物22から得られるカラー画像を3Dカラー撮像素子1280を用いて計測する。そこで得られたカラー画像の輪郭抽出を行い、可視光で観察時の計測対象物22の形状を把握する。次に光源部2から放射される近赤外光を連続発光させた時に、3Dカラー撮像素子1280から得られる画像を収集する。光源部2の発光有無での画像比較から、計測対象物22表面での水分吸収分布が分かる。その計測対象物22表面での水分吸収分布を補正するパターンを2次元配列光シャッタ530に与えると、計測対象物22表面での水分吸収分布の影響を低減できる。 A method of using the two-dimensional array optical shutter 530 will be explained. First, when the amount of near-infrared light emitted from the light source section 2 is set to "0", a color image obtained from the measurement object 22 is measured using the 3D color image sensor 1280. The outline of the obtained color image is extracted, and the shape of the measurement object 22 when observed with visible light is grasped. Next, when the near-infrared light emitted from the light source section 2 is continuously emitted, an image obtained from the 3D color image sensor 1280 is collected. By comparing the images with and without light emission from the light source section 2, the moisture absorption distribution on the surface of the measurement object 22 can be determined. When a pattern that corrects the moisture absorption distribution on the surface of the measurement object 22 is applied to the two-dimensional array optical shutter 530, the influence of the moisture absorption distribution on the surface of the measurement object 22 can be reduced.
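A hedged sketch of this two-image procedure follows: subtracting the source-off image isolates the source-wavelength reflection, and pixels returning less light (e.g. wet areas) are assigned a proportionally higher shutter transmittance. The normalization rule is an assumption; the text does not fix a specific formula.

```python
import numpy as np

def shutter_correction_pattern(img_source_on, img_source_off, eps=1e-6):
    """Correction pattern sketch for the 2-D array optical shutter 530.

    img_source_off: image with the near-infrared source off (ambient only)
    img_source_on:  image with the source continuously emitting
    Returns a per-pixel transmittance in (0, 1] that flattens the detected
    source-wavelength distribution across the target surface.
    """
    reflect = np.clip(img_source_on - img_source_off, eps, None)
    return reflect.min() / reflect          # dimmest pixel -> fully open

on = np.array([[0.9, 0.5],                 # invented example images
               [0.3, 0.8]])
off = np.full((2, 2), 0.1)
print(shutter_correction_pattern(on, off))
```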

 図27Aでは2次元配列光シャッタ530が計測部8内に配置され、光源部2と計測部8が近接位置に配置されている。しかしそれに限らず、光源部2と計測部8が離れた位置に配置されても良い。例えば後述する図31Bや図30B、図30Cでは、光源部2と計測部8が離れた位置に配置されている。この場合には、圧電素子526、528を含む光反射板520や2次元配列光シャッタ530が、光源部2内に内蔵されても良い。 In FIG. 27A, a two-dimensionally arrayed optical shutter 530 is arranged within the measurement section 8, and the light source section 2 and the measurement section 8 are arranged in close proximity. However, the present invention is not limited thereto, and the light source section 2 and the measurement section 8 may be arranged at separate positions. For example, in FIGS. 31B, 30B, and 30C, which will be described later, the light source section 2 and the measurement section 8 are arranged at separate positions. In this case, a light reflecting plate 520 including piezoelectric elements 526 and 528 and a two-dimensionally arranged optical shutter 530 may be built into the light source section 2.

 図27Bは、3Dカラー撮像素子1280の制御回路構成を示す。この制御回路構成を用いて計測対象物22表面での水分吸収分布の影響を低減だけでなく、高精度な測長計測を実現する。発光量制御部30内の発光タイミング制御部1302が、上記の光源部2内の近赤外光の発光量制御を行う。信号受信部40内の非発光時のカラー画像(映像)保存部1316は、計測部8内3Dカラー撮像素子1280が光源部2の非発光時に収集するカラー画像(映像)を保存する。 FIG. 27B shows the control circuit configuration of the 3D color image sensor 1280. Using this control circuit configuration, it is possible to not only reduce the influence of the moisture absorption distribution on the surface of the measurement object 22, but also realize highly accurate length measurement. A light emission timing control section 1302 within the light emission amount control section 30 controls the light emission amount of near-infrared light within the light source section 2 described above. A non-emission color image (video) storage section 1316 in the signal receiving section 40 stores a color image (video) collected by the 3D color image sensor 1280 in the measurement section 8 when the light source section 2 is not emitting light.

 次に発光量制御部30内の発光タイミング制御部1302は、上記の光源部2内の近赤外光の連続発光量制御を行う。同時に信号受信部40内の光源波長光を含むカラー画像(映像)保存部1318が、この時に3Dカラー撮像素子1280が収集するカラー画像(映像)を保存する。 Next, the light emission timing control section 1302 in the light emission amount control section 30 performs continuous light emission control of the near-infrared light in the light source section 2 described above. At the same time, a color image (video) storage section 1318 that includes the light source wavelength light in the signal receiving section 40 stores the color image (video) collected by the 3D color image sensor 1280 at this time.

 信号処理部42内の差分演算処理部1314で、光源部2の発光有無でのカラー画像(映像)の違いを抽出する。信号処理部内の光源波長光のみの画像(映像)保存部1312は、その差分結果を保存する。そしてその保存内容が、発光量制御部30内の2次元配列光シャッタのパターン設定部1304に転送される。そしてこの2次元配列光シャッタのパターン設定部1304が、2次元配列光シャッタ530での透過光量分布特性を制御する。 The difference calculation processing unit 1314 in the signal processing unit 42 extracts the difference in color images (videos) depending on whether or not the light source unit 2 emits light. An image (video) storage unit 1312 of only the light source wavelength light in the signal processing unit stores the difference result. The saved contents are then transferred to the pattern setting unit 1304 of the two-dimensionally arrayed optical shutter in the light emission amount control unit 30. The pattern setting unit 1304 of the two-dimensionally arrayed optical shutter controls the transmitted light amount distribution characteristics of the two-dimensionally arrayed optical shutter 530.

 図27Bの制御回路はそれだけでなく、図28A以降で後述する各種のタイミング制御を行う。信号受信部40内の露光タイミング設定部1310が、後述する各種のタイミング制御を行う。それと同時並行して発光量制御部30内の発光タイミング制御部1302が、システム内制御部50からの指令に従って光源部2の発光タイミングを制御する。 The control circuit in FIG. 27B not only performs this but also performs various timing controls that will be described later from FIG. 28A onwards. An exposure timing setting section 1310 in the signal receiving section 40 performs various timing controls to be described later. At the same time, a light emission timing control section 1302 in the light emission amount control section 30 controls the light emission timing of the light source section 2 in accordance with a command from the internal system control section 50.

 図27Bが示す光学装置10内に光源部2と計測部8が共存する構造に限らず本実施形態例では、光源部2と計測部8が物理的に離れた位置に配置され、インターネット経由で両者が接続されても良い。この場合には、6.4節で説明した方法で(ホスト50を介在して)光源部2と計測部8間でインターネット通信を行っても良い。 The present embodiment is not limited to the structure in which the light source section 2 and the measurement section 8 coexist within the optical device 10 shown in FIG. 27B; the light source section 2 and the measurement section 8 may be placed at physically separate positions and connected via the Internet. In this case, Internet communication may be performed between the light source section 2 and the measurement section 8 (via the host 50) using the method described in Section 6.4.

 3Dカラー撮像素子1280と光源部2を組み合わせた計測内容は、アプリケーション分野(各種光応用分野)適合部60(図1と図27B内に重複記載)の中の3Dカラー画像(映像)を利用したアプリケーションソフト58で利用される。この3Dカラー画像(映像)を利用したアプリケーションソフト58の利用形態の一例を、次の第9章で後述する。しかし第9章で後述するサービス提供内容に限らず、任意のサービス提供内容に利用しても良い。 The measurement results obtained by combining the 3D color image sensor 1280 with the light source section 2 are used by the application software 58 that utilizes 3D color images (video) within the application field (various optical application fields) adaptation section 60 (shown in both FIG. 1 and FIG. 27B). An example of how this application software 58 using 3D color images (video) is employed will be described in the following Chapter 9. However, the use is not limited to the service contents described in Chapter 9 and may be applied to any service contents.

 また本第8章では、主に3Dカラー撮像素子1280を利用した実施形態例を説明する。しかしそれに限らず、他の実施形態例として(例えば7.3節での説明内容を含む)任意の電荷蓄積形信号受信部を利用しても良い。 Also, in this Chapter 8, embodiments that mainly utilize the 3D color image sensor 1280 will be described. However, the present invention is not limited thereto, and any charge storage type signal receiving section may be used as other embodiments (including those described in Section 7.3, for example).

 8.3節 距離計測手順
図28Aは、3次元カラー画像(映像)計測を高精度で行う方法を示す。なおここでは説明の便宜上、“3次元カラー画像(映像)計測”の表現を利用しているに過ぎない。従って本8.3節と次の8.4節で説明する実施形態例に対する他の実施形態例として、(例えば7.3節での説明内容を含む)任意の電荷蓄積形信号受信部と光源部2を組み合わせても良い。また本8.3節で説明する手順は、前の8.2節で説明した『発光源2の発光波長光照射時の計測対象物22から得られる検出光特性分布の補正処理』と併用(前記補正処理の前後に実施)しても良い。
Section 8.3 Distance measurement procedure
FIG. 28A shows a method for performing three-dimensional color image (video) measurement with high accuracy. Note that the expression "three-dimensional color image (video) measurement" is used here merely for convenience of explanation. Therefore, as another embodiment of the examples described in this Section 8.3 and the following Section 8.4, any charge accumulation type signal receiving section (including, for example, what is described in Section 7.3) may be combined with the light source section 2. The procedure described in this Section 8.3 may also be used together with (before or after) the "correction processing of the detected light characteristic distribution obtained from the measurement target object 22 under irradiation with light at the emission wavelength of the light emitting source 2" described in the previous Section 8.2.

 図28Aでは、光源部2での複数の発光パターン750(計測対象物22への複数の光照射パターン)を使用する。この発光パターン750の違いは、制御信号790内の発光パターン識別情報750(図19C)を使って光源部2に事前通知しても良い。 In FIG. 28A, a plurality of light emission patterns 750 (a plurality of light irradiation patterns to the measurement target object 22) in the light source section 2 are used. This difference in the light emission pattern 750 may be notified to the light source section 2 in advance using the light emission pattern identification information 750 (FIG. 19C) in the control signal 790.

In this embodiment, length measurement is possible over a very wide range, such as a 100 m range or a 1 km range. It is difficult to measure over such a wide range with high precision all at once. For example, achieving high-precision measurement with an error of 1 mm or less over an entire wide range such as 1 km in a single measurement takes time. Moreover, high measurement accuracy is required only in the vicinity of the location where the measurement object 22 exists; where no measurement object 22 exists, high-precision measurement has little meaning.

Therefore, one method is to first grasp the approximate distance to the measurement object 22 over a wide range, and then perform high-precision measurement in the vicinity of the measurement object 22. Dividing the measurement into multiple passes and changing the measurement range and measurement accuracy in stages in this way improves the overall measurement efficiency. The optimum light emission pattern 750 may then be switched for each measurement range and required accuracy. Combining measurements that use a plurality of light emission patterns 750 in this way makes high-accuracy length measurement possible even for a measurement object 22 at a distant position.
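As one illustration of this coarse-to-fine strategy, the following Python sketch computes the exposure gate delay for a fine measurement pass from a rough first-pass distance. The function names and numeric values are illustrative assumptions and are not part of the original disclosure.

# Minimal sketch of the coarse-to-fine ranging strategy described above.
C = 3.0e8  # approximate speed of light in air [m/s]

def gate_delay_for_distance(distance_m: float) -> float:
    """Exposure delay that places the gate at a given distance, from the
    round-trip relation delta_x = c * delta_t / 2 used in Section 8.3."""
    return 2.0 * distance_m / C

# Pass 1 (FIG. 28B pattern): whole 1 km range, coarse result, e.g. ~312 m.
rough_distance = 312.0
# Passes 2/3 (FIGS. 28C/28D patterns): gate the exposure around the object.
delay = gate_delay_for_distance(rough_distance)
print(f"exposure delay for the fine pass: {delay * 1e6:.3f} us")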

Three different light emission patterns 750 (light irradiation patterns onto the measurement object 22) may be used between the start (ST10) and the end (ST17) of the collection of three-dimensional color images (video). The two-dimensional color image (video) collection in step 11 uses the light emission pattern described later with reference to FIG. 28B(b). Using the image (video) obtained with that light emission pattern, step 12 images the reflected light pattern of the light source wavelength light over the entire measurement distance range.

The two-dimensional color image (video) collection in the next step 13 uses the light emission pattern described later with reference to FIG. 28C(b). Using the image (video) obtained with that light emission pattern, step 14 images the reflected light pattern of the light source wavelength light for each measurement distance range.

The subsequent two-dimensional color image (video) collection in step 15 uses the light emission pattern described later with reference to FIG. 28D(b). Using the image (video) obtained with that light emission pattern, step 16 performs detailed distance measurement for each measurement distance range.

FIG. 28B shows a method of imaging the reflected light pattern of the light source wavelength light over the entire measurement distance range. The intensity of light radiated from a single emission point is inversely proportional to the square of the distance from that point. Furthermore, when the distance to the measurement object 22 is measured using light reflected from it, the optical path from the light source section 2 to the measurement section 8 is twice the distance to the measurement object 22. The maximum emission amount of the light source section 2 is therefore determined by the measurement range (the maximum measurable distance from the light source section 2), and the farther away the measurement object 22 is, the more the emission amount of the light source section 2 must be increased.
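Under the simplified model stated above (inverse-square falloff over a round-trip path twice the object distance), the following sketch illustrates how quickly the required emission amount grows with distance. All values are illustrative, not patent values.

# Worked illustration of the two effects stated above: inverse-square
# spreading, applied over the doubled (round-trip) optical path.
def required_emission_scale(distance_m: float, reference_m: float = 1.0) -> float:
    """Relative emission needed to keep the detected signal constant
    as the one-way distance grows; the round trip doubles the path."""
    round_trip = 2.0 * distance_m
    reference_trip = 2.0 * reference_m
    return (round_trip / reference_trip) ** 2  # inverse-square law

for d in (1.0, 10.0, 100.0, 1000.0):
    print(f"{d:7.1f} m -> emission x {required_emission_scale(d):.0e}")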

FIG. 28B(b) shows a light emission pattern suitable for imaging the total reflected light pattern over the entire measurement distance range. Light is emitted continuously during the emission period from time t1 to t2, and the emission amount is decreased along a quadratic curve as time passes. This makes it possible to collect uniform light reflection characteristics over the entire measurement distance range.

FIG. 28B(c) shows the exposure timing inside the measurement section 8 (the 3D color image sensor 1280). Immediately after the emission stop time t2 of the light source section 2, exposure is performed for a period τ (a rectangular window). During this exposure period τ, light that left the light source section 2 a time Δt1 or Δt2 earlier returns together. The measured distance Δx satisfies the relation Δx = cΔt/2 (c: speed of light in air). The measurement distance range in this case therefore extends to a distance of c(t2 − t1)/2. The image acquired at the exposure timing of FIG. 28B(c) reveals the total reflected light pattern over the entire measurement distance range. By comparing the image obtained here (the total reflected light pattern over the entire measurement distance range) with the images obtained in FIG. 28C, described later, the approximate distance of each small region within the total reflected light pattern can be grasped. Note that all the images acquired in the explanations of FIG. 28B through FIG. 28E are images stored in the light-source-wavelength-only image (video) storage section 1312 inside the signal processing section 42 shown in FIG. 27B.
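The gating relation Δx = cΔt/2 and the full-range limit c(t2 − t1)/2 can be checked numerically with the short sketch below. The emission-window duration is an assumed example value.

# Sketch of the time-gate-to-distance relation delta_x = c * delta_t / 2.
C = 3.0e8  # speed of light in air [m/s]

def distance_for_delay(delta_t_s: float) -> float:
    return C * delta_t_s / 2.0

t1, t2 = 0.0, 6.67e-6  # assumed 6.67 us emission window
print(f"full measurement range: {distance_for_delay(t2 - t1):.0f} m")  # ~1 km
print(f"light delayed 2 us corresponds to {distance_for_delay(2e-6):.0f} m")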

FIG. 28C shows a method of imaging the reflected light pattern of the light source wavelength light for each measurement distance range. As shown in FIG. 28C(b), the light source section 2 emits intermittent pulsed light at a constant period, at times t1, t2, and t3. An interference prevention period 798 of a predetermined length is provided within this constant period, and the light source section 2 is controlled so as not to emit light during this interference prevention period 798. Suppressing emission within the interference prevention period 798 prevents erroneous distance measurement caused by pulsed emission at the wrong timing, and improves measurement accuracy.

The emission amount is also changed at each emission timing t1, t2, and t3. The phenomenon in which the detected signal amount from a distant measurement object 22 decreases was explained above. Changing the emission amount for each measurement distance range in this way prevents the measurement accuracy from varying between measurement distance ranges and enables stable distance measurement over the entire measurement distance range.

FIG. 28C(b) shows rectangular pulsed emission. This is not a limitation; during the pulse emission period, the light may be emitted with a "modulation signal unique to the light source section 2 (an ID information signal of the light source section 2)". In a system in which a plurality of light source sections 2 are placed at the same location, there is a risk that emission from a different light source section 2 is erroneously detected inside the measurement section 8 (its 3D color image sensor 1280). A separate photodetection element 250 may be placed inside the measurement section 8 to simultaneously detect the unique modulation signal (ID information signal of the light source section 2) that differs for each light source section 2. Using this separate photodetection element 250 to detect the modulation signal unique to the light source section 2 makes it possible to detect unwanted emission pulses from another light source section 2 mixed into the exposure period τ of the target measurement section 8. This greatly reduces the risk of erroneous detection in a system in which a plurality of light source sections 2 are placed at the same location.
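A minimal sketch of this ID-based cross-talk check follows. The code format and the acceptance test are assumptions; the text specifies only that each light source section 2 carries a unique modulation signal monitored by a separate photodetection element 250.

# Minimal sketch of rejecting exposure windows contaminated by pulses
# from a foreign light source section 2, via an assumed on/off ID code.
OWN_ID_CODE = [1, 0, 1, 1, 0, 0, 1, 0]  # assumed chip sequence for this unit

def exposure_window_is_clean(monitor_samples: list[int]) -> bool:
    """Accept an exposure window only if the monitored modulation matches
    this unit's own ID code; otherwise a foreign pulse contaminated it."""
    return monitor_samples == OWN_ID_CODE

print(exposure_window_is_clean([1, 0, 1, 1, 0, 0, 1, 0]))  # True: keep frame
print(exposure_window_is_clean([1, 1, 0, 1, 0, 0, 1, 0]))  # False: discard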

FIG. 28C(c) shows the exposure timing inside the measurement section 8 (the 3D color image sensor 1280). The delay times Δt1, Δt2, and Δt3 of the exposure timing in the measurement section 8 (the 3D color image sensor 1280) relative to the emission timing of the light source section 2 differ for the emission times t1, t2, and t3, respectively. Changing this delay time switches the measurement distance range. In FIG. 28C, Δt1 < Δt2 < Δt3. However, this is not a limitation; each delay time may be set arbitrarily as long as the delay times Δt1, Δt2, and Δt3 differ from one another by at least a certain amount. Accordingly, Δt1 > Δt2 > Δt3 may also be set.
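The following sketch shows how three exposure delays select three distance gates through Δx = cΔt/2. The delay and τ values are illustrative assumptions.

# Sketch of how three exposure delays select three distance gates.
C = 3.0e8       # speed of light in air [m/s]
TAU = 100e-9    # assumed exposure period tau of 100 ns

for name, delay in (("dt1", 0.2e-6), ("dt2", 0.9e-6), ("dt3", 1.6e-6)):
    near = C * delay / 2.0
    far = C * (delay + TAU) / 2.0
    print(f"{name}: gate covers {near:.0f} m to {far:.0f} m")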

Each image acquired with the same exposure period τ at these respective timings shows one portion of the image acquired in FIG. 28B. Therefore, by comparing each image acquired here with the image acquired in FIG. 28B, the approximate distance of each finely divided region within the total reflected light pattern over the entire measurement distance range can be grasped.

FIG. 28D shows a method of detailed distance measurement within each measurement distance range. The setting of the interference prevention period 798 in the emission state of the light source section 2 (FIG. 28D(b)) and the changing of the delay times Δt1 and Δt2 of the exposure period τ for the emission times t1 and t2 of the light source section 2 (FIG. 28D(c)) are the same as in FIG. 28C.

As FIG. 28D(b) shows, the difference from FIG. 28C(b) is that the light source section 2 is made to emit intermittently modulated light. Within the intermittent light amount modulation periods of FIG. 28D(b), "emission with a uniform period" is desirable, and a "duty ratio of 50%" is further desirable for this light amount modulation. As long as the emission state satisfies the conditions of "uniform period" and "50% duty ratio", any waveform may be used; for example, a sine wave, a rectangular wave, or a triangular wave may be set freely.
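A minimal sketch of two waveforms satisfying the two stated conditions (uniform period, 50% duty ratio) is given below, assuming the 4τ modulation period introduced later for the phase detection method.

# Sketch of modulation waveforms with a uniform period and 50% duty ratio.
import math

PERIOD = 4e-9  # assumed modulation period of 4 ns (= 4 * tau, tau = 1 ns)

def square_wave(t: float) -> float:
    """Rectangular wave: uniform period, 50% duty ratio."""
    return 1.0 if (t % PERIOD) < PERIOD / 2.0 else 0.0

def sine_wave(t: float) -> float:
    """Sine wave: inherently uniform period, 50% duty about its mean."""
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * t / PERIOD))

samples = [i * PERIOD / 8.0 for i in range(8)]
print([square_wave(t) for t in samples])
print([round(sine_wave(t), 2) for t in samples])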

FIG. 28E is an explanatory diagram of a distance measurement method using the phase detection method. The structure inside the 3D color image sensor 1280 was explained using FIG. 26A. The distance measurement here uses the set of four pixels 1262-1, 1264-1, 1266-1, and 1268-1, and the exposure timing of each pixel is shifted to match the modulated emission state of the light source section 2 described above.

FIGS. 28E(b) to (e) show the exposure timing of each of the four pixels 1262, 1264, 1266, and 1268. The exposure periods τ of the four pixels 1262, 1264, 1266, and 1268 are all made equal, and the exposure timings are shifted from one another by the exposure period τ. As shown in FIG. 28E(f), the modulated emission period of the light source section 2 is set to 4τ.

When the detection light of FIG. 28E(g) is shifted by a phase φ relative to the reference modulated emission state of FIG. 28E(f), the signal amounts input from the four pixels 1262, 1264, 1266, and 1268 to the page memory 1296 (FIG. 26B) correspond to the area values A1, A2, A3, and A4. Therefore, from the relation

φ = arctan{(A2 − A4)/(A1 − A3)}
the delay phase amount φ of the detection light can be calculated. Using this method, the amount of delay of the detection light returning to the measurement section 8 (the 3D color image sensor 1280 inside it) can be determined with high precision.
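The following sketch implements the four-bucket demodulation in the standard form given above, together with the phase-to-distance conversion implied by the 4τ modulation period. The published text does not reproduce the original equation image, so the exact bucket indexing is an assumption, and the bucket values below are synthetic.

# Sketch of four-bucket phase demodulation and phase-to-distance conversion.
import math

C = 3.0e8    # approximate speed of light in air [m/s]
TAU = 1e-9   # exposure period tau = 1 ns (the example used in the text)

def phase_from_buckets(a1: float, a2: float, a3: float, a4: float) -> float:
    """Delay phase phi of the detection light from the areas A1..A4."""
    return math.atan2(a2 - a4, a1 - a3) % (2.0 * math.pi)

def distance_from_phase(phi: float) -> float:
    """One modulation period 4*tau covers a round trip of c*4*tau, giving
    a one-way unambiguous range of c*4*tau/2 (0.6 m for tau = 1 ns)."""
    return (phi / (2.0 * math.pi)) * C * 4.0 * TAU / 2.0

phi = phase_from_buckets(1.825, 1.565, 0.175, 0.435)  # synthetic A1..A4
print(f"phi = {phi:.3f} rad -> {distance_from_phase(phi) * 100:.1f} cm")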

For example, using the technique of Patent Document 3, imaging with an exposure period τ as short as 1 ns is possible even today. The period of the reference modulated emission in FIG. 28E(f) is then 4τ = 4 ns. Taking the speed of light in air as approximately 3×10^8 m/s and considering that the round-trip light to and from the measurement object 22 is detected, this period corresponds to 3×10^8 × 4×10^−9 ÷ 2 = 0.6 m. Accordingly, with an exposure period τ = 1 ns, changes in front-to-rear position can be measured within a measurement distance range of 60 cm. The length measurement precision in this case is determined by the accuracy of the areas A1 to A4 in FIG. 28E(g). For this reason, the optical noise reduction techniques described in Chapters 2, 3, and 5 become extremely important.

Section 8.4 Method for Reducing the Influence of Laser Speckle Noise on Measurement
Speckle noise is greatly reduced by the method described earlier. However, as FIG. 17B shows, it is difficult to remove speckle noise completely even if the number of angular divisions of the optical characteristic changing element 210 is greatly increased. Chapter 7 described a high-precision measurement method that combines optical noise reduction techniques with circuit techniques.

FIG. 28F shows a method of combining circuit techniques to further reduce the influence of speckle noise. The premise here is that optical noise reduction measures have been carried out in the light source section 2, such as those described in Chapter 7, in Section 8.2 using FIG. 27A, and in Section 3.3 using FIG. 9I. However, this is not a limitation; in this embodiment, only the embodiment described below may be carried out, without performing optical noise reduction.

FIG. 28F(b) shows the emission state of the light source section 2. The pulsed emission explained in FIG. 28C(b) starts at time t2. However, as explained with FIG. 28C(b), emission modulation may be performed during this pulse emission period. From the subsequent time t4, the modulated emission explained in FIG. 28D(b) is performed.

Inside the detection section 8 (its 3D color image sensor 1280), the pixels have exposure periods τ at respectively different timings. The signals detected in the exposure periods τ at these different timings are stored separately, as distinct signals, in the page memory 1296 (FIG. 26B). In FIGS. 28F(c) to (e), for simplicity of explanation, only the exposure period τ corresponding to a single pixel 1262 is shown. In reality, however, as shown in FIGS. 28G and 28H, the exposure start time shifts by the exposure period τ for each of the pixels 1262 to 1268.

FIG. 28F(c) shows the timing of the first exposure period τ inside the detection section 8 (its 3D color image sensor 1280). This first exposure period τ starts at time t1, before the light source section 2 emits light. In this first exposure period τ, a color image (video) of visible light only, with the light source section 2 in the non-emitting state, is captured. The image (video) obtained here matches the content of the image (video) stored in the non-emission color image (video) storage section 1316 of FIG. 27B.

FIG. 28F(d) shows the timing of the second exposure period τ inside the detection section 8 (its 3D color image sensor 1280). This second exposure period τ starts at a time t3 delayed by Δt0 from the time t2 at which the light source section 2 starts pulsed emission. In this second exposure period τ, an image (video) is obtained in which the reflected image (video) at the emission wavelength of the light source section 2, irradiated by pulsed emission, is superimposed on the color image (video) obtained from visible light. This image (video) matches the content of the image (video) stored in the color image (video) storage section 1318 of FIG. 27B, which includes the light source wavelength light.

The second exposure period τ inside the detection section 8 (its 3D color image sensor 1280) begins Δt0 after the light source section 2 starts pulsed emission. This time difference Δt0 means that the reflected image (video) of the emission wavelength light of the light source section 2 from a position Δx0 = cΔt0/2 away is superimposed. The actual pulse emission period of the light source section 2 is much longer than the second exposure period τ inside the detection section 8 (its 3D color image sensor 1280). Therefore, the superimposed reflected images (video) of the emission wavelength light of the light source section 2 also come from positions Δx0 = cΔt0/2 − δ, shorter by δ than Δx0 = cΔt0/2.

What is important here is that "speckle noise is mixed into the reflected image (video) of the emission wavelength light of the light source section 2". Moreover, the speckle noise mixing ratio differs among the four pixels 1262, 1264, 1266, and 1268 that form one set. In this second exposure period τ, an image (video) containing the speckle noise of each of the pixels 1262 to 1268 is acquired.

The areas A1 to A4 shown in FIG. 28E(g) represent an ideal state without speckle noise. In reality, however, the influence rate of speckle noise (the rate of light amount change caused by speckle noise) differs for each of the pixels 1262 to 1268. When the influence of speckle noise is taken into account, the levels A1 to A4 of FIG. 28E(g) each vary greatly and individually under its influence. Within a very short time range, however, the speckle noise influence rate of each of the pixels 1262 to 1268 (the rate of light amount variation caused by speckle noise) is constant regardless of the emission amount of the light source section 2. That is, the speckle noise influence rate (rate of light amount variation caused by speckle noise) of each of the pixels 1262 to 1268 can be considered almost equal between the case where the emission pattern of the light source section 2 is the modulated emission state of FIG. 28E(f) and the case where the emission amount of the light source section 2 takes a constant value over time.

Viewed within a short exposure period τ inside a rectangular pulse emission period, the emission can be regarded as having a constant light amount during that exposure period τ. In this embodiment, therefore, the "speckle noise influence rate of each of the pixels 1262 to 1268 (rate of light amount variation caused by speckle noise)" acquired in the second exposure period τ inside the detection section 8 (its 3D color image sensor 1280) is regarded as the first extracted information (FIG. 20A), and this first extracted information is used to extract the second information, which corresponds to the "phase information obtained from the measurement object 22".

FIG. 28F(e) shows the timing of the third exposure period τ inside the detection section 8 (its 3D color image sensor 1280). This third exposure period τ starts at time t5, delayed by Δt0 from time t4 at which the light source section 2 starts modulated emission. For reasons of space on the page, the interval between times t3 and t4 in FIG. 28F is drawn narrow; in reality, this interval is sufficiently wide. In this third exposure period τ, an image (video) is obtained in which the reflected image (video) at the emission wavelength of the light source section 2, irradiated with modulated emission, is superimposed on the color image (video) obtained from visible light. This reflected image (video) at the emission wavelength of the light source section 2 contains the "phase component information obtained from the measurement object 22". This "phase information obtained from the measurement object 22" is, however, buried within the "speckle noise of each of the pixels 1262 to 1268". Therefore, the first extracted information acquired in the second exposure period τ, namely the "speckle noise influence rate of each of the pixels 1262 to 1268 (rate of light amount variation caused by speckle noise)", is used to extract 1000 the "phase component information obtained from the measurement object 22" (the second information) buried in the speckle noise (FIG. 20A).

FIG. 28G shows the detection signals of each pixel obtained in the first and second exposure periods. FIGS. 28G(b) to (e) show the detection signals that the four pixels 1262 to 1268 forming one set obtain in the first exposure period τ. The red intensity b1, blue intensity b2, green intensity b3, and white intensity b4 in the color information collected in the first exposure period τ are obtained individually. The exposure period τ is also shifted by τ for each of the four pixels 1262 to 1268.

FIGS. 28G(f) to (i) show the detection signals that the four pixels 1262 to 1268 forming one set obtain in the second exposure period τ. Here too, the exposure period τ is shifted by τ for each of the four pixels 1262 to 1268. When there is no speckle noise, the reflected light amounts Δhi (1 ≤ i ≤ 4) of the emission wavelength light of the light source section 2, added to the respective color intensities bi (1 ≤ i ≤ 4) in the color information, take almost equal values. When the speckle noise is large, however, the reflected light amounts Δh1, Δh2, Δh3, and Δh4 added for the four pixels 1262 to 1268 differ greatly.

FIG. 28H shows the detection signals of each pixel obtained in the second and third exposure periods. FIGS. 28H(f) to (i) match FIGS. 28G(f) to (i). FIGS. 28H(j) to (m) show the detection signals that the four pixels 1262 to 1268 forming one set obtain in the third exposure period τ. Here too, the exposure period τ is shifted by τ for each of the four pixels 1262 to 1268. The values ΔL1, ΔL2, ΔL3, and ΔL4, in which the phase component information obtained from the measurement object 22 in response to the modulated emission of the light source section 2 is buried in speckle noise, are added to the respective color intensities bi (1 ≤ i ≤ 4) in the color information.

When the time difference t5 − t3 between the start time t3 of the second exposure period τ and the start time t5 of the third exposure period τ is sufficiently small, the "speckle noise influence rate of each of the pixels 1262 to 1268 (rate of light amount variation caused by speckle noise)" is regarded as almost identical between ΔLi and Δhi (1 ≤ i ≤ 4). The substantive phase component Ai, with the influence of speckle noise removed, can then be calculated as Ai = ΔLi/Δhi (1 ≤ i ≤ 4). In this way, the first extracted information obtained in the second exposure period τ (the speckle noise influence rate of each of the pixels 1262 to 1268, that is, the rate of light amount variation caused by speckle noise) is used to extract the second information (the substantive phase components Ai) mixed into the signals obtained in the third exposure period τ. With this method, the influence of the noise remaining after the optical noise reduction described in Chapters 2, 3, and 5 (such as residual speckle noise) can be removed further.
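A minimal sketch of this ratio-based speckle correction follows, reusing the four-phase demodulation of Section 8.3. The per-pixel speckle factors and bucket values are synthetic.

# Sketch of the speckle-ratio correction Ai = dLi / dhi described above.
import math

speckle = [1.30, 0.75, 1.10, 0.90]          # per-pixel speckle influence rate
true_buckets = [1.825, 1.565, 0.175, 0.435] # ideal A1..A4 for phi = 0.6 rad

dh = [s * 1.0 for s in speckle]             # 2nd exposure: constant emission
dL = [s * a for s, a in zip(speckle, true_buckets)]  # 3rd: modulated emission

A = [l / h for l, h in zip(dL, dh)]         # speckle factor cancels in ratio
phi = math.atan2(A[1] - A[3], A[0] - A[2])  # four-phase formula, Section 8.3
print(f"recovered phi = {phi:.3f} rad")     # ~0.600 despite the speckle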

Section 8.5 Hyperspectral Detection Method Using Irradiation Wavelength Control
FIG. 29A shows the basic cross-sectional structure of a linear variable bandpass filter 190. The thickness of the optical thin film 194 formed on a transparent substrate 192 varies with position. When panchromatic light (light containing mutually different wavelengths) 188 is made incident on this linear variable bandpass filter 190, it undergoes multiple reflection between the surface of the optical thin film 194 and its interface with the transparent substrate 192. As a result, the wavelengths of the transmitted lights 198-1 to 198-4 vary with position.
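Assuming, for illustration, that the passband center varies linearly along the filter, the position-to-wavelength mapping can be sketched as follows. The end wavelengths and filter length are assumptions, not patent values.

# Sketch of a linear variable bandpass filter's position-to-wavelength map.
def passband_center_nm(x_mm: float, length_mm: float = 60.0,
                       lambda_start: float = 800.0,
                       lambda_end: float = 1700.0) -> float:
    """Center wavelength transmitted at position x along the filter,
    assuming the film thickness (and hence passband) varies linearly."""
    return lambda_start + (lambda_end - lambda_start) * (x_mm / length_mm)

for x in (0.0, 15.0, 30.0, 45.0, 60.0):
    print(f"x = {x:4.1f} mm -> ~{passband_center_nm(x):.0f} nm")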

FIG. 29B shows an embodiment of a hyperspectral detection method configured by combining this linear variable bandpass filter 190 with the 3D color image sensor 128 having the structure of FIG. 26A. A linear variable bandpass filter moving mechanism 196 moves the linear variable bandpass filter 190 as time progresses. As a result, the wavelength of the light passing through the pinhole 310, placed at the focusing position of the condenser lens 330, changes with time.

The light that has passed through this pinhole 310 irradiates the measurement object 22 (not shown) via the imaging lens 144. Part of the light reflected from the measurement object 22 is reflected by the half mirror 536 and travels toward the 3D color image sensor 128. An imaging lens moving mechanism 540 moves the imaging lens 144 in the optical axis direction to align the position of the 3D color image sensor 128 with the imaging position of the measurement object 22.

The light source section 2 radiates panchromatic light and emits intermittently. The 3D color image sensor 128 collects a color image of the measurement object 22 during the non-emission period of the light source section 2. During the emission period of the light source section 2, the 3D color image sensor 128 also collects an image in which the reflected light image at the wavelength passing through the linear variable bandpass filter 190 is superimposed on the color image. The difference in light amount between the two images then forms the hyperspectral data cube signal.
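The following sketch shows how such a data cube could be assembled from lit/unlit image pairs as the filter position (and hence passband wavelength) is stepped. The array shapes and wavelength list are illustrative assumptions.

# Sketch of assembling a hyperspectral data cube from lit/unlit pairs.
import numpy as np

H, W = 4, 4                                  # tiny example frame size
wavelengths_nm = [900, 1100, 1300, 1500]     # assumed filter positions

rng = np.random.default_rng(0)
cube = np.empty((len(wavelengths_nm), H, W))
for k, wl in enumerate(wavelengths_nm):
    unlit = rng.uniform(0.2, 0.4, (H, W))        # color image, source off
    lit = unlit + rng.uniform(0.0, 0.6, (H, W))  # source on, band wl
    cube[k] = lit - unlit                        # band-wise reflectance signal

print(cube.shape)  # (bands, rows, cols): the hyperspectral data cube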

Using the embodiment of FIG. 29B, the near-infrared spectral characteristics of each pixel can be acquired at the same time as the color image. Alignment between the color image and the hyperspectral data cube image therefore becomes unnecessary, which has the effect of improving user convenience.

As the light emitting section 470 inside the light source section 2 that radiates panchromatic light, a thermal radiation filament such as a halogen lamp or mercury lamp may be used, for example. However, because the electrical response speed inside a thermal radiation filament is slow, high-speed intermittent emission is difficult.

As a light source section 2 structure capable of radiating panchromatic light with fast response, an improved form of the structure of FIG. 11A is described as this embodiment. The emission wavelength of the semiconductor laser element 500 used as the second light emitting source 170 is set to 1250 nm or less and used as the excitation light source for the near-infrared emitting phosphors 162 and 164. In this case, the placement of the first light emitting source 160 (an LED light source) inside the light emitting section 470 may be omitted. As the fluorescent substances 182 to 186 used in the near-infrared emitting phosphors 162 and 164, materials with a short fluorescence half-life are used (materials that fluoresce immediately upon excitation and stop fluorescing immediately after the excitation light irradiation stops). With appropriate material selection, a light emitting section 470 with a fast fluorescence response can be realized.

The method of extracting light of a predetermined wavelength from the panchromatic light radiated from the light source section 2 is not limited to the use of the linear variable bandpass filter 190. The predetermined wavelength light may be extracted from the panchromatic light by any method, for example using a general optical filter, a tunable-transmission-wavelength Fabry-Perot resonator, or mechanically tilting the spectroscopic element 320.

When the light source section 2 with the above structure is arranged as in FIG. 29B and operated in coordination with the 3D color image sensor 1280 by the methods described in Sections 8.1 to 8.4, three-dimensional measurement of the surface irregularities of the measurement object 22 becomes possible simultaneously with measurement of its near-infrared spectral characteristics. That is, by controlling the emission of the light source section 2 in conjunction with the control section 8 and controlling the wavelength of the light radiated from the light source section 2, the spectral characteristics of the measurement object 22 and the distance to the measurement object 22 can be measured at the same time. Furthermore, performing the above measurement for each pixel arranged two-dimensionally (or one-dimensionally) enables simultaneous measurement at a plurality of different positions on the measurement object 22. In addition, as described in Section 4.4, the light source section 2 or the measurement section 8 can be made compact. A very compact structure can therefore easily collect many kinds of information at once.

Chapter 9 Service Provision
Section 9.1 Examples of Input/Output Device Forms for Service Provision
FIG. 30A is an explanatory diagram of examples of input devices and output devices required to use a predetermined service providing domain. As an embodiment of a method of providing services to users, or of a service providing system for users, there is a method of constructing a predetermined service providing domain 1058 in cyberspace and having users use it.

For the user 1080 to use the predetermined service providing domain 1058 in that cyberspace, the end user 1080 needs the input device 1060 and the output device 1070 required to use that predetermined service providing domain 1058.

In a personal computer, a keyboard and mouse are mainly used as the input devices 1060. In a smartphone, the touch screen corresponds to the input device 1060. A touch screen offers greater functional diversity than a keyboard. In this embodiment, automatic image (video) input technology and automatic voice input technology may be used as input methods with even greater functional diversity, operability, and convenience than the touch screen. That is, as the device classification 1062 for the input device 1060 in this embodiment, an image (video) collection function and a voice information collection function are used. A microphone is an input device form 1064 that performs this voice information collection. As an input device form 1064 with an image (video) collection function, the 3D color camera 1280 described in Chapter 8, or a visible color camera that can simultaneously measure near-infrared spectral characteristics, can be used.

When a 3D color camera 1280, or a visible color camera that can simultaneously measure near-infrared spectral characteristics, is used as the input device 1060, the movements of the user's body can be used as an information input method (data usage purpose 1068). For example, the user's gestures or finger actions may be used in place of an existing mouse. In this case, a gesture interpretation function, a fingertip command interpretation function, and a virtual three-dimensionalization function may be provided inside the input device 1060 or inside the host 50 connected to the input device.

When a visible color camera that can simultaneously measure near-infrared spectral characteristics is used as the input device 1060, the information input can include analysis of the composition of the living body that appears in the near-infrared spectral characteristics. It may therefore be used for personal authentication based on information unique to the living body, such as vein authentication. Furthermore, biological activity can be detected automatically from temporal changes in the near-infrared spectral characteristics. For example, the feelings and emotions of the end user 1080 can be recognized automatically inside the host 50 from the contraction state of the facial expression muscles of the end user 1080. As described in Section 7.5, the composition of the blood of the end user 1080 may also be analyzed non-invasively or without contact from outside. For example, the user's physical condition can be input automatically using blood sugar level measurements, and the state of excitement of the end user 1080 can be input in real time from the adrenaline content of the blood.

Classifying the output devices 1070 by the functions they can realize (output device classification 1062) gives a stereoscopic display function, a shaping (fabrication) function, a voice information output function, a tactile stimulation device function, and so on. A function for restraining the movement of the end user 1080 may be included as part of this tactile stimulation function. For example, when the end user 1080 carries heavy luggage in the real world, the freedom of movement is restricted. This restriction on freedom of movement, experienced by an end user 1080 who has entered the predetermined service providing domain 1058 in cyberspace, may be implemented as part of this tactile stimulation device function.

As the output device form 1074 that realizes the above stereoscopic display function, a thin stationary display screen, or a portable display device such as VR (virtual reality) or AR (augmented reality) glasses, may be used. A 3D printer may be used as an output device with a shaping function. A speaker can be used to output voice information, and a skin surface pressurizer provides the tactile stimulation function.

FIGS. 30B and 30C show the input device 1060 and output device 1070 used in this embodiment. In FIG. 30B, a thin stationary display screen (a wall-mounted stereoscopic display or a stereoscopic display for computer use) 900 is used as an embodiment of the output device 1070 serving as the display section 18. One light source section 2 and a plurality of measurement sections 8 are arranged on part of this thin stationary display screen 900 or on part of its outside (the outer frame 902 of the wall-mounted stereoscopic display or computer stereoscopic display). This set of one light source section 2 and a plurality of measurement sections 8 corresponds to the input device 1060. FIG. 30B uses the embodiments of the light source section 2 and measurement section 8 described up to Chapter 8. In particular, arranging a plurality of measurement sections 8 at different positions makes it possible to collect stereoscopic images (stereoscopic video) from multiple angles.

On the surface of the thin stationary display screen (wall-mounted stereoscopic display or computer stereoscopic display) 900, lenticular lenses (fine cylindrical lenses) are arrayed along the horizontal direction. The measurement sections 8 measure the three-dimensional positions of the eyeballs of the end user 1080 in real time. Matching the eyeball positions of the end user 1080 (eye tracking), a right-eye image and a left-eye image for the end user 1080 are created individually for each pixel of the thin stationary display screen 900. The lenticular lenses control the directions in which the right-eye image and left-eye image travel toward the right and left eyes of the end user 1080. A virtual stereoscopic image is then displayed to the end user 1080 by exploiting the difference in convergence angle between images whose virtual positions in the depth direction, as captured by the right and left eyes, differ.

The virtual keyboard visible in the foreground of FIG. 30B is an example displayed by the above method. Three-dimensional measurement can be performed by the combined operation of the light source section 2 and the measurement sections 8, so the three-dimensional positions of each of the ten fingertips of the end user 1080 can be measured in real time. For example, when the end user 1080 places both hands over the stereoscopically displayed virtual keyboard and moves the ten fingers, virtual key-in operation becomes possible.

FIG. 30C shows another embodiment of the input device 1060 and output device 1070. As an embodiment of the output device 1070 corresponding to the display section 18, AR glasses 820 and 830 capable of stereoscopic display are used. These AR glasses 820 and 830 display a right-eye image (video) and a left-eye image (video) separately, and produce a stereoscopic display using the difference in convergence angle described above. VR glasses may be used instead of the AR glasses 820 and 830.

FIG. 30C(b) shows an example in which a virtual keyboard placed at a predetermined distance from the end user 1080 is displayed as a stereoscopic display screen (video). The end user 1080 looks at this virtual keyboard and keys into it while moving ten fingers. A brooch 810 with a built-in 3D color image sensor, fixed to the breast pocket of the end user 1080, has a light source section 2 and a measurement section 8; light radiated from the light source section 2 is reflected by the fingertips of the end user 1080. Using this reflected light from the fingertips, the measurement section 8 measures, in real time, a 3D color image (video) including the three-dimensional positions of the ten fingertips.

The end user 1080 carries a rucksack 850 containing a power supply section, the control section 50, and an external communication function. This rucksack 850 supplies power to the AR glasses 830 (the display section 18) via a connection cable 866. The rucksack 850 also generates the right-eye image (video) and left-eye image (video) and transmits them to the AR glasses 830 via the connection cable 866.

At the same time, this rucksack 850 supplies power to the brooch 810 with the built-in 3D color image sensor via the connection cable 866. The measurement section 8 transmits the three-dimensional images (three-dimensional video) it has collected to the rucksack 850 via a connection cable 876. The signal processing section 42 inside the rucksack 850 analyzes these three-dimensional images (video) and estimates the movements of the ten fingertips of the end user 1080. This estimation result is then used to determine the position within the virtual keyboard at which the end user 1080 keyed in. When three-dimensional measurement using the 3D color image sensor 1280 can be performed in real time in this way, a variety of information can be input into the predetermined service providing domain 1058 in cyberspace using the movements of the end user 1080. Since information can be input by finger action alone, the burden on the end user 1080 is also greatly reduced. Giving the input device 1060 a three-dimensional measurement function in this way has the effect of greatly improving convenience for the end user 1080.

In the embodiment of FIG. 30C(b), the rucksack 850 with the built-in control section 50, including the connection cables 866 and 876, the brooch 810 with the built-in 3D color camera (containing the light source section 2 and measurement section 8), and the AR glasses 830 corresponding to the display section 18 together constitute the optical apparatus 10 (FIG. 1). In the embodiment of FIG. 30C(a), the shoulder bag 840 with the built-in control section 50, including the connection cables 860 and 870, the pendant or necklace 800 with the built-in 3D color camera (containing the light source section 2 and measurement section 8), and the AR glasses 820 corresponding to the display section 18 together constitute the optical apparatus 10 (FIG. 1). The functions of the devices in FIG. 30C(a) and the data transferred between them match those of FIG. 30C(b). In FIG. 30C(a), the light source section 2 and the measurement section 8 are built into the pendant or necklace 800 with the built-in 3D color camera, which performs the function of the measurement section 8.

A virtual image 1400 in virtual space, such as the virtual keyboard displayed as if placed at a predetermined distance from the end user 1080, is defined (prescribed) within the predetermined service providing domain 1058 in cyberspace. This virtual image 1400 in virtual space has a three-dimensional structure whose position and shape change over time (that is, it is a virtual four-dimensional structure). In this embodiment, the virtual image 1400 in virtual space, defined (prescribed) within the predetermined service providing domain 1058 in cyberspace and displayed to the end user 1080, and a real object 1410 in the real world, such as a finger of the end user 1080, operate in coordination with high precision.

Realizing highly precise coordinated operation requires alignment in three dimensions between the virtual image 1400 in virtual space and the real object 1410 in the real world. As a concrete alignment method, for example, the position of the virtual image 1400 in virtual space may be matched, in three dimensions, to a position in the real world with the real object 1410 as the reference.

FIG. 30C(a) shows an example scene in which the end user 1080 performs the above three-dimensional alignment. Specifically, the position and display size of the virtual image 1400 in virtual space are matched with the left hand of the end user 1080 (the real object 1410) as the reference. The movement of the index finger of the right hand of the end user 1080 is used for this alignment.

FIG. 31A shows, superimposed, both the real object 1410 in the real world and the virtual image 1400 in virtual space as seen by the end user 1080 during the above alignment. The measurement section 8 of FIG. 30C(a) contains the 3D color image sensor 1280 and therefore images (photographs) the left hand of the end user 1080 in real time. The display screen inside the stereoscopically capable AR glasses 820 then shows this imaged (photographed) left hand of the end user 1080 as the virtual image 1400 in virtual space. For the same left hand of the end user 1080, FIG. 31A(a) shows an example in which the virtual image 1400 in virtual space is displayed larger than the real object 1410.

FIG. 31A(b) shows the right hand of the end user 1080, which exists as a real object 1410. For example, the gap between the thumb and index finger of the right hand is moved "quickly when narrowing the gap and slowly when widening it". The measurement section 8 of FIG. 30C(a) then images this movement, and the signal processing section 42 (FIG. 1) interprets it as an "instruction to reduce the size of the virtual image 1400 in virtual space".

To change the display position of the virtual image 1400 in virtual space,
A) the direction of movement may be prescribed by the orientation of the pad of the right index finger of the end user 1080 (the side opposite the nail), and
B) the difference in extension and flexion speed of the right index finger of the end user 1080 (the difference between the speed of bending and the speed of straightening the finger)
may be used. Here, the measurement section 8 of FIG. 30C(a) measures the front-to-rear distance to the real object 1410 (the left hand of the end user 1080) in real time. In FIG. 31A(a), the real object 1410 and the virtual image 1400 in virtual space, both relating to the same left hand of the end user 1080, are displayed superimposed. The front-to-rear distance of the virtual image 1400 in virtual space may therefore be matched automatically to the front-to-rear distance to the real object 1410 (the left hand of the end user 1080). When the alignment between the real object 1410 and the virtual image 1400 in virtual space is complete, the end user 1080 may display an "OK mark" with the fingertips of the right hand (the real object 1410), as in FIG. 31A(c). This "enter information" is then transferred to the predetermined service providing domain 1058 side in cyberspace.

 Instead of adjusting the display position and display size of the virtual image 1400 in virtual space by the method of FIG. 31A(b), the position and size of a virtual keyboard may be specified with both hands of the end user 1080 (real object 1410), as shown in FIG. 31A(d).

 FIG. 31A thus shows examples of generating input information to the predetermined service providing domain in cyberspace from the user's finger actions. The relationship between the user's finger movements or finger shapes and the input information is not limited to the above and may be set arbitrarily. Furthermore, information input to the predetermined service providing domain in cyberspace may be realized by any method that uses a real object 1410, not only finger movements but also, for example, eye movements, the shape and movement of the whole body (gestures), or the movements and expressions of the user's face.

 As described with reference to FIG. 30A, the input device 1060 for the predetermined service providing domain 1058 in cyberspace can use near-infrared light. The basic functions 1066 of this near-infrared input device 1060 include biometric measurement and emotion prediction. The data usage purposes 1068 for the data collected by this near-infrared input device 1060 include analysis of the composition of the living body and personal authentication.

 FIGS. 31B and 31C show examples of usage modes when near-infrared light is used for the input device 1060. FIG. 31B shows an example of the usage environment of the input device 1060 using near-infrared light. The predetermined light 230 emitted from the light source section 2 contains near-infrared light, and the measurement unit 8 measures the light reflected from the measurement object 22.

 FIG. 31C shows an enlarged view of the usage environment example shown in FIG. 31B. As explained in Section 4.1 with reference to FIG. 10A, light with a wavelength exceeding 1.3 μm is strongly absorbed by the water in a living body. Therefore, when the palm 23 is irradiated with light of a wavelength exceeding 1.3 μm, the blood vessel regions 600 stand out in the observed image. The pattern of these blood vessel regions 600 differs between individuals, so the differences in the patterns of the blood vessel regions 600 can be used for personal authentication.
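
 Because the vessel regions absorb more of this light than the surrounding tissue, they appear as dark ridges in the reflected-intensity image. The following is a minimal sketch of turning that contrast into an authentication template; the smoothing block size, threshold factor, and overlap score are illustrative assumptions, not the authentication method fixed by this embodiment.

```python
import numpy as np

def vein_template(nir_image: np.ndarray, block: int = 15) -> np.ndarray:
    """Binarize dark (strongly absorbing) vessel pixels in a NIR palm image."""
    img = nir_image.astype(float)
    pad = block // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    local_mean = np.empty_like(img)
    for y in range(h):                       # simple, unoptimized box filter
        for x in range(w):
            local_mean[y, x] = padded[y:y + block, x:x + block].mean()
    return img < 0.9 * local_mean            # darker than surroundings -> vessel

def match_score(template_a: np.ndarray, template_b: np.ndarray) -> float:
    """Jaccard overlap of two vessel templates (0..1); compare to a threshold."""
    overlap = np.logical_and(template_a, template_b).sum()
    union = np.logical_or(template_a, template_b).sum()
    return overlap / union if union else 0.0
```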

 Section 9.2 Example forms of service provision
FIG. 32 shows an embodiment of a method of providing a service to an end user 1080 using a time-manipulable service providing domain 1500. This time-manipulable service providing domain 1500 occupies part of the predetermined service providing domain 1058 in cyberspace described above. Use of this time-manipulable service providing domain 1500 may be charged at a daily rate. When the time-manipulable service providing domain 1500 is used across a day boundary, the charge for the following day becomes due at the moment the boundary is crossed. An Internet communication fee is required to enter the cyberspace 1058 through the input device 1060 and the output device 1070, and the daily usage fee for the time-manipulable service providing domain 1500 may be added to this Internet communication fee.

 A user 1450 in real space (end user 1080) performs an entry procedure 1506 before entering the time-manipulable service providing domain 1500. This entry procedure 1506 begins with personal authentication using the biometric authentication described in the previous section with reference to FIGS. 31B and 31C. Next, the user 1450 in real space (end user 1080) is asked whether the daily admission fee may be automatically debited. When the user 1450 in real space (end user 1080) approves the automatic debit of the daily admission fee, entry into the time-manipulable service providing domain 1500 is permitted. A user who has entered the time-manipulable service providing domain 1500 changes from a user 1450 in real space into a user in cyberspace; here, this user in cyberspace is called the end user 1080.

 At the entrance to the time-manipulable service providing domain 1500, a usage menu for the domain is displayed. The user 1450 in real space (end user 1080) makes the desired menu selection 1528, and a first channel progression 1530 then begins as time elapses.

 The end user 1080 within the time-manipulable service providing domain 1500 can move to a second channel progression 1540 via the menu selection 1528. Here, the display unit 18 may be divided into multiple screens, so that the first channel progression 1530 and the second channel progression 1540 are displayed simultaneously. Using this simultaneous multi-screen display function, the end user 1080 can act efficiently within the time-manipulable service providing domain 1500. Since the user 1450 has only one physical body in real space, multiple experiences cannot be had at the same time. With the service providing method shown in this embodiment, however, the end user 1080 in cyberspace can receive multiple experiences simultaneously. This simultaneous multi-screen display function therefore lets the end user 1080 receive services that cannot be obtained in real space.

 When the daily-rate billing method is adopted, the end user 1080 can enter and leave the time-manipulable service providing domain 1500 any number of times during the same day. When an urgent matter arises for the user 1450 in real space and the user temporarily leaves 1510 the time-manipulable service providing domain 1500, the end user 1080 returns to being the user 1450 in real space. Consider here the case where the end user 1080 temporarily leaves 1510 the time-manipulable service providing domain 1500 when the elapsed time 1498 is tR.

 When the user 1450 in real space finishes the work 1520 outside the service providing domain and re-enters 1512, the elapsed time 1498 has meanwhile advanced within the time-manipulable service providing domain 1500, so the user re-enters 1512 at a time 1498 considerably later than time tR.

 In this embodiment, at this re-entry 1512 the elapsed time within the time-manipulable service providing domain 1500 may be returned to time tR. Using the fast-forward playback function 1546, the end user 1080 can then catch up on the events that occurred within the time-manipulable service providing domain 1500 during the period of work 1520 outside the domain. In this way, the time-manipulable service providing domain 1500 shown in this embodiment can provide the end user 1080 with "services that transcend time and space".
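
 Expressed as data handling, this re-entry behaviour amounts to replaying the domain's event log from the departure time tR at an accelerated rate. The following is a minimal sketch under assumed names (Event and replay_from are hypothetical; the actual implementation of the domain 1500 is not specified in this embodiment).

```python
from dataclasses import dataclass

@dataclass
class Event:
    time: float      # elapsed time 1498 within the domain (seconds)
    payload: str     # what happened (message, scene change, ...)

def replay_from(log: list[Event], t_r: float, speed: float = 8.0):
    """Yield (playback_delay_sec, event) pairs for fast-forward playback 1546.

    Events from the departure time t_r onward are replayed 'speed' times
    faster, letting the returning end user 1080 catch up on missed events.
    """
    pending = [e for e in log if e.time >= t_r]
    for prev, cur in zip([None] + pending, pending):
        delay = 0.0 if prev is None else (cur.time - prev.time) / speed
        yield delay, cur
```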

 In particular, the input device 1060 used in this embodiment can perform high-precision distance measurement (measurement of surface unevenness characteristics) on a measurement object 22 that is sufficiently far away, for example 100 m or 1 km away. Using the zoom function of the display screen, the measurement object 22 located 1 km away can therefore be instantly displayed as if it were close at hand. Using such a zoom display function, the experience of "instantaneous movement" can be provided to the end user 1080 as a "service that transcends space".

 The input device 1060 shown in this embodiment can be made small and lightweight. A miniaturized, lightweight input device 1060 may therefore be built into, for example, a drone, which makes it possible to input images (video) captured from the sky. Using such aerial images (video), a "service that transcends gravity" can also be provided to the end user 1080. In other words, the time-manipulable service providing domain 1500 may provide the end user 1080 with the experience of flying through the sky or floating in the air.

 The time-manipulable service providing domain 1500 may charge an additional admission fee at the timing of the switch from the first day of admission 1502 to the second day of admission 1504. At this switching timing, the time-manipulable service providing domain 1500 displays an "inquiry about next-day admission fee payment" to the end user 1080. When the end user 1080 performs the next-day admission fee payment approval 1550, the service within the time-manipulable service providing domain 1500 continues.

 Section 9.3 Example data format of four-dimensional data obtained by measurement
This Section 9.3 describes, as an example, only the data format of the four-dimensional image (video) information obtained from the 3D color image sensor 1280. Audio information collected while the 3D color image sensor 1280 is imaging may use an existing audio recording format. Playback synchronization between this four-dimensional image (video) information and the audio information may use the time-related information 1702 in the data format described later.
FIG. 33A shows an example of the local coordinate axes as seen from the 3D color image sensor 1280. Here, the Zl coordinate axis is defined along the normal of the 3D color image sensor 1280, the Xl and Yl coordinate axes are defined along the arrangement directions of the imaging elements 1262 to 1268, and the imaging time is defined as Tl.

 In this embodiment, the position information input from the 3D color image sensor 1280 is defined as a position on four-dimensional coordinate axes, which add a time axis to the three-dimensional spatial position information. Expressing the measurement results in four-dimensional coordinates that include the time axis Tl in this way has the effect that a variety of information, including the movement and time-series shape changes of the measurement object 22 such as in video (moving images), can be managed.

 FIG. 33B shows an example of the data format of the four-dimensional data obtained from the surface of the measurement object 22. The 3D color image sensor 1280 collects, for each point on the surface of the measurement object 22, its four-dimensional coordinates and its RGBW intensities (the red, green, blue, and white intensities of the color information).

 Each point on the surface of the measurement object 22 is defined by a node 1600. Where the unevenness of the surface of the measurement object 22 is fine, the spacing between nodes 1600 is set narrow; where the unevenness is coarse, the spacing between nodes 1600 is set wide. The four-dimensional coordinate value of each node is defined on the coordinate axes of FIG. 33A. The representation of the surface shape of the measurement object 22 as this set of nodes 1600 is called a "mesh".

 The surface image of the measurement object 22 expressed in mesh form at a given time (with the value on the time axis fixed) is here called a "mesh frame". In this embodiment, multiple types of mesh frame are defined. In the "mesh frame I (image)" shown in FIG. 33B(a), the four-dimensional coordinates and RGBW intensities of all nodes 1600-1 to 1600-8 are managed. In the "mesh frame P (progress)" shown in FIG. 33B(b), on the other hand, only the information of the nodes 1600 whose information differs from that in mesh frame I, or only the difference values for those nodes 1600, is managed.

 For example, relative to FIG. 33B(a), FIG. 33B(b) shows a case where a positional shift has occurred only at node 1_1600-1 and node 4_1600-4. In this case, the management information of mesh frame P holds only the information of node 1_1600-1 and node 4_1600-4 (or the difference values from the information of node 1_1600-1 and node 4_1600-4 in mesh frame I). The difference between FIG. 33B(a) and FIG. 33B(b) is illustrated as a change in the positions of node 1_1600-1 and node 4_1600-4, but the format is not limited to this: a change in color intensity alone is likewise managed as information within mesh frame P. Reducing the information in mesh frame P relative to mesh frame I in this way has the effect of greatly reducing the management information of the four-dimensional video as it changes over time.
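
 This I/P distinction can be pictured as a per-node diff. The following minimal sketch builds a mesh frame P by keeping only the nodes whose coordinates or color changed relative to the reference mesh frame I, and reconstructs the full node set on playback; the names Node, diff_frame, and reconstruct are illustrative, not the recording format itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    number: int     # node number 1802 (serial across all mesh frames)
    xyzt: tuple     # 4D coordinates (Xl, Yl, Zl, Tl)
    wrgb: tuple     # white/red/green/blue intensities

def diff_frame(frame_i: dict, frame_now: dict) -> dict:
    """Mesh frame P: only the nodes that differ from the reference frame I."""
    return {n: node for n, node in frame_now.items() if frame_i.get(n) != node}

def reconstruct(frame_i: dict, frame_p: dict) -> dict:
    """Recover the full node set by overlaying mesh frame P onto mesh frame I."""
    full = dict(frame_i)
    full.update(frame_p)
    return full
```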

 FIG. 33C shows an example of the management information for the nodes 1600, defined in units of mesh frames 1610 and 1620. This example of management information presupposes the mesh structure of each mesh frame 1610, 1620 shown in FIG. 33B. The management information consists of mesh frame information 1700, managed per mesh frame 1610, 1620, and node information 1800, managed per node within the same mesh frame.

 For each mesh frame 1610, 1620, the time at which the 3D color image sensor 1280 performed the imaging is recorded as time-related information (measurement information, etc.) 1702 in the format "year/month/day/hour/minute/second/fractional second". This information may be used for playback synchronization with the audio information when this four-dimensional image (video) data is played back. A PTS (presentation time stamp) is used for synchronization when playing back general AV (audio and video) information; when displaying in synchronization with audio information for which a PTS is defined, this information may be converted into a PTS for synchronization.
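
 For reference, PTS values in the MPEG family count ticks of a 90 kHz system clock held in a 33-bit counter, so converting the recorded timestamp into a PTS reduces to scaling the offset from a chosen stream start time. A sketch under those assumptions (the choice of stream start time is simplified here):

```python
from datetime import datetime

PTS_CLOCK_HZ = 90_000    # MPEG system clock rate for presentation time stamps
PTS_WRAP = 1 << 33       # PTS is a 33-bit counter and wraps around

def timestamp_to_pts(capture: datetime, stream_start: datetime) -> int:
    """Map the mesh-frame capture time 1702 onto a 33-bit PTS value."""
    offset_sec = (capture - stream_start).total_seconds()
    return int(round(offset_sec * PTS_CLOCK_HZ)) % PTS_WRAP

# Example: a frame captured 2.5 s into the stream gets PTS 225000.
pts = timestamp_to_pts(datetime(2022, 8, 17, 12, 0, 2, 500000),
                       datetime(2022, 8, 17, 12, 0, 0))
```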

 In this embodiment, multiple types of mesh frame are defined. The frame type information field 1704 records identification information indicating, for example, whether the frame is an "I frame" or a "P frame".
For example, as explained in the previous section (Section 9.2) with reference to FIG. 32, when performing fast-forward playback 1546 this frame type information 1704 makes it possible to retrieve at high speed the node information 1800 of "I frames" (mesh frames I) only. The frame type information 1704 thus provides processing convenience for fast-forward playback 1546.

 In this embodiment, frame numbers 1706 are assigned to the multiple types of mesh frame. Using these frame numbers 1706 improves the convenience of temporal access processing, such as a time search to the elapsed time 1498 specified by the end user 1080.

 The reference frame field 1708 stores the frame number of the mesh frame that the frame in question refers to. Mesh frame I_1610 may be designated as this "referenced mesh frame". As specified in FIG. 33B, the node information 1800 in mesh frame P_1620 holds only difference information relative to mesh frame I_1610. Therefore, when the frame in question is mesh frame P_1620, all the node information 1800 can be obtained by combining the node information 1800 of that frame with the node information 1800 of the frame indicated by the reference frame field 1708.

 The difference time information 1710 with respect to the reference time indicates the difference between the time-related information 1702 of the frame in question and the time-related information 1702 of the mesh frame it refers to (mesh frame I_1610).

 The intensity bit count (dynamic range) 1714 represents the number of bits used to express the color information intensities 1812 to 1818 in the node information 1800. Increasing the value of this intensity bit count (dynamic range) 1714 allows the color information intensities 1812 to 1818 to be expressed in finer gradations, at the cost of a larger data size for this management information.

 The number of connected nodes 1720 represents the representation form of the mesh structure. In FIG. 33B, three nodes 1600 form a triangular basic cell, and node #1 1600-1 is connected to five nodes 1600: node #2 1600-2, node #4 1600-4, node #3 1600-3, node #6 1600-6, and node #5 1600-5. In the mesh structure representation of FIG. 33B, the number of connected nodes 1720 is therefore "5". However, the basic cell is not limited to this; it may, for example, be a quadrilateral, in which case the number of connected nodes 1720 takes a different value.

 The total node count 1720 indicates the number of nodes 1600 managed within the mesh frame 1610, 1620 in question. The data size 1718 indicates the data size of the node information 1800 for the nodes 1600 managed within that mesh frame 1610, 1620. If the data size of the node information 1800 for one node 1600 is P and the total node count is N, this data size 1718 is given by N × P.

 All nodes 1600 in all mesh frames 1610, 1620 are assigned serial node numbers. All nodes 1600 within the mesh frames 1610, 1620 are therefore identified and managed by this node number 1802, which is accordingly placed at the first position within the node information 1800. In this embodiment, moreover, only the nodes 1600 that have changed from mesh frame I_1610 are managed within mesh frame P_1620. Simply searching the node numbers 1802 of the nodes 1600 managed within mesh frame P_1620 therefore makes it easy to find which nodes 1600 have changed relative to mesh frame I_1610.

 In the mesh structure representation of FIG. 33B, node #1 1600-1 is connected to five nodes 1600: node #2 1600-2, node #4 1600-4, node #3 1600-3, node #6 1600-6, and node #5 1600-5. This connection relationship is recorded in the connected node numbers field 1804 for the node 1600 in question. Using this information makes it easy to analyze the detailed uneven shape of the surface of the measurement object 22.

 The X coordinate value 1806, Y coordinate value 1808, and Z coordinate value 1810 indicate the three-dimensional coordinate values of the node 1600 in question. The white intensity 1812, red intensity 1814, green intensity 1816, and blue intensity 1818 each indicate the color information of the light reflected from that node 1600.

 Section 9.4 Mapping processing
The virtual space within the time-manipulable service providing domain 1500 is in many cases managed with a four-dimensional map. Managing it with a four-dimensional map, which adds a time axis to the three spatial dimensions, makes it easier to search past history and to search for playback using the elapsed time 1498.

 As an example embodiment, consider the case where multiple end users 1080 hold a meeting within the virtual space of this time-manipulable service providing domain 1500. Assume that the meeting room in the virtual space of the time-manipulable service providing domain 1500 allows six people to attend, seated around a virtual desk in the center. In this meeting room in virtual space, common four-dimensional coordinate axes 1828 within the map are set in advance, as the embodiment of FIG. 34A shows. The Xc and Yc axes of these four-dimensional coordinate axes 1828 are aligned with the corner directions of the virtual desk, the Zc axis points upward in the meeting room in virtual space, and the meeting time is defined along the time axis Tc. A presentation screen 1822 is placed in this meeting room, and an illumination light (map lighting condition 1826) is placed at the lower right of FIG. 34A.

 FIG. 34B shows the processing steps leading to the generation of the right-eye image (video) and left-eye image (video) displayed on the output device 1070, using the four-dimensional map within the time-manipulable service providing domain 1500. Between the start (ST20) and the end (ST31) of the processing in FIG. 34B, everything from synthesizing the stereoscopic image (video) on the map to displaying the three-dimensional image (video) to the end user 1080 is performed. As in FIG. 33B, the management unit for each frame (one image within a video) is called a mesh frame, and from FIG. 34B onward the collection of mesh frames spanning multiple frames is collectively called four-dimensional mesh data.

 The flow of this series of processes is first outlined using FIG. 34B, after which the specific processing content is explained using FIGS. 34C to 34E. Because this series of processes is complex, it may be difficult to understand from the explanation of FIG. 34B alone; FIGS. 34C to 34E, described later, make it easier to grasp intuitively.

 Imaging with only one 3D color image sensor 1280 can capture the measurement object 22 from only one direction. However, when multiple measurement units 8 placed at mutually different positions are used simultaneously, as in FIG. 30B for example, the measurement object 22 can be imaged from multiple angles. When the same measurement object 22 is imaged from multiple directions in this way, the resulting multiple pieces of image (video) information are combined, as shown in step 21, to create four-dimensional mesh data for that measurement object 22.

 Next, the end user 1080 selects the map to be used within the time-manipulable service providing domain 1500 (ST22). In this example, the map selection means selecting the meeting room to be used. This map selection step (ST22) is not limited to meeting room selection, however, and includes any map selection used within the time-manipulable service providing domain 1500. For example, if a sightseeing trip within the time-manipulable service providing domain 1500 is selected, four-dimensional map information of the tourist destination may be selected.

 This map selection (ST22) also includes designating the place within the selected map that the end user 1080 wants to occupy (occupied location 1830).

 Common four-dimensional coordinate axes 1828 within the map are defined for each map the end user 1080 selects, while the 3D color image sensor 1280 that images the end user 1080 has its own coordinate axes, as explained with FIG. 33A. In step 24, therefore, a coordinate transformation is performed from the coordinate axes specific to the 3D color image sensor 1280 to the common four-dimensional coordinate axes 1828 within the map, in accordance with the occupied location 1830 designated by the end user 1080. In the immediately following step 25, the color intensities are adjusted to match the map lighting conditions 1826.

 When the coordinate transformation (ST24) and color intensity adjustment (ST25) have been completed for each individual end user 1080, the four-dimensional mesh data of the individual end users 1080 is composited onto the map as rendering processing (ST26). This rendering processing on the map allows multiple different end users 1080 to gather at a specific place within the time-manipulable service providing domain 1500.

 In the subsequent step 27, the viewpoint position used when displaying to each individual end user 1080 on the map is set. All the mesh data rendered on the map is then coordinate-transformed again to match this viewpoint position and viewpoint direction (ST28). Using all the mesh data after this coordinate transformation, step 29 generates right-eye and left-eye images (video) matched to the user's eyeball positions.

 The right-eye and left-eye images (video) generated in step 29 are displayed on the output device 1070, presenting the 4D image (video) to the end user 1080.

 FIG. 34C is an explanatory diagram of the coordinate transformation step from the four-dimensional coordinate axes 1838 of each location to the common four-dimensional coordinate axes within the map; it explains the processing of step 24 in FIG. 34B in detail.

 FIG. 34C shows an example state in which a specific end user 1080 is seated facing forward at the seat of occupied location #2_1830-2 in the virtual meeting room. Image processing (rendering processing) is needed so that other meeting attendees (other end users 1080) can see this specific end user 1080 seated facing forward.

 As the environment for imaging this specific end user 1080, consider the case where a three-dimensional color image sensor 1280 (measurement unit 8) installed facing the specific end user 1080 performs the imaging, as illustrated in FIG. 30B. On the local Zl coordinate axis preset for this three-dimensional color image sensor 1280, as FIG. 33A shows, the direction opposite to the direction this specific end user's 1080 face points is the "positive" direction. (In other words, the direction in which the face of this specific end user 1080 points is the "negative direction" of the local coordinate axis Zl specific to the end user 1080.)
As FIG. 34C shows, this "negative direction" of the local coordinate axis Zl specific to the end user 1080 coincides with the Yc direction of the common four-dimensional coordinate axes within the map. The coordinate axes of the four-dimensional mesh data obtained by imaging this specific end user 1080 therefore need to be transformed to match the common four-dimensional coordinate axes within the map.

 Similarly, when the end user 1080 seated at occupied location #4_1830-4 is imaged by the three-dimensional color image sensor 1280, the directions of the local coordinate axes Xl, Yl, Zl differ from the directions of the common four-dimensional coordinate axes Xc, Yc, Zc within the map.

 Thus, for each different occupied location 1830 that an end user 1080 selects within the same map, the directions of the local coordinate axes Xl, Yl, Zl in the four-dimensional mesh data collected by imaging that end user 1080 differ. A coordinate transformation matched to the occupied location 1830 selected by the end user 1080 is therefore performed. As a result, the reference coordinate axes of the four-dimensional mesh data collected by imaging all the end users 1080 are unified onto the common four-dimensional coordinate axes Xc, Yc, Zc within the map. Only once the reference coordinate axes of the four-dimensional mesh data have been unified onto the common four-dimensional coordinate axes Xc, Yc, Zc within the map does the rendering processing that composites the four-dimensional mesh data of all the meeting participants become possible.
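
 Concretely, this unification is a rigid transformation per occupied location: the local Xl, Yl, Zl axes are rotated into the map axes Xc, Yc, Zc and translated to the seat position. A minimal numpy sketch follows; the particular rotation and seat offset shown for occupied location #2 are illustrative values consistent with the axis relationship described above, not data from this embodiment.

```python
import numpy as np

def to_map_axes(nodes_xyz: np.ndarray, rotation: np.ndarray,
                seat_offset: np.ndarray) -> np.ndarray:
    """Transform Nx3 local node coordinates (Xl, Yl, Zl) to map axes (Xc, Yc, Zc).

    rotation:    3x3 matrix whose columns are the map-axis images of the
                 local basis vectors
    seat_offset: position of the occupied location 1830 in map coordinates
    """
    return nodes_xyz @ rotation.T + seat_offset

# Illustrative rotation for occupied location #2: the direction the user's
# face points (-Zl) must coincide with +Yc.
rot_location2 = np.array([[1.0, 0.0,  0.0],
                          [0.0, 0.0, -1.0],
                          [0.0, 1.0,  0.0]])  # columns: Xl->Xc, Yl->Zc, Zl->-Yc
seat2 = np.array([0.0, -1.2, 0.0])            # hypothetical seat position (m)
```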

 FIG. 34D is an explanatory diagram of the color intensity adjustment method performed within step 25 of FIG. 34B. Consider, for example, the case where the light emitting section 2 is placed to the left of a sphere. The left side of the sphere, close to the light emitting section 2, is illuminated by its emitted light and becomes bright, while the right side of the sphere, in shadow with respect to the light emitting section 2, becomes dark. At a brightly illuminated node 1600, the intensity of each color increases and approaches the color of the light emitted by the light emitting section 2, while the color intensities of the nodes 1600 in the dark, shadowed portion decrease. Adjusting the color intensities of the nodes 1600 in the four-dimensional mesh data to match the lighting conditions 1826 within the map (that is, the positions of external light and lights and the emission colors of the lights) has the effect of increasing the realism of the image (video) displayed to the end user 1080.
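
 The brighten/darken behaviour described here is essentially diffuse (Lambertian) shading of each node by the map's light source. A minimal sketch under that assumption follows; the embodiment does not fix a particular shading model, and the ambient term is an illustrative parameter.

```python
import numpy as np

def shade_nodes(positions: np.ndarray, normals: np.ndarray,
                base_rgb: np.ndarray, light_pos: np.ndarray,
                light_rgb: np.ndarray, ambient: float = 0.2) -> np.ndarray:
    """Adjust node color intensities for the map lighting conditions 1826.

    positions, normals: Nx3 arrays per node 1600 (normals unit length)
    base_rgb:           Nx3 measured color intensities
    Lambert's cosine law: brightness follows the cosine between the surface
    normal and the direction to the light; shadowed nodes keep ambient only.
    """
    to_light = light_pos - positions
    to_light /= np.linalg.norm(to_light, axis=1, keepdims=True)
    cos_term = np.clip((normals * to_light).sum(axis=1), 0.0, None)
    # Lit nodes shift towards the lamp color; shadowed nodes stay dim.
    return base_rgb * (ambient + cos_term[:, None] * light_rgb)
```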

 FIG. 34E is a detailed explanatory diagram of the on-map viewpoint position setting shown in step 27 of FIG. 34B. When a wearable display capable of stereoscopic display (VR/AR) is used as the output device 1070 for the end user 1080, the front-back position information of each displayed object becomes very important. The front-back arrangement of the stereoscopic display therefore changes depending on the user's display viewpoint position 1860 within the four-dimensional map.

 In this embodiment, the user's display viewpoint position 1860 and the direction the viewpoint faces can be set at any position within the time-manipulable service providing domain 1500. Providing this capability has the effect of dramatically improving the sense of presence of the images (video) provided to the end user 1080. For example, while watching a soccer match, it becomes possible to virtually run around with the soccer team members, or to watch the progress of the match from the referee's viewpoint.

 As shown in FIG. 34E, this embodiment defines the user's display viewpoint 1860 and four-dimensional coordinate axes 1868 at the display viewpoint aligned with the viewpoint direction. The depth direction (front-back direction) of the stereoscopic display to the end user 1080 is set as the Zd axis, and the lateral direction as seen by the end user 1080 is set as the Xd direction.

 In step 28 of FIG. 34B, all the four-dimensional mesh data constructed on the common four-dimensional coordinate axes within the map is then coordinate-transformed into four-dimensional mesh data on the four-dimensional coordinate axes 1868 at the display viewpoint. The stereoscopic image (video) subsequently generated (ST29 in FIG. 34B) is displayed to the end user 1080 (ST30 in FIG. 34B). At this point the end user 1080 may request a magnification (zoom) function for the displayed stereoscopic image (video). Transforming the data in advance into four-dimensional mesh data on the four-dimensional coordinate axes 1868 at the display viewpoint in this way has the effect of making it easier to respond quickly to the user's magnification (zoom) requests.
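
 Steps 28 to 30 can be summarized as re-expressing every map-axis point in the viewpoint axes 1868 and then projecting twice, with the two eye positions offset along Xd. A minimal sketch under assumed pinhole-projection parameters (ipd_m and focal_px are illustrative, not values from this embodiment):

```python
import numpy as np

def to_view_axes(points_c: np.ndarray, view_rot: np.ndarray,
                 view_pos: np.ndarray) -> np.ndarray:
    """Map-axis points (Xc, Yc, Zc) -> display-viewpoint axes 1868 (Xd, Yd, Zd)."""
    return (points_c - view_pos) @ view_rot.T

def stereo_project(points_d: np.ndarray, ipd_m: float = 0.064,
                   focal_px: float = 800.0) -> list:
    """Pinhole-project viewpoint-axis points for the left and right eyes.

    Each eye is shifted by half the interpupillary distance along Xd;
    dividing by the depth Zd is what produces the binocular disparity
    needed for stereoscopic display.
    """
    images = []
    for eye_dx in (-ipd_m / 2, +ipd_m / 2):
        shifted = points_d - np.array([eye_dx, 0.0, 0.0])
        z = shifted[:, 2:3]
        images.append(focal_px * shifted[:, :2] / z)
    return images  # [left_xy, right_xy] in pixel units
```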

 While an embodiment of the invention has been described, this embodiment is presented as an example and is not intended to limit the scope of the invention. This novel embodiment can be implemented in various other forms, and various omissions, substitutions, and changes can be made without departing from the gist of the invention. This embodiment and its modifications are included in the scope and gist of the invention, and are included in the inventions described in the claims and their equivalents.

2…Light source section, 4…Information transmission path, 6…Light propagation path, 8…Measurement unit, 10…Optical device,
12…Measurement device, 14…Service providing system, 16…External system, 18…Display unit,
20…Object, 22…Measurement object, 24…Optical control object,
26…Optical recording/reproduction medium, 28…Other light irradiation objects, 30…Light emission amount control unit,
32…Recording signal generation unit, 34…Information/signal conversion unit (including encryption/modulation processing),
40…Signal reception unit, 42…Signal processing unit, 44…Signal/information conversion unit (including decoding/demodulation processing),
50…In-system control unit, 52…Various non-optical sensors,
60…Application field (various optical application fields) adaptation unit, 62…Characteristic analysis/processing unit,
64…Manufacturing adaptation control/processing unit, 66…Monitoring control/management unit, 68…Treatment adaptation control/processing unit,
70…Medical/welfare-related examination processing unit, 72…Information providing unit, 74…Collected information storage unit,
76…Other various application adaptation units, 78…Long-axis direction, 80…Service providing method,
82…Predetermined light utilization method, 84…Optical measurement unit, 86…Predetermined light generation method, 88…Short-axis direction,
90…Predetermined optical member, 92…Entrance surface of the predetermined optical member, 94…Exit surface of the predetermined optical member,
96…Entrance-surface-side perpendicular, 98…Exit-surface-side perpendicular,
110…Waveguide element (optical fiber/optical waveguide/light guide), 112…Core region,
114…Cladding region, 116…Centroid position of the intensity distribution, 118…Light reflecting surface,
120…Collimating lens or cylindrical lens,
122…Macroscopic entrance surface of the predetermined optical member, 124…Position within the optical fiber cross section,
126…Perpendicular to the macroscopic entrance surface, 128…Equiphase surface, 130…Wedge prism,
132…Position within the core region, 134…Position on the condensing surface, 136…Light amplitude distribution,
138…Refractive index distribution, 140…Multi-segment Fresnel prism (predetermined optical member),
142…Fresnel lens or fly-eye lens, 144…Imaging lens,
146…Stray light component, 148…Multi-segment light reflecting element (Fresnel reflector), 152…Electric field distribution,
200…Initial light, 202…First light, 204, 208…Second light, 206…Third light,
210…Optical characteristics conversion element, 212…First region, 214…Second region,
216…Third region, 220…Light combining location, 222…First optical path, 224…Second optical path,
226…Third optical path, 230…Predetermined light, 240…Optical operation location,
300…Line sensor (one-dimensional detection cell array), 310…Pinhole,
312…Half-mirror plate, 314, 316…Folding mirror plates,
318…Collimating lens, 320…Spectroscopic element (reflective blazed grating),
322, 324…Fθ lens or collimating lens, 330…Condensing lens,
400…Initial wave train, 402…Phase asynchrony, 406…After wavefront division, 408…Post-division delay,
410…Light combining processing, 420…Light intensity averaging, 470…Light emitting section, 480…Optical characteristics changing unit.

Claims (11)

1. A predetermined light generation method, wherein, for first light and second light emitted from the same light emitting section and passing through a predetermined optical member,
an entrance-surface-side perpendicular orthogonal to an entrance surface of the predetermined optical member and an exit-surface-side perpendicular orthogonal to an exit surface of the predetermined optical member are defined,
a traveling direction of the first light has an inclination angle with respect to at least one of the entrance-surface-side perpendicular and the exit-surface-side perpendicular, and
a traveling direction of the second light is inclined with respect to the traveling direction of the first light, or an optical path of the second light within the predetermined optical member is made different from an optical path of the first light, so as to change optical characteristics between the first light and the second light.
2. An optical characteristics changing unit including a predetermined optical member, wherein
first light and second light emitted from the same light emitting section pass through the predetermined optical member,
an entrance-surface-side perpendicular orthogonal to an entrance surface of the predetermined optical member and an exit-surface-side perpendicular orthogonal to an exit surface of the predetermined optical member are defined,
a traveling direction of the first light has an inclination angle with respect to at least one of the entrance-surface-side perpendicular and the exit-surface-side perpendicular, and
a traveling direction of the second light is inclined with respect to the traveling direction of the first light, or an optical path of the second light within the predetermined optical member is made different from an optical path of the first light, so as to change optical characteristics between the first light and the second light.
3. A light source comprising a light emitting section and a predetermined optical member, wherein
first light and second light emitted from the same light emitting section pass through the predetermined optical member,
an entrance-surface-side perpendicular orthogonal to an entrance surface of the predetermined optical member and an exit-surface-side perpendicular orthogonal to an exit surface of the predetermined optical member are defined,
a traveling direction of the first light has an inclination angle with respect to at least one of the entrance-surface-side perpendicular and the exit-surface-side perpendicular, and
a traveling direction of the second light is inclined with respect to the traveling direction of the first light, or an optical path of the second light within the predetermined optical member is made different from an optical path of the first light, so as to change optical characteristics between the first light and the second light.
4. A predetermined light utilization method, wherein, for first light and second light emitted from the same light emitting section and passing through a predetermined optical member,
an entrance-surface-side perpendicular orthogonal to an entrance surface of the predetermined optical member is defined,
a traveling direction of the first light has an inclination angle with respect to the entrance-surface-side perpendicular,
a traveling direction of the second light is inclined with respect to the traveling direction of the first light, or an optical path of the second light within the predetermined optical member is made different from an optical path of the first light, so as to change optical characteristics between the first light and the second light, and thereafter
combined light obtained by combining the first light and the second light is utilized.
5. A detection method, wherein, for first light and second light emitted from the same light emitting section and passing through a predetermined optical member,
an entrance-surface-side perpendicular orthogonal to an entrance surface of the predetermined optical member is defined,
a traveling direction of the first light has an inclination angle with respect to the entrance-surface-side perpendicular,
a traveling direction of the second light is inclined with respect to the traveling direction of the first light, or an optical path of the second light within the predetermined optical member is made different from an optical path of the first light, so as to change optical characteristics between the first light and the second light, and thereafter
a measurement object is irradiated with combined light obtained by combining the first light and the second light, and
third light obtained from the measurement object is detected.
6. An imaging method, wherein, for first light and second light emitted from the same light emitting section and passing through a predetermined optical member,
an entrance-surface-side perpendicular orthogonal to an entrance surface of the predetermined optical member is defined,
a traveling direction of the first light has an inclination angle with respect to the entrance-surface-side perpendicular,
a traveling direction of the second light is inclined with respect to the traveling direction of the first light, or an optical path of the second light within the predetermined optical member is made different from an optical path of the first light, so as to change optical characteristics between the first light and the second light, and thereafter
a measurement object is irradiated with combined light obtained by combining the first light and the second light, and
imaging is performed using third light obtained from the measurement object.
7. A display method, wherein, for first light and second light emitted from the same light emitting section and passing through a predetermined optical member,
an entrance-surface-side perpendicular orthogonal to an entrance surface of the predetermined optical member is defined,
a traveling direction of the first light has an inclination angle with respect to the entrance-surface-side perpendicular,
a traveling direction of the second light is inclined with respect to the traveling direction of the first light, or an optical path of the second light within the predetermined optical member is made different from an optical path of the first light, so as to change optical characteristics between the first light and the second light, and thereafter
combined light obtained by combining the first light and the second light is used for display.
8. An optical measurement unit including a light emitting section and a predetermined optical member, wherein
first light and second light emitted from the same light emitting section pass through the predetermined optical member,
an entrance-surface-side perpendicular orthogonal to an entrance surface of the predetermined optical member is defined,
a traveling direction of the first light has an inclination angle with respect to the entrance-surface-side perpendicular,
a traveling direction of the second light is inclined with respect to the traveling direction of the first light, or an optical path of the second light within the predetermined optical member is made different from an optical path of the first light, so as to change optical characteristics between the first light and the second light, and thereafter
a measurement object is irradiated with combined light obtained by combining the first light and the second light, and
measurement of the measurement object is performed using third light obtained from the measurement object.
9. An optical apparatus comprising a light emitting section and a predetermined optical member, wherein
first light and second light emitted from the same light emitting section pass through the predetermined optical member,
an entrance-surface-side perpendicular orthogonal to an entrance surface of the predetermined optical member is defined,
a traveling direction of the first light has an inclination angle with respect to the entrance-surface-side perpendicular,
a traveling direction of the second light is inclined with respect to the traveling direction of the first light, or an optical path of the second light within the predetermined optical member is made different from an optical path of the first light, so as to change optical characteristics between the first light and the second light, and thereafter
combined light obtained by combining the first light and the second light is utilized.
A service providing method in which, for first light and second light emitted from the same light emitting section and passing through a predetermined optical member,
an entrance-surface-side normal perpendicular to the entrance surface of the predetermined optical member is defined,
the traveling direction of the first light is inclined at an angle to the entrance-surface-side normal,
the traveling direction of the second light is tilted with respect to the traveling direction of the first light, or the optical path of the second light within the predetermined optical member is made to differ from the optical path of the first light, thereby changing the optical characteristics between the first light and the second light, after which
a service is provided using composite light obtained by combining the first light and the second light.
A service providing system including an optical device, wherein
the optical device includes a light emitting section and a predetermined optical member,
first light and second light emitted from the same light emitting section pass through the predetermined optical member,
an entrance-surface-side normal perpendicular to the entrance surface of the predetermined optical member is defined,
the traveling direction of the first light is inclined at an angle to the entrance-surface-side normal,
the traveling direction of the second light is tilted with respect to the traveling direction of the first light, or the optical path of the second light within the predetermined optical member is made to differ from the optical path of the first light, thereby changing the optical characteristics between the first light and the second light, after which
a service is provided using composite light obtained by combining the first light and the second light.
PCT/JP2022/031122 2022-08-17 2022-08-17 Prescribed light generation method, optical characteristics modification unit, light source, prescribed light usage method, detection method, imaging method, display method, optical measurement unit, optical apparatus, service provision method, and service provision system WO2024038526A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2024541332A JPWO2024038526A5 (en) 2022-08-17 Optical device, service providing system, method for using specified light, method for providing service, and display method
PCT/JP2022/031122 WO2024038526A1 (en) 2022-08-17 2022-08-17 Prescribed light generation method, optical characteristics modification unit, light source, prescribed light usage method, detection method, imaging method, display method, optical measurement unit, optical apparatus, service provision method, and service provision system
US19/053,752 US20250193366A1 (en) 2022-08-17 2025-02-14 Three-dimensional projective transformation method, three-dimensional projective reverse transformation method, optical device, synthesized light applying method, measurement method, imaging method, signal processing method, data analysis method, three-dimensional display method, service providing method, measurement device, and service providing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/031122 WO2024038526A1 (en) 2022-08-17 2022-08-17 Prescribed light generation method, optical characteristics modification unit, light source, prescribed light usage method, detection method, imaging method, display method, optical measurement unit, optical apparatus, service provision method, and service provision system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US19/053,752 Continuation-In-Part US20250193366A1 (en) 2022-08-17 2025-02-14 Three-dimensional projective transformation method, three-dimensional projective reverse transformation method, optical device, synthesized light applying method, measurement method, imaging method, signal processing method, data analysis method, three-dimensional display method, service providing method, measurement device, and service providing system

Publications (1)

Publication Number Publication Date
WO2024038526A1 (en) 2024-02-22

Family

ID=89941556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/031122 WO2024038526A1 (en) 2022-08-17 2022-08-17 Prescribed light generation method, optical characteristics modification unit, light source, prescribed light usage method, detection method, imaging method, display method, optical measurement unit, optical apparatus, service provision method, and service provision system

Country Status (1)

Country Link
WO (1) WO2024038526A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01292821A (en) * 1988-05-20 1989-11-27 Nikon Corp Optical apparatus
JP2013533502A (en) * 2010-05-21 2013-08-22 コーニング インコーポレイテッド System and method for reducing speckle using diffusive surfaces
JP2014013833A (en) * 2012-07-04 2014-01-23 Japan Steel Works Ltd:The Laser beam reshaping device, laser beam reshaping method, laser processing device, and laser processing method
JP2019015709A (en) * 2017-07-07 2019-01-31 安東 秀夫 Light source unit, measurement apparatus, near-infrared microscopic apparatus, optical detection method, imaging method, calculation method, functional bio-related substance, state management method, and manufacturing method

Also Published As

Publication number Publication date
JPWO2024038526A1 (en) 2024-02-22

Similar Documents

Publication Publication Date Title
US11828946B2 (en) Systems and methods for retinal imaging and tracking
US12306409B2 (en) Diffractive optical elements with optical power
US20250155722A1 (en) Augmented reality display with waveguide configured to capture images of eye and/or environment
US11778149B2 (en) Headware with computer and optical element for use therewith and systems utilizing same
US11269182B2 (en) Content presentation in head worn computing
CN107250882A (en) Power management for head-worn computing
JP6467832B2 (en) Mobile information gateway for use in emergency situations or with special equipment
US9366862B2 (en) System and method for delivering content to a group of see-through near eye display eyepieces
US9229227B2 (en) See-through near-eye display glasses with a light transmissive wedge shaped illumination system
US8477425B2 (en) See-through near-eye display glasses including a partially reflective, partially transmitting optical element
US20160015470A1 (en) Content presentation in head worn computing
CN105446474B Wearable smart device, interaction method thereof, and wearable smart device system
CN109754256A (en) Model, device, system, methods and applications based on code chain
US20160085071A1 (en) See-through computer display systems
US20120249797A1 (en) Head-worn adaptive display
US20120242678A1 (en) See-through near-eye display glasses including an auto-brightness control for the display brightness based on the brightness in the environment
CN107533642A Equipment, method and system for biometric user identification using a neural network
JP2015062118A (en) Mobile information gateway for home healthcare
JP2015062119A (en) Mobile information gateway for medical personnel
CN106662685A Waveguide eye tracking employing volume Bragg grating
CN106170729A Method and apparatus for a head-mounted display with multiple exit pupils
KR20140066258A (en) Video display modification based on sensor input for a see-through near-to-eye display
US20230341263A1 (en) Synthesized light generating method, synthesized light applying method, and optical measuring method
CN111123525A (en) Holographic intelligent display device integrated with pupil tracking function and implementation method
WO2024038526A1 (en) Prescribed light generation method, optical characteristics modification unit, light source, prescribed light usage method, detection method, imaging method, display method, optical measurement unit, optical apparatus, service provision method, and service provision system

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22955700

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2024541332

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE